You might capture value from that relative to broad equities if the world ends up both severely deflationary due to falling costs and one where current publicly traded companies are mostly unable to compete in the new context.
Yeah, but assuming your p(doom) isn't really high, this needs to be balanced against the chance that AI goes well, and your kid has a really, really, really good life.
I don't expect my daughter to ever have a job, but think that in more than half of worlds that seem possible to me right now, she has a very satisfying life -- one that is better than it would be otherwise in part because she never has a job.
I'd note that ACOUP's model of fires primacy making defence untenable between high-tech nations, while not completely disproven by the Ukraine war, is a hypothesis that seems much less likely to be true, or at least less true, than it did in early 2022. The Ukraine war has in most cases shown a strong advantage for a prepared defender, and the difficulty of taking urban environments.
The current Israel-Hamas war shows a similar tendency, where Israel is moving very slowly into the core urban concentrations (i.e. it has surrounded Gaza City so far, but not really entered it), even though its superiority in resources relative to its opponent is vastly greater than Russia's advantage over Ukraine was.
I'd expect per capita war deaths to have nothing to do with the offence/defence balance as such (unless the defence gets so strong that wars simply don't happen, in which case it goes to zero).
Per capita war deaths in this context are about the ability of states to mobilize populations, and about how much damage the warfare does to the civilian population that the battle occurs over. I don't think there is any uncomplicated connection between that and something like 'how much bigger does your army need to be for you to be able to successfully win against a defender who has had time to get ready'.
This matches my sense of how a lot of people seem to have... noticed that GPT-4 is fairly well aligned to what the OpenAI team wants it to be, in ways that Yudkowsky et al said would be very hard, and still don't view this as, at a minimum, a positive sign?
I.e. problems of the class 'I told the intelligence to get my mother out of the burning building and it blew her up so the dead body flew out the window, because I wasn't actually specific enough' just don't seem like a major worry anymore?
Usually when GPT-4 doesn't understand what I'm asking, I wouldn't be surprised if a human was confused also.
Weirdly, reading the last paragraphs here made me rather nostalgic for the two or three weeks I spent doing practice SAT tests. I think this is because my childhood definitely was not optimized for getting into a good university (I was homeschooled, and ended up transferring to Berkeley on the strength of perfect grades over two years at a community college).
I mean, it kind of does fine at arithmetic?
I just gave GPT-3.5 three random x-plus-y questions, and it managed one that I didn't want to bother doing in my head.
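For concreteness, here's a minimal sketch of the kind of spot check I mean, assuming the current openai Python client; the model name and prompt are illustrative, not the exact ones I used:

```python
# Minimal sketch: ask a model a few random addition problems and compare
# against the true sums. Assumes the openai Python client (v1+) is installed
# and OPENAI_API_KEY is set; the model name is illustrative.
import random
from openai import OpenAI

client = OpenAI()

for _ in range(3):
    x, y = random.randint(100, 9999), random.randint(100, 9999)
    reply = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user",
                   "content": f"What is {x} + {y}? Reply with just the number."}],
    )
    answer = reply.choices[0].message.content.strip()
    print(f"{x} + {y} = {x + y}, model said: {answer}")
```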
I think the issue is that creating an incentive system where people are rewarded for being good at an artificial game that has very little connection to their real world circumstances isn't going to tell us anything very interesting about how rational people are in the real world, under their real constraints.
I have a friend who for a while was very enthused about calibration training, and at one point he even got a group of us from the local meetup, plus Phil Hazeldon, to do a group exercise using a program he wrote to score our calibration on numeric questions drawn from Wikipedia. The thing is that while I learned from this to be way less confident about my guesses (which improves rationality), it is actually, for the reasons specified, useless to create 90% confidence intervals when making important real world decisions.
Should I try training for a new career? The true 90% confidence interval on any difficult to pursue idea that I am seriously considering almost certainly includes 'you won't succeed, and the time you spend will be a complete waste' and 'you'll do really well, and it will seem like an awesome decision in retrospect'.
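For what it's worth, the scoring in that kind of exercise is simple; here is a minimal sketch (the intervals and answers below are hypothetical, not my friend's actual program or questions):

```python
# Minimal sketch of scoring 90%-interval calibration. Each entry is a stated
# 90% confidence interval plus the true value; a well-calibrated person's
# intervals should contain the truth about 90% of the time. Data is made up.
guesses = [
    # (low, high, true_value)
    (1_000, 5_000, 3_200),
    (50, 200, 340),
    (10_000, 80_000, 25_000),
    (1, 10, 7),
    (100, 400, 90),
]

hits = sum(low <= truth <= high for low, high, truth in guesses)
print(f"Hit rate: {hits}/{len(guesses)} = {hits / len(guesses):.0%} (target: 90%)")
```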
If you think P(doom) is 1, you probably don't believe that terrorist bombing of anything will do enough damage to be useful. That is probably one of EY's cruxes on violence.
You don't become generally viewed by society as a defector when you file a lawsuit. Private violence defines you in that way, and thus marks you as an enemy of ethical cooperators, which is unlikely to be a good long term strategy.
Yeah, but I read somewhere that loneliness kills. So actually risking being murdered by grass is safer, because you'll be less lonely.
I think we agree though.
Making decisions based on tiny probabilities is generally a bad approach. Also, there is no option that is actually safe.
You are right that I have no idea whether near complete isolation has a higher life expectancy than being normally social, and the claim would need to compare them for it to make logical sense in that way.
I think the claim does still make sense if interpreted as 'whether it is positive or negative on net, deciding to be completely isolated has way bigger consequences, even in terms of direct mortality risk, than taking the covid vaccine' - and thus avoiding the vaccine should not be seen as a major advantage of being isolated.
My experience is that it is like having extra in laws, who you may or may not like, but have to sort of get along with occasionally.
I don't think most people actually talk very much with their in laws, or assume that people who the in law dislikes should be disliked.
What I meant is that it is possible the things that von Neumann discovered were easier to discover than anything that is still undiscovered, so new von Neumanns won't be as impressive.
"this is something that the data has to actually exist for since several percent of US children have been homeschooled for the last several decades."
Never mind. There aren't particularly good studies. But what exists seems to say that homeschooled students do much better than the average for all students, but maybe somewhat worse than the average for students with their parents' SES backgrounds.
But the data mostly comes from non-random samples, so it is hard to generate firm conclusions.
So this is based on my memory of homeschooling propaganda articles that I saw as a kid. But I'm pretty sure the data they had there showed most kids went to college. In my family three of us got University of California degrees, and the one who only got a nursing degree in his thirties authentically enjoyed manual labor jobs until he decided he also wanted more money.
Perhaps these numbers do stop at college, and so we don't see in them children who get a good college education, but then fail in some important way later on in life, but I've never gotten an impression from anywhere that homeschooled children have generally worse life outcomes -- anyways, this is something that the data has to actually exist for since several percent of US children have been homeschooled for the last several decades.
I did have substantial social problems, even as an adult, and they have led me to be less successful in career terms than I probably would have been with stronger social skills. But this might be driven by a selection effect: The reason my parents actually started homeschooling me was because I was being bullied and having severe social problems in third grade.
oops, that was supposed to be something like 'low hanging fruit', I'm pretty sure it was a typo.
I recently looked through the Wikipedia list of the thirty richest Americans, and then tried to dig back into their class background (or the class background of the founder of the family fortune for heirs, like the Walton family). In almost every single case where I could identify the class background, they were from a top-couple-of-percent background, but in only a few cases were they from an old money background. So a lot of the founders of big fortunes have backgrounds like 'father was a lawyer/stockbroker/grocery store owner/dentist/college professor/middle manager'.
One interesting feature here was that there were several Russian immigrants or children of immigrants on the list (usually they moved to the US before they were teenagers, and usually they were Jewish). In these cases I found that I generally have no idea what class status is implied by the descriptions of their parents' work in the Soviet Union. But I sort of suspect it usually was still top couple of percent.
I then looked at the European numbers, which were an interesting contrast in that:
A) A lot of the European super fortunes start with people who were rich as far back as Wikipedia tracks it. I.e. the founder of the company got his money from his father, a rich textile factory owner (who doesn't have a Wikipedia article), in the late nineteenth century.
B) Weirdly, there were also more actual rags-to-riches stories among the European superrich. The Zara founder is the one that stuck in my head. He seems to have definitely been from a household in the lower two thirds of the income distribution, and possibly even a genuinely poor family, in early Franco-era Spain. There were several other stories that felt very much like 'person with a totally normal family background somehow builds a giant fortune', which again seemed not to happen in the US listings.
I probably should make a post based on this at some point.
Or von Neumann and his contemporaries and predecessors stole all the insights that someone with merely von Neumann's intellect could develop independently, leaving future geniuses to have to be part of collaborative teams?
What specifically do you think is really high variance, as opposed to the main downside being that it is expensive? If it is the not-going-to-school thing: at least when I was growing up as a religiously homeschooled kid in the 90s, the strong impression I got was that homeschooled kids systematically did better than other kids in terms of college success and other legible metrics. Of course this has a gargantuan selection bias going on, but it does give a strong lower bound for how bad that specifically can be for kids.
The other stuff I recall from the article (i.e. being from a high resource background, having an intellectual mentor, being surrounded by intellectual conversations, getting one-on-one tutoring, good intrinsic capability) all seem to be things that either you can't choose whether a child has, or that would be weird if they left the child worse off.
One-on-one tutoring, for example, just doesn't seem like a high variance thing; it seems like a positive expected value thing that might not actually be that causally important or have that big of an impact, but that will only make things worse in exceptional cases.
Prime age labor force participation rate is the standard measure the econobloggers I've followed (most notably Krugman and Brad DeLong, who are part of the community also pushing for this interpretation of monetary policy) tend to use to measure economic health, and there are reasons to see it as pointing most closely to what we actually care about (that and hourly productivity, which isn't in these charts).
This makes me think it is more likely that there is some problem specifically with EA that is driving this. Or maybe something wrong with the sorts of people drawn to EA? I've burned out several times while following a career that is definitely not embedded in an EA organization. But it seems more likely there is something going on there.
Perhaps the key question is what does research on burnout in general say, and are there things about the EA case that don't match that?
Also, to what extent is burnout specifically a problem, vs people from different places bouncing and moving on to different social groups (either within a year or two, or after a long relationship)?
One part of this issue: The answer to the question is literally unknowable with our current scientific tools (though as we develop better models for simulating biology and culture this might change). We can't run experiments that are not contaminated by culture/biology.
What is left is observational evidence.
Proving causality with observational evidence usually doesn't work. This is especially the case with an issue like this with only a moderate effect size (a one SD effect on test scores is tiny compared to the impact of smoking on lung cancer, or stomach sleeping on SIDS), and where both factors are always present and connected.
What is left is reasoning from priors.
Personally I think HBD is unlikely because the observed outcome differences are exactly the sort of thing the known cultural forces would create even if there was no genetic difference, so the existence of these outcome differences does not serve as additional evidence of genetic differences. This means that while it is totally possible there could be major intelligence differences between groups, I don't have any particular reason to think they actually exist.
But this argument is simply not a robust or rigorous proof. I give it around a 1/100 chance of being wrong, while things that I actually know, like the name of the president, have a far, far smaller chance of being wrong.
Yeah, except it is bad to be forced to do things you don't want to.
Hahaha
That's actually a good idea. I just had my first, who is 7 weeks old right now, so I should probably start making some up for her in a year or so.
Actually, I think someone is trying to make EA themed children's books. I saw an example cover for one from a friend, but I have no idea if this was just a cover, or an actual project.
And Mother of Learning is likely to be better -- but with fewer EA-themed philosophical arguments and streams of thought.
So I'm just reasoning off the general existence of a really strong demographic transition effect, where richer populations, which are among other things way, way less likely to die in childbirth, have way fewer children than poor populations.
The impression I get, without having looked into this very deeply, is that the two most common models for what is going on are a female education effect, which correlates with wealth and thus lower mortality but where the lower mortality is not directly causing people to have fewer children, and a certainty-of-surviving-children effect, where once child mortality is low enough, there isn't a perceived need to have lots of births to ensure having some kids who survive to adulthood.
I'm sure there are other theories, and I don't know the literature trying to disentangle, from observational studies and 'natural experiments', exactly which component of the changes involved in becoming a rich industrialized society causes birthrates to collapse.
The basic point though is that whatever the causal story, empirically you will find an extremely strong association between low childhood mortality rates and low birth rates. This is why people who are concerned with overpopulation generally see reducing childhood death rates as a good thing from an overpopulation perspective: there is a good chance that it is causal for fewer people being born, and in the historical record it definitely doesn't seem to drive rapid population growth.
Having said that, when I was more interested in demographics ten years ago, I got the impression that Africa was seen as transitioning slower than Asia, Europe, Latin America or the Middle East had.
It wouldn't. First, the time it takes for population changes to happen is very slow compared to the business cycles that drive adaptations to economic changes. Second, eliminating malaria is considerably more likely to reduce population growth than to increase it.
While I think there are cases where condensing world details is better writing, I think in general that is more of a style preference than actually good or bad. Some people like jargon-heavy fantasy/sci-fi, and I'm one of them.
But the second point, that I should pay more attention to how what the character notices says about him, is completely right, and shifting that around more is probably a strong way to improve the viewpoint.
This seems to be a consistent (and not really surprising) point of criticism. I'll soon try rewriting the first chapter somewhat to see if I can make a version that works better. Though I suspect that the book is inevitably going to have somewhat of a preachy feeling, in part simply because I'm not as good a writer as EY.
I was just checking if you might have introspective knowledge about how you'd respond to that :P Also I think I may have been trying to demonstrate that I am in fact paying attention to and thinking about the criticisms -- the important thing is in fact that X didn't work for you (and didn't work for several other people in the same way). Isn't there some saying about product development that when the customer tells you that it isn't working, they are right, but when they tell you how to fix it, they usually don't know what they are talking about?
The too-preachy feeling definitely is something to soften and try fiddling with.
Yeah, you are right.
I added the prologue when an earlier version of the first chapter had a much weaker opening couple of sentences, but the first sentences here really don't need that extra intro.
But 'tone down the preachiness' seems to be the general advice. I think I went too far in trying to make sure that certain ideas were clearly covered.
"It sort of feels like the author is a perfect EA machine who exists only to maximize total utility. I'm not getting much in the way of feelings or emotions from him."
Do you think you'd find him more relatable and emotional if I strongly emphasized how he is afraid of dying again?
Though maybe trying to bring out points of joy might work better, but that could also make him seem more like what you are talking about.
I don't think that is relevant to this project.
I'm not trying to have a fictional world provide evidence that EA is true. I'm trying to write a basic intro to EA essay that people who wouldn't read an 'EA 101 post' will read because it is embedded in the text of a novel that they are reading because I got them to care about what happens to the characters and how the story problems get resolved.
Also, I do think works of fiction can definitely be places to create extended thought experiments that are philosophically useful. I mean, something like The Ones Who Walk Away from Omelas is a perfectly good expression and explanation of a view about the problems with utilitarianism. I don't like it, because I bite the bullet involved and because I think vaguely pointing in a direction and saying 'there has to be a better solution' isn't actually pointing at a solution. But the problem with it as a piece of philosophical evidence is not that it is fiction, any more than the problem with every single trolley problem ever is that it is a work of fiction.
I'm definitely not saying/ assuming that you are wrong on this point (most likely you are right for some readers, and wrong for some others), but part of my theory of how to write the character comes in part from Harry in MoR who definitely begins as being extremely who he is.
A priori I don't see any reason to think that a textbooky novelization of a set of philosophical ideas will be worse if I have the MC start with that set of beliefs than if I have him develop them over time. I went through four different outline concepts while planning out this novel, and this resonated more strongly with me than the ones that were more focused on the MC being the one who gradually turns into an EA.
I guess the question should be settled by testing how people respond to the character and opening on average (but the sample probably shouldn't be random people; it should be fantasy readers who are inclined to be interested in EA in the first place).
Perhaps I could run some sort of Mechanical Turk or similar survey of a hundred or so people and ask questions about whether they find this preachy, etc.
Or does anyone know good subreddits to post this to, to see if people who are not part of the community react negatively to it as overly preachy?
"Having the protagonist get all these magic powers and saying that what he was most excited about was being able to help more people isn't something the reader is likely to connect to."
Do you mean that you didn't connect to this, or that you are guessing that EA-naive or semi-naive readers won't connect? In the latter case, I think that falls very much under the heading of a theory that should be tested, and if this novel doesn't work, the next author ought to try a different approach (and if this novel does work, the next novel ought to have a different approach anyway, because fiction is anti-inductive).
Regarding sterility: perhaps that feeling points to something that can be improved in a straightforward way. Can you try to draw out what you mean by it in more detail? That might spark some creative thought I can use.
That sounds very weird to me and surprising. I have been actively self publishing for seven years, and I've never heard anything about that. It might be some weird specific contract with Amazon.
The general problem that does come up is there are benefits to having an exclusive contract with Amazon, where only a ten percent sample can be posted elsewhere, but I'm not planning to go that route as it would probably limit the audience more than it would expand it.
Perhaps I'm misunderstanding the notion of the anthropic shadow, but it seems like whether it implies anthropic fragility depends strongly on the gears level explanation of what is causing the anthropic shadow.
For example, a tank might have survived a dozen battles where statistically it ought to have been hit and destroyed lots of times, but where it got lucky and was missed each time. In this case the selection effect does not make us think that the tank will perform any differently from another tank in the next battle.
So the question is whether we have a plane with holes, that can fly, but can't land, or if we have a tank that got really lucky, and is currently fine.
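To make the distinction concrete, here is a toy simulation with entirely made-up probabilities: in the 'plane' model survival is evidence about a persistent property, so conditioning on past survival changes the forecast; in the 'tank' model it is pure luck, so it doesn't.

```python
# Toy simulation of the tank-vs-plane distinction, with made-up numbers.
# "Plane" model: half the units are robust (survive each round with p=0.9),
# half fragile (p=0.3), so surviving many rounds is evidence of robustness.
# "Tank" model: every unit survives each round with the same p=0.5, so past
# survival is pure luck and predicts nothing about the next round.
import random

random.seed(0)
N, ROUNDS = 100_000, 5

def survivors_next_round_rate(per_unit_p):
    """Among units surviving ROUNDS rounds, the fraction surviving one more."""
    survivors = [p for p in per_unit_p
                 if all(random.random() < p for _ in range(ROUNDS))]
    return sum(random.random() < p for p in survivors) / len(survivors)

plane_p = [0.9 if random.random() < 0.5 else 0.3 for _ in range(N)]
tank_p = [0.5] * N

# Plane model: ~0.90 (vs a 0.60 unconditional baseline); tank model: ~0.50.
print(f"Plane survivors' next-round survival: {survivors_next_round_rate(plane_p):.2f}")
print(f"Tank survivors' next-round survival:  {survivors_next_round_rate(tank_p):.2f}")
```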
Having said that, it still seems plausible to me that I should view the eight degree temperature rise in the fossil record as less reassuring than I generally have due to this sort of argument.
Note: I am aware that this might be already addressed by your text, and I would see that if I closely reread it.
My objection is that we only have around 10^21 or so of observed improbability of intelligent civilizations arising per planet to burn off due to the Fermi paradox, while a strong anthropic shadow implies the odds against us reaching this position are vastly worse than that. If you think that abiogenesis is incredibly unlikely, that reduces the pressure to think that there were lots of potential catastrophes that could have wiped out life on Earth.
It's pretty bad if you are tall and the legroom is cramped.
A point that isn't necessarily connected to anything in this specific post: I finally got Covid this week. Possibly because my responses to the covid vaccines after the first one were pretty mild, I definitely would have preferred getting a fourth shot to being sick in this way for the last week (also I'm missing the Less Wrong Europe Community Weekend, which is a big bummer).
The link is the post where I recently encountered the idea.
My response is 'the argument from the existence of new self-made billionaires'.
There are giant holes in our collective understanding of the world, and giant opportunities. There are things that everyone misses until someone doesn't.
A thing much smarter than human beings is simply going to be able to see things that we don't notice. That is what it means for it to be smarter than us.
Given how high dimensional the universe is, it would be really weird in my view if none of the things that something way smarter than us can notice point to highly certain pathways for gaining enough power to wipe out humanity.
I mean, sure, this is a handwavy, thought-experiment-level counterargument. And I can't really think of any concrete physical evidence that might convince me otherwise. But despite the weakness of this thought experiment evidence, I'd be shocked if I ever viewed it as highly unlikely (i.e. less than one percent, or even less than ten percent) that a much-smarter-than-human AI would be able to kill us.
And remember: To worry, we don't need to prove that it can, just that it might.
She doesn't want to permanently destroy his life because her definition of rape is focused on consent violations, not on 'vicious crime that must be punished severely'.
I've come to believe that she actually doesn't see rape or sexual assault as automatically a serious crime deserving of serious punishment, but that makes communicating with people whose English got wired up differently in childhood difficult on both sides.
Almost certainly what he means is: restrictive zoning leads to small amounts of new housing, which leads to high rents, which according to this essay we just read, leads to high homelessness.
My personal logic here I think is the same as Zvi's: I know at least ten or fifteen people fairly well who have had Covid, in at least one case twice I think, and only one of them seems to have had significant long term fatigue (and that was from a bad untested case in April 2020; he is highly sensitive to health concerns -- that is to say, I think he is a hypochondriac, but he probably doesn't think he is one -- and his fatigue mostly went away after more than a year).
If there was a really high chance of healthy people having bad fatigue/brain fog from each mild case of Covid, everyone's anecdata would look different.
Hurrah!
I could be wrong, but my impression is that Yudkowsky's main argument right now isn't about the technical difficulty of a slow program creating something aligned, but mainly about the problem of coordinating so that nobody cuts corners while trying to get there first (of course he has to believe that alignment is really hard, and that it is very likely for things that look aligned to be unaligned, for this to be scary).
The question of how to enforce power sharing and protection of minority rights is obviously one of the core 'in principle this can be really bad' issues.
In the specific case of Iraq, though, I wonder how much of it going wrong was that the US decision makers wanted a simple majoritarian system, rather than doing something like having a second house that would be elected along communitarian lines, in which the Sunnis would have an effective veto over future policies.
Was something like that being considered at the time, and rejected, not considered, or just trusted in the context?
I feel like the rally is one of those things that isn't news in the important sense, in that it happening provides no new information. It is the sort of thing that I would expect to still see even if 95% of the Russian population actually desperately wants out of Ukraine and hates the war, and also the sort of thing I'd expect to see if 95% of the population enthusiastically supports the war.
Though on considering this, I also realized that I don't think the peace protests, and the arrests associated with them, provide us any information either: even if only five percent of the population seriously dislikes the war while the rest is supportive, you'd expect these protests, and vice versa.
Though if most of the population were strongly opposed to the war, the protests would probably have gotten a lot bigger as a focal point. So the peace protests staying small are weak evidence that the regime is fairly strong. I think.
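This is just the standard likelihood-ratio point; here is a tiny check with made-up numbers (all the probabilities are assumptions for illustration):

```python
# Tiny Bayes check of the "rally is not news" intuition, with made-up numbers.
# If a state-organized rally is about equally likely whether the population
# supports or hates the war, observing it barely moves the posterior.
prior_support = 0.5
p_rally_if_support = 0.99  # assumed: the regime stages a rally either way
p_rally_if_oppose = 0.95

posterior = (p_rally_if_support * prior_support) / (
    p_rally_if_support * prior_support
    + p_rally_if_oppose * (1 - prior_support)
)
print(f"P(support | rally) = {posterior:.3f}")  # ~0.510, i.e. almost no update
```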
That feels like a real and substantive response to me, i.e. the amount of feedback that would go to a response that feels intelligent and in the same ballpark, but not promising (in the view of the researcher replying). I don't think the reply, and the non-reply to your follow-up, should be taken as a signal of anything.
I would note that I know absolutely nothing about the technical aspects of the matter. I am rather thinking of your attitude as similar to stories of people in the EA space who apply to two jobs, go through several rounds of interviews, and then decide that not getting the job in the end means they were a terrible candidate.
I mean if your risk was zero, and you don't care about the downsides of having a risk of zero, go for it. Though I suspect there are mortality risks in being that isolated that are on the order of 1/30,000 a year too.
Also, if you want to get a different vaccine, go for it. My wife's boyfriend got his first two as Sinopharm; just get an extra shot or two, and the inactivated vaccines are probably as strong as the mRNA ones, with a more established technology.
Also, if you are socially interacting with people in closed spaces more than maybe once a month, I suspect your odds of getting exposed to some form of covid sooner or later are still probably close to 1.