I think that "crux" is doing a lot of work here, in that it forces the conversation to be about something more specific than the main topic, and it makes it harder to move the goalposts partway through the conversation. If you're not talking about a crux, then you can write off a consideration as "not really the main thing" after talking about it.
What's the minimum set of powers (besides the ability to kick a user off the site) that would make being a Moderator non-frustrating? One-off feature requests as part of a "restart LW" focus seem easier than trying to guarantee tech support responsiveness.
"Strong LW diaspora writers" is a small enough group that it should be straightforward to ask them what they think about all of this.
Yes. This meetup is at the citadel.
My impression is that the OP says that history is valuable and deep without needing to go back as far as the big bang -- that there's a lot of insight in connecting the threads of different regional histories in order to gain an understanding of how human society works, without needing to go back even further.
The second way, and the one most often already implemented, is to jump outside the system and change the game to a non-doomed one. If people can't share the commons without defecting, why not portion it up into private property? Or institute government regulations? Or iterate the game to favor tit-for-tat strategies? Each of these changes has costs, but if the wage of the current game is 'doom,' each player has an incentive to change the game.
This is cooperation. The hard part is jumping out and getting the other person to change games with you, not whether or not better games to play exist.
Moloch has discovered reciprocal altruism since iterated prisoner's dilemmas are a pretty common feature of the environment, but because Moloch creates adaptation-executors rather than utility maximizers, we fail to cooperate across social, spatial, and temporal distance, even if the payoff matrix stays the same.
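(To make the tit-for-tat point concrete, here's a minimal sketch in Python -- the payoff numbers are standard illustrative ones I'm supplying, not anything from the post:)

```python
# Minimal iterated prisoner's dilemma sketch (illustrative payoffs, not from the post).
# 'C' = cooperate, 'D' = defect.
PAYOFFS = {  # (my move, their move) -> my payoff
    ('C', 'C'): 3, ('C', 'D'): 0,
    ('D', 'C'): 5, ('D', 'D'): 1,
}

def tit_for_tat(history):
    """Cooperate first, then copy the opponent's last move."""
    return 'C' if not history else history[-1][1]

def always_defect(history):
    return 'D'

def play(strategy_a, strategy_b, rounds=100):
    history_a, history_b = [], []  # each entry: (my move, their move)
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(history_a)
        move_b = strategy_b(history_b)
        score_a += PAYOFFS[(move_a, move_b)]
        score_b += PAYOFFS[(move_b, move_a)]
        history_a.append((move_a, move_b))
        history_b.append((move_b, move_a))
    return score_a, score_b

# Two tit-for-tat players settle into mutual cooperation (~300 each),
# while two always-defectors lock in mutual defection (~100 each).
print(play(tit_for_tat, tit_for_tat))
print(play(always_defect, always_defect))
```

Two tit-for-tat players do far better than two defectors, but only because the game is iterated and each can "see" the other's last move -- which is exactly what breaks down across social, spatial, and temporal distance.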
Even if you have an incentive to switch, you need to notice the incentive before it can get you to change your mind. Since many switches require all the players to cooperate and switch at the same time, it's unlikely that groups will accidentally start playing the better game.
Convincing people that the other game is indeed better is hard when evaluating incentives is difficult. Add too much complexity and it's easy to imagine that you're hiding something. This is hard to get past, since doing so requires trust in a context where we may be correct to distrust people -- e.g. if only lawyers know enough law to write contracts, they should probably add loopholes that lawyers can find, or at least make contracts complicated enough that only lawyers can understand them, so that you need to keep hiring lawyers in order to use your contracts. In fact, contracts generally are complicated, full of loopholes, and basically require lawyers to deal with.
Also, most people don't know about Nash equilibria, economics, game theory, etc., and it would be nice to be able to do things in a world with sub-utopian levels of understanding incentives. Also, trying to explain game theory to people as a substep of getting them to switch to another game runs into the same kind of justified mistrust as the lawyer example -- if they don't know game theory and you're saying that game theory says you're right, and evaluating arguments is costly and noisy, and they don't trust you at the start of the interaction, it's reasonable to distrust you even after the explanation, and not switch games.
I tend to think of downvoting as a mechanism to signal and filter low-quality content rather than as a mechanism to 'spend karma' on some goal or another. It seems that mass downvoting doesn't really fit the goal of filtering content -- it just lets you know that someone is either trolling LW in general, or just really doesn't like someone in a way that they aren't articulating in a PM or response to a comment/article.
That just means that the sanity waterline isn't high enough that casinos have no customers -- it could be the case that there used to be lots of people who went to casinos, and the waterline has been rising, and now there are fewer people who do.
I have the same, though it seems to be stronger when the finger is right in front of my nose. It always stops if the finger touches me.
Hobbes uses a similar argument in Leviathan -- people are inclined towards not starting fights unless threatened, but if people feel threatened they will start fights. But people disagree about what is and isn't threatening, and so (Hobbes argues) there needs to be a fixed set of definitions that all of society uses in order to avoid conflict.
See the point about why it's weird to think that new affluent populations will work more on x-risk if current affluent populations don't do so at a particularly high rate.
Also, it's easier to move specific people to a country than it is to raise the standard of living of entire countries. If you're doing raising-living-standards as an x-risk strategy, are you sure you shouldn't be spending money on locating people interested in x-risk instead?
My guess is that Eli is referring to the fact that the EA community seems to largely donate wherever GiveWell says to donate, and that a lot of the discourse centers on a system of trying to figure out all of the effects of a particular intervention, weighing it against all other factors, and then coming up with a plan of what to do. Said plan is incredibly sensitive to your being right about the prioritization, the facts of the situation, etc., in a way that will cause you to predictably do worse than you could, due to factors like a lack of on-the-ground feedback suggesting other important areas, misunderstanding people's values, errors in reasoning, and a lack of diversity in attempts to do something, so that if one part fails nothing gets accomplished.
I tend to think that global health is relatively non-controversial as a broad goal (nobody wants malaria! like, actually nobody) that doesn't suffer from the "we're figuring out what other people value" problem as much as other things, but I also think that that's almost certainly not the most important thing for people to be dealing with now to the exclusion of all else, and lots of people in the EA community seem to hold similar views.
I also think that GiveWell is much better at handling that type of issue than people in the EA community are, but that the community (at least the Facebook group) is somewhat slow to catch up.
It seems that "donate to a guide dog charity" and "buy me a guide dog" are pretty different w/r/t the extent that it's motivated cognition. EAs are still allowed to do expensive things for themselves, or even as for support in doing so.
It seems easier to evaluate "is trying to be relevant" than "has XYZ important long-term consequence". For instance, investing in asteroid detection may not be the most important long-term thing, but it's at least plausibly related to x-risk (and would be confusing for it to be actively harmful), whereas third-world health has confusing long-term repercussions, but is definitely not directly related to x-risk.
Even if third world health is important to x-risk through secondary effects, it still seems that any effect on x-risk it has will necessarily be mediated through some object-level x-risk intervention. It doesn't matter what started the chain of events that leads to decreased asteroid risk, but it has to go through some relatively small family of interventions that deal with it on an object level.
Insofar as current society isn't involved in object-level x-risk interventions, it seems weird to think that bringing third-world living standards closer to our own will lead to more involvement in x-risk intervention without some sort of more widespread availability of object-level x-risk interventions.
(Not that I care particularly much about asteroids, but it's a particularly easy example to think about.)
Social feedback is an incentive, and the bigger the community gets the more social feedback is possible.
Insofar as Utilitarianism is weird, negative social feedback is a major reason to avoid acting on it, and so early EAs must have been very strongly motivated to implement utilitarianism in order to overcome it. As the community gets bigger, it is less weird and there is more positive support, and so it's less of a social feedback hit.
This is partially good, because it makes it easier to "get into" trying to implement utilitarianism, but it's also bad because it means that newer EAs need to care about utilitarianism relatively less.
It seems that saying that incentives don't matter as long as you remove social-approval-seeking ignores the question of why the remaining incentives would actually push people towards actually trying.
It's also unclear what's left of the incentives holding the community together after you remove the social incentives. Yes, talking to each other probably does make it easier to implement utilitarian goals, but at the same time it seems that the accomplishment of utilitarian goals is not in itself a sufficiently powerful incentive, otherwise there wouldn't be effectiveness problems to begin with. If it were, then EAs would just be incentivized to effectively pursue utilitarian goals.
My guess is just that the original reason was that there were societal hierarchies pretty much everywhere in the past, and they wanted some way to have nobles/high-status people join the army and be obviously distinguished from the general population, and to make it impossible to be demoted far down enough so as to be on the same level. Armies without the officer/non-officer distinction just didn't get any buy-in from the ruling class, and so they wouldn't exist.
I think there's also a pretty large difference in training -- becoming an officer isn't just about skills in war, but also involves socialization to the officer culture, through the different War Colleges and whatnot.
You would want your noticing that something is bad to indicate, in some way, how to make the thing better. You want to know what in particular is bad and can be fixed, rather than the less informative "everything". If your classifier triggers on everything, it tells you less on average about any given thing.
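(Toy illustration of that last claim, with made-up numbers: a detector that flags everything has zero mutual information with whether the thing is actually bad, while an imperfect-but-selective one carries some.)

```python
import math

def mutual_information(joint):
    """Mutual information (in bits) of a joint distribution given as {(x, y): p}."""
    px, py = {}, {}
    for (x, y), p in joint.items():
        px[x] = px.get(x, 0) + p
        py[y] = py.get(y, 0) + p
    return sum(p * math.log2(p / (px[x] * py[y]))
               for (x, y), p in joint.items() if p > 0)

# Suppose 20% of things are actually bad (made-up numbers).
# A classifier that flags everything tells you nothing:
always_fires = {('flag', 'bad'): 0.2, ('flag', 'ok'): 0.8}
# A classifier that flags bad things 90% of the time and ok things 10% of the time:
selective = {('flag', 'bad'): 0.18, ('no-flag', 'bad'): 0.02,
             ('flag', 'ok'): 0.08, ('no-flag', 'ok'): 0.72}

print(mutual_information(always_fires))  # 0.0 bits
print(mutual_information(selective))     # ~0.36 bits
```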
My personal experience (going to Harvard, talking to students and admissions counselors) suggests that at least one of the following is true:
1. Teacher recommendations and the essays that you submit to colleges are also important in admissions, and are the main channel through which human capital not particularly captured by grades, as well as personal development, gets signaled.
2. There are some particularly known-to-be-good schools that colleges disproportionately admit students from, and for slightly different reasons than they admit students from other schools.
I basically completely ignored signalling while in high school, and often prioritized taking more interesting non-AP classes over AP classes, and focused on a couple of extracurricular relationships rather than diversifying and taking many. My grades and standardized test scores also suffered as a result of my investment in my robotics team.
All I can say is that I don't understand why intelligence is relevant for whether you care about suffering.
Intelligence is relevant for the extent to which I expect alleviating suffering to have secondary positive effects. Since I expect most of the value of suffering alleviation to come through secondary effects on the far future, I care much more about human suffering than animal suffering.
As far as I can tell, animal suffering and human suffering are comparably important from a utility-function standpoint, but the difference in EV between alleviating human and animal suffering is huge -- the difference in potential impact on the future between a suffering human vs a non-suffering human is massive compared to that between a suffering animal and a non-suffering animal.
Basically, it seems like alleviating one human's suffering has more potential to help the far future than alleviating one animal's suffering. A human who might otherwise be too incapacitated to, say, deal with x-risk might become helpful, while an animal is still not going to be consequential on that front.
So my opinion winds up being something like "We should help the animals, but not now, or even soon, because other issues are more important and more pressing".
Political instrumental rationality would be about figuring out and taking the political actions that would cause particular goals to happen. Most of this turns out to be telling people compelling things that you know and they happen not to, and convincing different groups that their interests align (or can align in a particular interest) when it's not obvious that they do.
Political actions are based on appeals to identity, group membership, group bounding, group interests, individual interests, and different political ideas in order to get people to shift allegiances and take action toward a particular goal.
For any given individual, the relative importance of these factors will vary. For questions of identity and affiliation, they will weigh those factors based on meaning being reinforced and on memory-related stuff (i.e. clear memories of meaningful experiences count, but so does not-particularly-meaningful stuff that happens every day). For actual action, it will be based on various psychological factors, as well as on options simply being available and salient while they have the opportunity to act in a way that reinforces their affiliations/meaning/standing with others in the group/personal interests.
As a result, political instrumental rationality is going to be incredibly contingent on local circumstances -- who talks to who, who believes what how strongly, who's reliable, who controls what, who wants what, who hears about what, etc.
A more object level example takes place in The Wire, when a pastor is setting up various public service programs in an area where drug dealing is effectively legalized.
The pastor himself is able to appeal to his community on the basis of religious solidarity in order to get money, and so he can fund some stuff. For Christian reasons, he cares about public health and about the fate of the now-unemployed would-be drug runners who are no longer necessary for drug dealing (since drugs are effectively legal there, the gang members don't bother with the various steps that ensure none of them can be photographed handing someone drugs for money -- normally the dealer takes the money, then a runner (typically a child) goes to the stash to give the buyer the drugs). Further, he knows people from various community/political events in Baltimore.
So far, so good. He controls some resources (money), has a goal (public health, child development), and knows some people.
One of the first people he talks to is a doctor who has been trying to do STD prevention for a while, but hasn't had the funding or organizational capacity to do much of anything. The pastor points out to him that there are a lot of at-risk people who are now concentrated in a particular location so that the logistics of getting services to people is much simpler. In this case, the pastor simply had information (through his connections) that the doctor didn't, and got the doctor to cooperate by pointing out the opportunity to do something that the doctor had wanted.
He gets the support of the district police chief (who had decided to selectively enforce the drug laws) by appealing to the chief's desire to improve the district under his command -- the chief was initially trying to shift drug trafficking away from more populated areas and to decrease violence by decreasing competition over territory -- and it more or less worked.
That being said, I have more or less no idea what kinds of large-scale political action ought to be possible/is desirable.
I totally have the intuition though that step one of any plan is to become personally acquainted with people who have some sort of influence over the areas that you're interested in, or to build influence by getting people who have some control over what you're interested in to pay more attention to you. Borderline, if you can't name names, and can't point at groups of people involved in the action, then you can't do anything particularly useful politically.
This distinction is just flying/not-flying.
Offense has an advantage over defense in that defense needs to guard against every offensive strategy the attacker might use, while offense only needs one undefended plan in order to succeed.
I suspect that not-flying is a pretty big advantage, even relative to offense/defense. At the very least, moving underground (and doing hydroponics or something for food) makes drones no more offensively useful than missiles. A non-flying system can also put more energy and matter into whatever it's doing than a flying one can, which allows for more exotic sensing and destructive capabilities.
Almost certainly, but the point that stationary counter-drones wouldn't necessarily be in a symmetric situation to counter-counter-drones holds. Just swap in a different attack/defense method.
I think that if you used an EMP as a stationary counter-drone you would have an advantage over drones in that most drones need some sort of power/control in order to keep on flying, and so counter-drones would be less portable, but more durable than drones.
From off site:
Energy and Focus are more scarce than Time (at least for me); Be Specific (somewhat on site, but whatever).
From on the site:
Mind Projection Fallacy, Illusion of Transparency, Trivial Inconveniences, Goals vs. Roles, Goals vs. Urges
Fair, but at least some component of this working in practice seems to be a status issue. Once we're talking about awesomeness and importance, and the representativeness of a person's awesomeness and the importance of what they're working on, and how different people evaluate importance and awesomeness, it seems decently likely that status will come into play.
Good point, I did summarize a bit fast.
There are two issues at hand: one is asserting that you're doing something that's high status within your community, and the other is asserting that your community's goals are more important (and higher status) than the goals of the listener's community.
If there's a large inferential distance in justifying your claims of importance, but the importance is clear, then it's difficult to distinguish you from say, cranks and conspiracy theorists.
(The dialogues are fairly unrealistic, but they're trying to gesture at the pattern.)
A within culture issue:
"I do rocket surgery"
"I'm working on hard Brain Science problem X"
"Doesn't Charlie work on X?"
"Yeah."
"Are you working with Charlie on X?"
"No."
"Isn't Charlie really smart though?"
"Yep."
"Are you saying that you're really smart too?"
"No."
"Why bother?"
Between cultures:
"I do Rocket Surgery".
"That's pretty cool. I'm trying to destroy the One Ring".
"Huh?"
"Basically, I'm trying to destroy the power source for the dark forces that threaten everything anyone holds dear".
"Shouldn't Rocket Brain Surgery Science be able to solve that"?
"No. that's a fundamentally flawed approach on this problem -- the One Ring doesn't have a brain, and you carry it around. If you look at --"
"So you're looking for a MacGuffin?"
"No."
I entirely agree with this point, but suspect that actually following this advice would make people uncomfortable.
Since different occupations/goals have some amount of status associated with them (nonprofits, skilled trades, professions) many people seem to take statements about what you're working on to be status claims in addition to their denotational content.
As a result, working on something "outside of your league" will often sound to a person like you're claiming more status than they would necessarily give you.
Textbooks replace each other on clarity of explanation as well as adherence to modern standards of notation and concepts.
Maybe just cite the version of an experiment that explains it the best? Replications have a natural advantage because you can write them later when more of the details and relationships are worked out.
If I were in London, or even within an hour or two of it, I would try to go to this.
"May your plans come to fruition"
I used to say that more when leaving megameetups or going on a trip or something. It has the disadvantage that you can't say it very fast.
I also want a word/phrase that expresses sympathy but isn't "sorry".
Entirely agreed. Even if you more often than not get the same answers from fMRI and surveys, the fMRI externalizes the judgment of whether or not someone is empathizing, or in an emotional or cognitive state, with regard to something else.
One might argue that we probably have a decent understanding of how well people's verbal statements line up with different facts, but where this diverges from the neurological reality is interesting enough to be worth spending money on the chance of finding the discrepancies. If we don't find them, that's also fascinating, and worth knowing about.
Even taking for granted that what people say about themselves is accurate, externalized measurement is also worthwhile for its own sake.
I think it would probably be worth going into a bit more about what delineates tacit rationality from tacit knowledge. Rationality seems to me to apply to things that you can reflect about, and so the concept of things that you can reflect about but can't necessarily articulate seems weird.
For instance, at first it wasn't clear to me that working at a startup would give you any rationality-related skills except insofar as it gives you instrumental rationality skills, which could possibly just be explained as better tacit knowledge -- you know a bajillion more things about the actual details necessary to run a business and make things happen.
There are actually a ton of potential non-tacit-knowledge powerups from running a startup, though! Ones that probably even engage reflection!
For instance, a person could learn what it feels like when they're about to be too tired to work for the rest of the day, and learn to stop before then so that they could avoid burnout. This would be a reflective skill (noticing a particular sensation of tiredness), and yet it would be nigh impossible to articulate (can you describe what it feels like to almost be unable to work well enough that I can detect it in myself?).
When evaluating the relationship between success and rationality, it seems worth keeping survivorship bias in mind.
An interesting case is that Will Smith seems likely to be explicitly rational in a way that other people in entertainment don't talk about -- he'll plan and reflect on various movie-related strategies so that he can get progressively better roles and box office receipts.
For instance, before he started acting in movies, he and his agent thought about what top-grossing movies all had in common, and then he focused on getting roles in those kinds of movies.
http://www.time.com/time/magazine/article/0,9171,1689234,00.html
Marginal effort within the bounds of a consulting agency offering a service "tailored" to each school district.
I think the hard part of refitting the model would probably just be getting access to the data -- beyond that it seems like a statistician or programmer would be able to just tell a computer how to minimize some appropriate cost function.
Something like: most of the marginal effort is devoted to gathering the data, which presumably doesn't require that much expertise relative to understanding the model in the first place.
Maybe slightly vary the parameters to make the model "new"? Like, fit it to data from that district, and it will probably be slightly different from "other" models.
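(A sketch of how mechanical the per-district refit could be, assuming the data is already in hand -- the feature names, outcome, and logistic-regression form here are my own stand-ins, not whatever model the vendor actually uses:)

```python
# Hypothetical sketch: refit a simple predictive model to one district's data.
# Column names and the model form are assumptions, not the actual product's.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))               # stand-in features: attendance, grades, test scores
y = (rng.random(500) < 0.3).astype(float)   # stand-in outcome: e.g. "needs intervention"

def cost(w):
    """Logistic-regression negative log-likelihood -- 'some appropriate cost function'."""
    logits = X @ w[:-1] + w[-1]
    p = 1 / (1 + np.exp(-logits))
    eps = 1e-9
    return -np.mean(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))

district_weights = minimize(cost, np.zeros(4)).x
print(district_weights)  # per-district parameters; refitting elsewhere is just new X, y
```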
Has anyone published data on the effectiveness of Bayesian prediction models as an educational intervention? It seems like that would be very helpful in terms of being able to convince school districts to give them a shot.
We've relocated to Sever 105.
Same. I'd be interested in trying this for a bit starting after mid-May.
It's somewhat tricky to separate "actions which might change my utility function" from "actions". Gandhi might not want the murder pill, but should he eat eggs? They have cholesterol that can be metabolized into testosterone which can influence aggression. Is that a sufficiently small effect?
A lot of Herodotus' histories have interesting stories about people exhibiting and not exhibiting ancient Greek virtues.
Though, the other stuff in the post, and his other comments on the thread, really make it seem to me to be related to the house rather than to him, or his friends.
Given that Johnny Depp appears to be on the Singularity side (as the uploaded human), I suspect that they'll be portrayed sympathetically, even if the ending isn't exactly happy.
I think that the nutritional value of the food, or at least the perceived nutritional value of the food, also plays a role in how quickly you start liking it. I've started liking raw beef liver and fish oil after waaaay fewer tries than say, ceviche.
So given some data, to determine the relative probability of two competing hypotheses, we start from the ratio of their prior probabilities, and then multiply by the ratio of their likelihoods. If we restrict to hypotheses which make predictions "within our means"---if we treat the result of a computation as uncertain when we can't actually compute it---then this calculation is tractable for any particular pair of hypotheses.
...
When two people disagree about the relative complexity of two hypotheses, it must be because that hypothesis is simpler in one of their languages than in the other.
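(If I'm reading it right, the quoted calculation is just the posterior-odds form of Bayes' theorem; a tiny sketch with made-up numbers:)

```python
# Posterior odds = prior odds * likelihood ratio (numbers are made up for illustration).
prior_h1, prior_h2 = 0.01, 0.10            # prior probabilities of the two hypotheses
likelihood_h1, likelihood_h2 = 0.9, 0.05   # P(data | hypothesis), as far as we can compute it

prior_odds = prior_h1 / prior_h2
likelihood_ratio = likelihood_h1 / likelihood_h2
posterior_odds = prior_odds * likelihood_ratio
print(posterior_odds)  # 1.8 -- h1 is now favored despite its lower prior
```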
This is a fairly minor point, but do you mean to imply that the prior probability of a never-happened-before event normally swamps out updates about its probability that you find out later on? Or that people update on information by reformulating their concepts so that more probable events have lower complexity?
Either of those would be very interesting, though I also think the argument would stand if you didn't mean either of those as well.
From what I understand, Watson is more supposed to do machine learning and question answering in order to do something like make medical diagnoses based on the literature.
MetaMed tries to evaluate the evidence itself, in order to come up with models for treatment for a patient that are based on good data and an understanding of their personal health.
They both involve reviewing literature, but MetaMed is actually trying to ignore and discard parts of the literature that aren't statistically/logically valid.
Upvoted, but I'm a bit confused as to what we're trying to refer to with "spam".
If by spam we mean advertising, yes. Definitely.
If by spam we mean undesirable messaging that lowers the quality of the site, then I would think that this is very much not spam.
It's in some weird-to-link-to facebook format.
Basically, it's the same as the edge essay, but you should replace the last paragraph with...
Robert Altemeyer's research shows that for a population of authoritarian submissives, authoritarian dominators are a survival necessity. Since those who learn their school lessons are too submissive to guide their own lives, our society is forced to throw huge wads of money at the rare intelligent authoritarian dominants it can find, from derivative start-up founders to sociopathic Fortune 500 CEOs. However, with their attention placed on esteem, their concrete reasoning underdeveloped and their school curriculum poorly absorbed, such leaders aren’t well positioned to create value. They can create some, by imperfectly imitating established models, but can’t build the abstract models needed to innovate seriously. For such innovations, we depend on the few self-actualizers we still get; people who aren’t starving for esteem. People like Aaron Swartz.
Aaron Swartz is dead now. He died surrounded by friends; the wealthy, the powerful and the ‘smart’. He died desperate and effectively alone. A friend of mine, when she was seventeen, was involuntarily incarcerated in a mental hospital. She hadn’t created Reddit, but she had a blog with some readers- punks, fan girls and street kids. They helped her to escape, and to hide until the chase blew over. Aaron didn’t have friends like that. The wealthy, the powerful, and the ‘smart’ tend not to fight back; they learned their lessons well in school.
I more or less agree with your reading of this essay, but it misses an important point that the edited version on Edge leaves out -- in the original version, he compared the friends of Aaron Swartz with the friends of someone Michael knows.
Basically, when she was institutionalized against her will, her low-status, relatively poor friends helped break her out of the mental hospital and hide her until the police chase blew over. In contrast, when placed in a legal battle Aaron Swartz wasn't able to rely on his much smarter, wealthier, and in almost every way better off friends to help him. The well-learned elites couldn't really help protect him because they were too used to submitting.
With this in mind, the point of the essay is much more that we rely on non-social cognition for innovation, but that the culture of submission has destroyed our mechanisms for supporting self-actualizing innovators who come under fire. With this lack of support, our innovators are even worse off than they would be from just having worse social skills and being less well understood.
I think it's pretty possible to macro-optimize successfully and still lose. All you have to do is know what to do and not how to do it.