Portland, Oregon
December 22nd, 6pm, ceremony starts at 7
BridgeSpace
133 SE Madison St, Portland, OR 97214
Meetup link: https://www.meetup.com/portland-effective-altruism-and-rationality/events/304910660/
You need to differentiate the question of how law is managed from the question of who has commit rights. Managing law as code, with patches and such, is an implementation detail. Current laws are actually written much like git patches - changes to the existing code that are then applied. That all of this is manual is not at all interesting, and automating it with git would not in any way change the fundamental power structures at play.
On the other hand, proposing that anyone can change the law would clearly be insane, just as large open source projects must have maintainers or go entirely off the rails. Currently you can call up your representative and propose a change to the law; they will just very rarely bother to listen to you. Just like an open source project where the maintainer cares about their particular concerns, not yours. So the question is who has commit rights and how to manage them - in other words, it's fundamentally a question of political power and deciding who has it.
I'm in support of anti-aging research, and think we should fund it more highly, specifically because the long-term benefits are so high once we get it right. Does anyone have any comments on whether SENS is the best place to put money if you're interested in donating to anti-aging?
As a side note, my experience working with complex codebases has led me to doubt your optimism about how quickly we can find reliable ways to get more than a decade of increased healthspan. The human body is vastly, vastly, vastly more complex than nearly any codebase humans have developed, and far less well factored. And making notable improvements to complex codebases that are well-factored still takes years of dedicated effort, with much better tooling than we have for modifying the body.
I think it'd be interesting to have an online unconference, as well. Maybe put up a post here on the day, and people can write in comments with a time, topic, and google hangout link.
As a rationalist who had kids while embedded in a deep community, I will say that only some of the community (most of whom said they wanted to stick around) actually stuck around after the kids showed up. I think there's a whole series to be written about that, but I'll sketch toward it now:
- Parents' schedules are different. If you really want to see them, you have to show up, not just invite them to your nonparent parties.
- After a dozen invites that we don't make it to, nonparents stop inviting us parents, and then we're cut off. Even if we don't show up, we appreciate the invitation - I have occasionally made it to a nonparent event, but only when people persist in inviting me.
- Immediately after the baby arrives, the best things to do to help parents are chores. Prepping and making food, laundry, cleaning, etc.
- Now that the kids are old enough for a consistent bedtime, I'm probably best available to hang out at 5:30pm or 9pm, but not 8pm. The 9pm option relies on you visiting me, or on my partner staying home in case the kids wake up. (I love 9pm visitors.) If you're a nonparent who wants to help, you can always offer to hang out after the kids are asleep so the parents can go out (though parents often go to sleep by 10 themselves, so don't be surprised if that doesn't work for many of them).
- As a nonparent, expect to build familiarity with the kids over a handful of events before you can babysit. Kids warm up to adults just like people warm up to other people - often slowly.
There are a handful of developers who specialize in building cohousings so that folks interested in living in one can focus on building community and then all move in together. In Portland one of the longest-running ones is Orange Splot. http://www.orangesplot.net/ I'm sure there are Bay Area ones, and it's possible the folks at Orange Splot know them. I'd expect they'd also show up at the Cohousing Conference.
Doing both community development and building development is, of course, three times as hard as just doing the community development part and moving into a building that someone else prepares for you.
The cohousing conference ( http://www.cohousing.org/2017 ) is a great place to get questions answered and learn from the folks who've been doing this for a while. The Bay Area definitely has a handful of solid cohousings, and often they give tours and talk to folks who are interested in setting them up.
(I'm happy to talk about this further, but may well lose track of this thread. Feel free to email me or catch me on the Slack.)
Cohousing, in the US, is the term of art. I spent a while about a decade ago attempting to build a cohousing community, and it's tremendously hard. In the last few months I've moved, with my kids, into a house on a block with friends with kids, and I can now say that it's tremendously worthwhile.
Cohousings in the US are typically built in one of three ways:
- Condo buildings, each unit sold as a condominium
- Condo/apartment buildings, each apartment sold as a coop share
- Separate houses.
The third one doesn't really work in major cities unless you get tremendously lucky.
The major problem with the first approach is that, because of the Fair Housing Act (passed in the 1960s, when realtors literally would not show black people houses in white neighborhoods), you cannot pick your buyers. Any attempt to restrict sales to rationalists is illegal. Cohousings get around this by keeping community participation voluntary, and also by accepting that they'll get free riders and have to live with it. Some cohousings I know of have had major problems with investors deciding cohousing is a good investment, buying condos, and renting them to whoever while they wait for the community to make their investment more valuable.
The major problem with the coop share approach is that, outside of New York City, it's tremendously hard to get a loan to buy a coop share. Very few banks do these, and usually at terrible interest rates.
Some places have gotten around this by having a rich benefactor who buys a big building and rents it, but individuals lose out on the financial benefits of homeownership. In addition, it is probably also illegal under the Fair Housing Act to choose your renters if there are separate units.
The other difficulties with cohousing are largely around community building, which you've probably seen plenty of with rationalist houses, so I won't belabor the point on that.
The author does not seem to understand survivorship bias. He never approaches the question of whether the things he proposes as the reasons for Musk's success would actually work for others, or whether they happened to work for Musk in a context-dependent way. In other words, if you give this as advice to someone random, will they end up successful or an outcast? I'd guess the latter in most cases. This is, in general, the problem with evaluating the reasons behind success.
Also, unnecessary evolutionary psychology, done badly, even to the point of suggesting group selection. Ick.
The idea that using technical language in regular life makes you more scientific (when that language isn't actually any more precise in meaning in the examples cited) is also pretty suspect.
75% probability that the following things will be gone by:
- LessWrong: 2020
- Email: 2135
- The web: 2095
- Y Combinator: 2045
- Google: 2069
- Microsoft: 2135
- USA: 2732
- Britain: 4862
These don't seem unreasonable.
I'm not sure that this method works for something that doesn't yet exist coming into existence. Would we say that we expect a 75% chance that someone will solve the problems of the EmDrive by 2057? That we'll have seasteading by 2117?
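For reference, the figures above look consistent with the usual delta-t estimate: if something has existed for T years, and we assume we're observing it at a uniformly random point in its lifespan, then with 75% probability its remaining lifespan is under 3T. A minimal sketch, using my own guesses for the vantage year and approximate founding dates (neither is given in the original), reproduces several of the numbers:

```python
# Delta-t sketch: if something is T years old, then with 75% probability it is
# gone within another 3*T years, so the "gone by" year is now + 3 * (now - founded).
def gone_by_75(founded, now=2015):
    return now + 3 * (now - founded)

# Founding years below are my approximations, chosen only for illustration.
for name, founded in [("Y Combinator", 2005), ("Google", 1997),
                      ("Microsoft", 1975), ("USA", 1776), ("Britain", 1066)]:
    print(name, gone_by_75(founded))
# -> 2045, 2069, 2135, 2732, 4862
```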
I'm starting by reading through the cites on this page:
Caveats: they're new; it's hard to do what they're doing; they have to look serious; and this is more valuable the more it's taken seriously.
They have really wonderful site design/marketing...except that it doesn't give me the impression that they will ever be making the world better for anyone other than their clients. Here's what I'd see as ideal:
- They've either paid the $5k themselves (apparently a drop in the bucket of their funding) and put up one report as both a sample and proof of their intent to publish reports for everyone, or (better) gotten a client who's already had a report done to agree to let them release it.
- This report, above, is linked to from their news section and there's a prominent search field on the news section (ok), or there's a separate reports section (better)
- The news section has RSS (or the reports section has RSS, or both, best)
From a more profit-minded viewpoint, they could offer either a private report for $5k or a public report for $3k, with a promise to sell the public report for $50 a copy until the total reaches $5k (or $6k, or some internal number that isn't unreasonable) and then release it for free.
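A quick worked example of that arithmetic (the prices are the ones above; the split is just my illustration):

```python
# Hypothetical arithmetic for the public-report option sketched above.
private_price = 5000   # what a fully private report costs
public_upfront = 3000  # what the client pays for the public version
copy_price = 50        # per-copy price until the gap is closed

copies_to_release = (private_price - public_upfront) // copy_price
print(copies_to_release)  # 40 copies at $50 and the report goes free
```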
In my experience, most people who are seriously sick get into a pretty idealistic mode, and would actually be further convinced by an option that puts their $5k toward helping both themselves and others. And while, sure, they could release the report themselves, MetaMed has a central, more trustworthy platform to do it from. If they want me to believe that they're interested in doing that kind of thing, it'd be nice if they had something up there to show me that they hope to.
On preview, I realize that the easy objection is that these are personalized reports, and data confidentiality is important. They obviously will only be able to publish pieces of reports that are not personal, and this is obviously a more costly thing than just tossing a pdf up on a website. Hm.
All of that said, they look like a really exciting company, and I really hope they do well (and then take my advice =).
It's less the colors available to the kid and more the way the outside world responds to the kid in those colors, I think.
I've seen there be much more color variation among boys' clothes, yes, but more importantly, a toddler wearing pink is gendered by others as female and talked to as if female, while a toddler in just about any other color is generally talked to as if male. Occasionally yellow is gendered female too.
Within the domain of building-a-system, paper prototyping/wireframing teaches people to be specific with their ideas. It's only helpful when your idea is just "I want there to be this kind of thing"; putting it on paper then creates the specifics in your head.
I think your terrifying vision sounds like a lot of fun.
I would imagine you can play it with any cooperative game. Another great one that wouldn't quite fall prey to the problem you describe is Scotland Yard, which has a group against a single player. The group could play with biases, while the single player plays without and tries to guess the biases. People have also suggested competitive games, such as Munchkin, but I'm skeptical so far. If anyone does play it with competitive games, I'd love to hear about it of course.
We hope to get there. It's going to take a while, I suspect.
I came here to say this, and also to say that nursing closes some doors, but it opens up others. Doctors I know often regret not becoming Nurse Practitioners, who can do almost everything doctors can do, but also get to switch fields when they want to, and get paid pretty well too.
Still, that's about the details, and your post is about the generalizations from them. I think they're pretty interesting generalizations, but mostly I just want to point people reading this to Study Hacks for a lot more conversation about how to achieve excellence in whatever field you end up in.
I think that might be the source of the "somebody's wrong on the internet" thing.
me! me! I'll be there! I've wanted a meetup here for a long time, but was pretty sure nobody was here.
SPRs can be gamed much more directly than human experts can. For example, imagine an SPR in place of all hiring managers. As things stand, with human hiring managers, we can guess at what goes into their decisionmaking and attempt to optimize for it, but because each manager is somewhat different, we can't know all that well. A single SPR that took over for all the managers, or even a couple of very popular ones, would strongly encourage applicants to optimize for the most heavily weighted variable in the equation. Over time this would likely decrease the value of the SPR back down to that of a human expert.
This has a name in the literature, but I can't remember it at the moment. You see this problem in, for example, the current obsessive focus on GDP as the only measure of national well-being. Now that we've had that measure for some time, we're able to have countries whose GDP is improving but who suck on lots of other measures, and thus politicians who are proud of what they've done but who are hated by the people.
Yes, in some cases, this would cause us to improve the SPR to the point where it accurately reflected the qualities that go into success. But that's not a proven thing.
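A minimal simulation sketch of that gaming dynamic, entirely my own construction with made-up weights: the SPR scores a measured input that applicants can pad once the formula is known, while actual performance depends on the underlying trait, and the SPR's validity drops once the padding starts.

```python
import random

random.seed(0)

def corr(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

def spr_validity(gaming, n=5000):
    scores, performance = [], []
    for _ in range(n):
        skill = random.gauss(0, 1)   # what actually drives job performance
        other = random.gauss(0, 1)
        # Once the formula is public, applicants pad the heavily weighted
        # input (keywords, credentials, test prep) without changing real skill.
        padding = random.gauss(1.0, 1.0) if gaming else 0.0
        scores.append(0.8 * (skill + padding) + 0.2 * other)   # the SPR's score
        performance.append(0.7 * skill + 0.3 * other)          # actual outcome
    return corr(scores, performance)

print("validity before gaming:", round(spr_validity(False), 2))  # ~0.99
print("validity after gaming: ", round(spr_validity(True), 2))   # ~0.71
```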
That said, I'd really like to see a wiki or other attempting-to-be-complete resource for finding an SPR for any particular application. Anyone got one?
All of the members last night were professional programmers, so I'm not sure that will help us, particularly, but I do think algorithmic thinking is useful to people who don't currently have it.
That's interesting. I'd be worried about establishing safety and about unstable mental states in unknown new members. But I'm interested in trying to make an exercise out of this.
The thing I've noticed about high status people is that they're only interested in associating with other high status people. But low status people are interested in associating with high status people. So high status people seem to spend a lot of time assuming that the person who just came up to talk to them is only interested in shining in their status. So a hypothesis:
- They spend more time defending status than low status people need to spend
- The energy spent identifying whether they need to defend status prevents engaged interaction with many of the people who come up to them.
To test this hypothesis: I would predict that high status people are more intelligent in contexts where they only interact with other high status people, or in contexts where no one knows they are high status, than in contexts where they interact with low status people who know who they are.
I've seen this with people who have high community status -- they're more intelligent in communities that they're not usually members of.
Yeah, I think you're attempting to take over a separate concept (fluency?) with your idea of taskification. You generate tasks when you want to complete something piecewise, and it may be valuable to break complex things into tasks for explanatory purposes, but fluency isn't based primarily on understanding the tasks as tasks; it's based on experience and, well, fluency.
I was just lamenting this morning how my todo list, a set of tasks for the next few days, was depressing me. When I wrote it, it was a great joy to get all these things out of my head, but now that all I had to do was follow them, it felt mechanical and boring. I could rewrite the list and gain some excitement about a few of the tasks that way, but instead I've been trying to figure out the why of this feeling, and your post gets me right back into it.
I think there's an ideal working state -- perhaps the state of Flow is describing it, or perhaps that's simply some people's ideal working state and there's a more general form of it (I'll use flow for this comment). In this ideal working state, we're constantly encountering problems that are within a known scope. So they're problems -- we don't immediately know how to handle them -- but they're scoped problems, so we know how to figure them out. This is fun, because there are problems, but they're solvable problems.
Dating advice you describe as useful does the opposite of flow -- it creates tasks. Tasks, because they don't require overcoming scoped problems, are boring. Taskifying things makes them routine, easy, and boring. Taskifying itself can be in flow. Re-taskifying recreates the sense of flow and allows a task to fall within that flow.
What I would want isn't taskified advice, it's the experience that would allow dating to feel flowful.
(I've italicised to try to mark flow as a technical term. Please let me know if I should change the format.)
When you talk about pain being good, you're talking about the information it sends being useful to survival, not about the method of signalling (pain).
Just as you looked at CIPA patients, who don't have pain, to ask what's good about it, you can look at people who suffer from chronic pain to see what's bad about it.
People with chronic pain have the signalling method all the time without the useful information, and their lives suck. Chronic pain sufferers are exhausted and depressed because they're fundamentally unable to do anything without it hurting.
Worse, because people without chronic pain don't highly dis-value chronic pain, it's not respected as being as bad as it is -- most people, when asked, would prefer chronic joint pain to a broken arm, yet most people with one of these conditions have the opposite preference, for good reason.
Shouldn't thought experiments for consequentialism then emphasize the difficult task of correctly determining the consequences from minimal data? It seems like your thought experiments would want to be stripped-down versions of real events, where you try to guess, from a random set of features (to mimic the randomness of which aspects of the situation you would notice at the time), what the consequences of a particular decision were. So you'd hold the decision fixed and guess the consequences from the initial feature set.
I think you should expand this into a post.
Ignoring, for the moment, the deeper metaphorical question of how many of us are any given brain failure, does anyone know whether anosognosics actually think that they're using their paralyzed arm? Because I have a very strong sense of using my arms, and I suspect from the earlier description that anosognosics deny that their arm is paralyzed, but wouldn't claim that they are actually typing with two hands, for example. Anybody know more about that?
False dichotomy. Autonomy isn't absolute, nor is "causing" someone to make choices.
Your last phrase, "there is no need for solidarity of the fans in the face of criticism or being made fun of" really gets to what I think of as the core of fannishness.
It's not about bad vs. good, it's about ingroup vs. outgroup. The things that attract fanatical fans have other people/society/social norms telling the fans one or more of a number of things that create an ingroup/outgroup dynamic. Bad in an artistic sense is one, but so are uninteresting, geeky, against the social norms, etc.
Under this theory, I would expect more fannishness now for Star Wars than for Indiana Jones: space is geeky. But I wouldn't necessarily expect it back at the beginning, because in the 70s, space was cooler. And fannishness should have increased with time, as folks recognized that Star Wars has some significant artistic flaws.
This theory holds up even in the face of Firefly, which is hugely fannish but doesn't, as far as I know, face major artistic criticism (it is geeky, though, and also has FOX and the world at large saying that it's not interesting enough to be worth keeping).
But this theory has a significant flaw: celebrity worship seems to be another side of the fan behavior, but fails to be explained by this. Sure, celebrities are high status, highly desirable people who, as primates, we would expect to worship -- it's not the worship itself that I think needs to be explained by group dynamics. But celebrity worship behavior and fan worship behavior seem to be very similar (and very different from other kinds of respect and worship), and I would hope there'd be an underlying unification of thinking to draw from that.
I agree with that.
I guess what I'm not sure about is, it seems (very nearly) everything we do is social, so (very nearly) everything would have signaling. Asking what signaling activities we do seems to be asking the wrong question.
Thinking of it from an evpsych point of view, I would expect that there is a mental organ of signaling (or the result of several organs) which attempts to signal at all possible opportunities. So whenever there are humans around, we seek the shortest path to the highest signal value.
Robin assumes that anything done in public (visible to others) is for signaling, so for his assumptions, I think you're right that this is the best answer.
I'm really questioning that assumption though. I think anything we do that species with less complex social environments also do would qualify as likely not for signaling: eating, sex, anti-predatory activities, etc.
And I think there's value in distinguishing between things we do to strut (showing off the newest cell phone) and things we do because of required social signaling (mowing the lawn). Otherwise it seems too easy to say "Everything is signalling" and not really have learned much.
I keep trying to take Eliezer's advice and think of things I learn here as not just applying here. And the problem with the rest of the internet is that so many people are wrong on the internet that it's hard to take the extra time to be this thorough.
But then I remember that one of the reasons I like Eliezer's posts so much more than Robin's is a willingness to spell things out carefully. So this is probably a good idea.
That depends on whether you're making the point for the sake of the person who's wrong, or other readers.
"Better" in what way?
Do you mean better in that you think it's a more accurate view of the inside of your head?
Or better in that it's a more helpful metaphorical view of the situation that can be used to overcome the difficulties described?
I think the view of it as a conflict between different algorithms is useful, and it's the one that I start with, but I wonder whether different views of this problem might be helpful in developing more methods for overcoming it.
I doubt it's the circadian rhythm that's messing you up as much as it is the indoor light. Indoor lights are strong enough to affect your sleep cycle, and it's a common suggestion of sleep doctors to spend half an hour in low light before going to sleep.
A similar trick once worked for hiccups. A friend of mine pointed at me and said "there's a trick to not hiccuping. You want to know what it is?"
I, of course, asked to know.
"Don't hiccup."
And it worked for a couple of years.
Very few tricks have worked for me for the long term. Exercise helps, as does eating well. Most tricks I've tried, including scheduling tasks, taking days off, changes of location and taskcard systems, have only given me the benefit of any change -- a few days of productivity, followed by return of akrasia.
Project Euler is a start on your last request.
One difficulty with the least convenient possible world is when that least convenience is a significant change in the makeup of the human brain. For example, I don't trust myself to make a decision about killing a traveler with sufficient moral abstraction from the day-to-day concerns of being a human. I don't trust what I would become if I did kill a human. Or, if that's insufficient, fill in a lack of trust in my decisionmaking in general for the moment. (Another example would be the ability to trust Omega's responses.)
Because once that's a significant issue for the subject, the least convenient possible world you're asking me to imagine doesn't include me -- it includes some variant of me whose reactions I can predict, but not really access. Porting them back to me is also nontrivial.
It is an interesting thought experiment, though.
And in Robert Jordan's Wheel of Time, no one trusts the Aes Sedai, because after they vow to always tell the truth, they learn how to twist their words to get what they want anyway.
Someone who would tell the truth in a way that they knew would not convey the truth would not hold my trust.
Children need pretend. Don't squash their play. That's not to say that you should tell them things that are false. They'll generate plenty of fantasy on their own.
This was exactly my thought, and I now wonder whether it's possible to test via experiment. How do you give the information to the subjects without having them think that the researchers know it?
A confederate who's a subject and just happens to gossip about the thing is one way -- if the researchers proceed to deny it, you might be able to split them into groups based on a low status confederate versus a high-status confederate, and a vehement denial vs a "that study hasn't been verified" vs a "that was an urban legend."
Or providing a status signal that it's better to have a "bad" heart -- having a high status researcher who says "sure, we may live less long, but there are all sorts of other benefits they're not telling us about"
It's really hard to separate the information from the humans passing on the information.
Reversed stupidity, followed by dissolving the question and the mind-projection fallacy.