Posts
Comments
I agree, and I'll keep that in mind. The topic is extremely broad, though, so I don't know how much time I'll have to focus on it. I'm actually thinking of having several meetups on this, depending on people's interest.
I always forget that...thanks.
I don't have time to write a full report, but Less Wrong Montreal had a meetup on the 22nd of January that went well. Here's the handout that we used; the exercise didn't work out too well because we picked an issue that we all mostly agreed on and understood pretty well. A topic where we disagreed more would have been more interesting (afterwards I thought "free will" might have been a good one).
Thanks for pointing it out, fixed.
I run the Montreal Less Wrong meetup, which for the last few months has started structuring the content of our meetups with varying degrees of success.
This was the first meetup that was posted to meetup.com in an effort to find some new members. There were about 12 of us, most of whom were new and had never heard of Less Wrong before; although this was a bit more than I was expecting, the meetup was still a really good introduction to Less Wrong/rationality and was appreciated by all those present.
My strategy for the meetup was to show a concrete exercise that was useful and that gave a good idea of what Less Wrong/rationality was about. This is a handout I composed for the meetup to explain the exercise we were going to do. It's a five-second-level breakdown of a few mental skills for changing your mind when you're in an argument; any feedback on the steps I listed is appreciated, as no one reviewed them before I used them. People found the handout useful, and it gave a good idea of what we would be trying to accomplish.
The meetup began by going around and introducing ourselves, and how we came to find the meetup. Some general remarks about the demographics:
- The attendees were 100% male. There were a few women who were going to attend, but cancelled at the last minute.
- Only two out of the 12 didn't have a background in science. The science backgrounds included math, biology, engineering and others.
After a quick overview of what rationality is, people wanted to go through the handout. We read through each of the skills, several of which sparked interesting discussions. Although the conversation went off on tangents often, the tangents were very productive as they served to explain what rationality is. The tangents often took the form of people discussing situations where they had noticed people reacting in the ways that are described in the handout, and how someone should think in such cases.
The exercise that is described on the second page of the handout was not successful. I had been trying to find beliefs that are not too controversial, but might still cause people to disagree with them. Feedback from the group indicated that I could have used more controversial beliefs (religion, spirituality, politics, etc.) as the feelings evoked would have been more intense and easier to notice; however, that might also have offended more people, so I'm not sure whether that would have been better or not. If I were to run this meetup again, I would rethink this exercise.
The meetup concluded with me giving a brief history of Less Wrong, and mentioning HPMOR and the sequences. I provided everyone with some links to relevant Less Wrong material and HPMOR in the discussion section of the meetup group afterwards.
Let me know if you have any questions or comments, any feedback is appreciated!
I like this idea; seeing as I have a meetup report to post, I just started a monthly Meetup Report Thread. Hopefully, people will do what you describe.
That's true, those points ignore the pragmatics of a social situation in which you use the phrase "I don't know" or "There's no evidence for that". But if you put yourself in the shoes of the boss instead of the employee (in the example given in "I don't know"), where even if you have "no information" you still have to make a decision, then it's useful to remember that you probably DO know something that can at least give you an indication of what to do.
The points are also useful when the discussion is with a rationalist.
The post What Bayesianism Taught Me is similar to this one; your post has some elements that that one doesn't have, and that one has a few that you don't have. Combining the two, you end up with quite a nice list.
I think "seems like a cool idea" covers that; it doesn't say anything about expected results (people could specify).
I don't see how the barriers become irrelevant just because they aren't clearly defined. There might not be a specific point where a mind is sentient or not, but that doesn't mean all living things are equally sentient (Fallacy of Grey).
I think Armstrong 4, rather than make his consideration for all living things uniform, would make himself smarter and try to find an alternate method to determine how much each living creature should be valued in his utility function.
How about a sentient AI whose utility function is orthogonal to yours? You care nothing about anything it cares about and it cares about nothing you care about. Also, would you call such an AI sentient?
Ok, I see what your concern is, with the hype around Soylent everyone's opinion is skewed (even if they're not among the fanboys).
You decided above that it wasn't worth your time to try your own self-experiments with it. What if someone else were to take the time to do it? I like the concept but agree with the major troubles you listed above, and I have no experience with designing self-experiments. But maybe I'll take the time to try and do it properly, long-term, with regular blood tests, noting what I've been eating for a couple months before starting, taking data about my fitness levels, etc. Of course, I would need to analyze the risk to myself beforehand.
What would you like to see done differently? You mentioned the more thorough self-experimentation he could have done (really should have done), but there's still someone else who could step up to the plate and do some self-testing.
Thorough studies? Those might also be done some time in the future, whether or not they're funded by Rob (not sure about this point, there might not be an incentive to do so once it's being sold).
Sure, Rob jumped the gun and hyped it up. But most of the internet is already a giant circle-jerk. Doesn't stop people from generating real information, right?
I don't have enough experience to even give an order of magnitude, but maybe I can give an order of magnitude of the order of magnitude:
Right now, the probability of Christianity specifically might be somewhere around 0.0000001% (even that is probably too high). One hour post judgement day, it might rise to somewhere around 0.001% (several orders of magnitude higher).
Now let's say the world continues to burn, I see angels in the sky, get to talk to some of them, see dead relatives (who have information that allows me to verify that they're not my own hallucinations), and so on...the probability could bring the hypothesis to one of the top spots in the ranking of plausible explanations.
...assuming that I'm still free to experiment with reality and not chained and burning. Also assuming that I actually take the time to do this as opposed to run and hide.
The continuation of the burning makes the hallucination hypothesis less probable for as long as it continues, and even more so if it continues in defiance of the laws of physics, as you point out.
What do you expect will happen? Do you think lots of people are going to get very sick by going on a Soylent-only diet immediately, not monitoring their health closely, and ending up with serious nutritional deficiencies? That's one of the more negative scenarios, but I honestly don't know how likely it is. I think people are likely to do at least one of three things:
- Monitor their health more closely (especially on a Soylent-only diet),
- Only replace a few meals with Soylent (not more than, say, 75%),
- Return to normal food or see a doctor if a serious deficiency occurs.
Then again, I may have too much confidence in people's common sense. Rob is definitely marketing it as a finished product and a miracle solution.
The concept is good, but the methodology could have been significantly better. It has lots of potential, and the real danger is limited to those who will be consuming ONLY Soylent for extended periods. Using it to replace a meal or two a day, while still having a complete meal every day, shouldn't be dangerous (I think).
What confuses me about the negativity is, what's so bad about the current situation? The earliest of adopters will serve as a giant trial, and if there are problems they'll come up there.
Also: people who intend to switch to JUST Soylent should be monitored by a doctor or a nutritionist, at least at first. And they should post their results either here or on the Soylent board; I'm very interested to hear some anecdata.
Beware of identifying in general. "We" are all quite different. Few if any of "us" can be considered reasonably rational by the standards of this site.
That's a good point, which I'll watch out for in the future.
With a sizable minority of theists here, why is this even an issue, except maybe for some heavily religious newcomers?
One thing I didn't specify is that this applies to discussions with non-LessWrongers about religion (or about LessWrong). On the site, there's no point in bothering with this identification process, because we're more likely to notice that we're generalizing and ask for an elaboration.
I'm thinking of making a Discussion post about this, but I'm not sure if it has already been mentioned.
We're not atheists - we're rationalists.
I think it's worth distinguishing ourselves from the "atheist" label. On the internet, and in society (what I've seen of it, which is limited), the label includes a certain kind of "militant atheist" who loves to pick fights with the religious and crusade against religion whenever possible. The arguments are, obviously, the same ones being used over and over again, and even people who would identify as atheists don't want to associate themselves with this vocal minority that systematically makes everyone uncomfortable.
I think most LessWrongers aren't like that, and don't want to attach a label to themselves that will sneak in those connotations. Personally, I identify as a rationalist, not an atheist. The two things that distinguish me from them:
- Social consequentialism: I know conversations about religion are often not productive, so I'm quick to tap out of such discussions.
- Openness to evidence: unlike a lot of atheists, I could, in principle, be persuaded to believe otherwise (given sufficient evidence). If judgement day comes and I see the world burning around me, I will probably first think that I've gone insane; but the probability I assign to theism will increase, as per Bayes' Theorem.
Note that this feeling depends on who you know, so I may be in the minority in the connotations I see attached to the "atheist" label.
What do people think? I wrote this pretty quickly, and could take the time to write a more coherent text to post in Discussion.
A real-world adblock would be great; you could also use this type of augmented reality to improve your driving, walk through your city and see it in a completely different era, use it for something like the Oculus Rift...the possibilities are limitless.
Companies will act in their own self-interest, by giving people what it is they want, as opposed to what they need. Some of it will be amazingly beneficial, and some of it will be...not in a person's best interest. And it will depend on how people use it.
This is a community of intellectuals who love learning, and who aren't afraid of controversy. So for us, it wouldn't be a disaster. But I think we're a minority, and a lot of people will only see what they specifically want to see and won't learn very much on a regular basis.
A post from the sequences that jumps to mind is Interpersonal Entanglement:
When I consider how easily human existence could collapse into sterile simplicity, if just a single major value were eliminated, I get very protective of the complexity of human existence.
If people gain increased control of their reality, they might start simplifying it past the point where any sufficiently complex situations remain to let their minds grow and to let them learn new things. People will interact more and more with things that are specifically tailored to their own brains; but if we're only exposed to things we want to be exposed to, the growth potential of our minds becomes very limited. It's basically an extreme version of Google filtering your search results to show only what it thinks you'll like, as opposed to what you should see.
Seems like a step in the wrong direction.
I just realized I generalized too much. In Canada, you require a four-year Bachelor of Education specifically (the same as for an engineer, and more than most trades). The average salary seems to be about the same as in the US.
Read the Sequences.
How did you find the site?
Why aren't teachers as respected as other professionals? It's too bad that the field is lower paid and less respected than other professional fields, because the quality of the teachers (probably) suffers as a consequence. There's a vicious cycle: teachers aren't highly respected --> parents and others don't respect their experience --> no one wants to go into teaching and teachers aren't motivated to excel --> teachers aren't highly respected.
It's almost surprising that I had so many excellent teachers through the years. The personal connection between teachers and their students must be particularly strong, because the environment doesn't seem to be very motivating for teachers to want to be excellent at what they do.
Based on anecdotal evidence. I just think it's too bad.
Upvote for meetups!
1) Job searching. It forces you to really size yourself up and compare yourself to everyone around you.
2) I really like your example. My own: feeling pressured to hang out with certain friends (because they'll feel neglected). Rather than realize that friends who guilt you into seeing them need to be seen LESS, my brain just makes me feel bad.
I use a very similar formula, and it works pretty well. One thing I also do: spend 5 minutes clearing my mind before jumping into a 25-minute Pomodoro cycle. It helps shut down the feelings triggered by an Ugh field, and I find myself better able to concentrate.
Additionally, lazy-me LOVES having an excuse to do nothing for 5 more minutes.
I fully agree with the last paragraph. When it comes to valuing my time, the less free time I have, the more valuable it is, and the more reluctant I am to spend it (the value of an hour of my time isn't constant).
If I'm somewhat busy at a given time, it might not take much to get me to go out of my way to help a friend; but if I'm really busy and they ask, there had better be some kind of incentive.
...upon reflection, that would be why people get paid time and a half for overtime.
What career paths are open to programmers? Do a lot of programmers go into management (head of a programming team), or specialize in something harder to learn? You seem to be saying that a programmer with 20 years of experience wouldn't have much of an edge over someone with only 2 or 3 years' experience.
As an engineer, two of the popular paths are going into project management or similar, or gaining a high amount of technical proficiency in certain domains. Either way, these types of positions really do require the extra experience.
Depending on your current career path, up to a certain age it's entirely possible to switch careers, ideally to something that makes concrete use of skills acquired in your previous career (so it's not a complete restart). Built-up capital or some other means of remaining self-sustainable for a period of time could allow you to return to school in something completely different.
I'm not speaking from experience though; I can guess that this type of situation would be difficult. But when saving, balancing the return on investment with the accessibility of the funds seems wise.
The ev-psych reason for the "strong leader" pattern is fitness variance in the competition between men. The leader (dominant male) would be able to impregnate a substantial proportion of the women in the tribe, while the least dominant males wouldn't reproduce at all. So males are much more competitive because the prize for winning is very high (potentially hundreds of children), while the cost of losing is very low (for women, the fitness variance is smaller because of the limit on the number of pregnancies in their lifetimes).
So it's a prisoner's dilemma where the defector has a huge advantage. If everyone is democratic about sharing their women and one person decides he wants to take them all, he wins and his genes spread.
There are also ev-psych reasons why dictators tend to become corrupt: when you have power, you want to use it to give the advantage to YOUR offspring (or your group, maybe). So even if you have noble intentions at first, there will be a tendency to hoard resources for yourself and others you consider part of your group.
The autopilot problem seems to arise in the transition phase between the two pilots (the human and the machine). If just the human does the task, he remains sufficiently skilled to handle the emergency situations. Once the automation is powerful enough to handle all but the situations that even a fully-trained human wouldn't know how to handle, the deskilling of the human just allows him to focus on more important tasks.
To take the example of self-driving cars: the first iterations might not know how to deal with, say, a differently-configured zone due to construction or some other hazard (correct me if I'm wrong, I don't know much about self-driving car AI). So it's important that the person in the driver's seat can take over; if the person is blind, or drunk, or has never ever operated a car before, we have a problem. But I can imagine that at some point self-driving cars will handle almost any situation better than a person.
I was, but hadn't delved too deeply into it until just now. There actually is a pretty good structure there that I'll look at more closely.
I've been lurking for almost a year; I'm a 25-year-old mechanical engineer living in Montreal.
Like several people I've seen on the welcome thread, I already had figured out the general outline of reductionism before I found LW. A friend had been telling me about it for a while, but I only really started paying attention when I found it independently while reading up on transhumanism (I was also a transhumanist before finding it here). Reading the sequences did a few things for me:
- It filled in the gaps in my world-model (and fleshed out my transhumanist ideas much more thoroughly, among many other things)
- It showed me that my way of seeing the world is actually the "correct" way (it yields the best results for achieving your goals).
Since then, I've helped a friend of mine organize the Montreal LessWrong meetups (which are on temporary hiatus due to several members being gone for the summer, but will start again in the fall) and have begun actively trying to improve myself in a variety of ways along with the group.
I can't think of anything else in particular to say about myself...I like what I've seen of the community here and think I can learn a lot from everyone here and maybe contribute something worthwhile every now and again.
There's a lot of great information on Less Wrong, but some of it is hard to find. Are there any efforts in progress to organize the information here? If so, can anyone let me know where?
Absence of Evidence is directly tied to having a probabilistic model of reality. There might be an inferential gap when people refer you to it, because on its own the argument doesn't seem strong. But it's a direct consequence of Bayesian reasoning, which IS a strong argument.
(Just to clarify: I didn't mean to accuse you of ignorance, and I sympathize with having everyone spam you with links to the same material, which must be aggravating.)
Remember, your post has (at the time of this comment at least) a score of 4. Subjects that are "taboo" on LessWrong are taboo because people tend to discuss them badly. You asked some legitimate questions, and some people provided you with good responses.
If you're willing to consider changing your mind, the next step would be to read the sequences. A lot of what you mention is answered there, such as:
- Absence of evidence is evidence of absence
- The Fallacy of Grey (specifically, when you mention that because we don't know the whole truth, we can't objectively evaluate evidence)
- 0 and 1 are not probabilities (this one actually supports what you were saying: you were entirely right that you can't assign a probability of 0 to the existence of God, but you still don't know whether that probability is 0.9, 0.1, 0.01 or 0.0000001; see http://lesswrong.com/lw/ml/but_theres_still_a_chance_right/)
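As a minimal sketch of why absence of evidence must count as evidence of absence under Bayes' theorem (all probabilities here are made-up illustrations, and the function name is my own):

```python
# Sketch: if evidence E is more likely when hypothesis H is true,
# then NOT observing E must lower the probability of H.

def posterior_given_no_evidence(p_h, p_e_given_h, p_e_given_not_h):
    """P(H | not E) via Bayes' theorem."""
    # Total probability of not observing E.
    p_not_e = p_h * (1 - p_e_given_h) + (1 - p_h) * (1 - p_e_given_not_h)
    return p_h * (1 - p_e_given_h) / p_not_e

# Illustrative numbers: E is likelier under H (0.8 > 0.3),
# so failing to observe E drops P(H) below the prior.
p = posterior_given_no_evidence(p_h=0.5, p_e_given_h=0.8, p_e_given_not_h=0.3)
print(p)  # ~0.22, down from the prior of 0.5
```

The strength of the update depends on how strongly H predicts E; weakly predicted evidence that fails to appear only nudges the probability down a little, which is why the argument can feel weak on its own while still being a direct consequence of Bayesian reasoning.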
Was there something in particular you were hoping to learn from them? I don't think the point of the exercise was to get an accurate profile of the female demographic on LessWrong, but to give people who wanted to speak up a chance/incentive to do so. The submitters would probably not have posted these on their own, but they did submit them when they were prompted.
The anecdotes may be more useful when you consider that someone felt like she should say it. If nothing else, the contradiction in the anecdotes hints that there is no universal element among women that drives them away from LW.
One useful thing I've noticed helps a lot when discussing LW-style topics with non-rationalists (or rationalists who have not declared Crocker's Rules) is to reiterate parts of their message that you agree with. It shows that you're actually listening to what they're saying, and not being confrontational for its own sake. As in:
Non-rationalist: "I believe X, and therefore Y and Z"
Instead of "Z doesn't follow from X because ...", respond with "I agree with X, and also Y. But Z doesn't follow because ...".
Even though the first response already implies that you agree with X (and maybe with Y, since you didn't say anything about it), people who are not explicitly correcting for tone might see the first response as confrontational and the second as more friendly. In general, people seem to be more sensitive to how something is said than to what is said.