This basically means they are perfectly achieving their goal, right? Wirecutter's goal isn't to find the best product, it's to find the best product at a reasonable price. If you're a power user, you'll be willing to buy better and more expensive stuff.
Feature request: Q&A posts show a sidebar with all top-level answers and the associated usernames (example). Would be nice if the Anti-Kibitzer could hide these usernames.
The script works well on individual posts, but I find that on the lesswrong.com homepage, it displays names and vote counts for about 3 seconds before it finishes executing. Perhaps there's some way to make it run faster, or failing that, to block the page from rendering until the script finishes running?
Somewhat debatable whether this is a desirable feature, but right now the ordering of comments leaks information about their vote counts. Perhaps it would be good to randomize comment order.
A different paper but in the same vein: Markets are efficient if and only if P = NP
Now that April 17 has passed, how much did you end up making on this bet?
I know more about StarCraft than I do about AI, so I could be off base, but here's my best attempt at an explanation:
As a human, you can understand that a factory gets in the way of a unit, and if you lift it, it will no longer be in the way. The AI doesn't understand this. The AI learns by playing through scenarios millions of times and learning that on average, in scenarios like this one, it gets an advantage when it performs this action. The AI has a much easier time learning something like "I should make a marine" (which it perceives as a single action) than "I should place my buildings such that all my units can get out of my base", which requires making a series of correct choices about where to place buildings when the conceivable space of building placement has thousands of options.
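To put rough numbers on that intuition (these figures are made up purely for illustration, not taken from anything DeepMind has published):

```python
# Made-up numbers illustrating why building placement is a much harder
# learning target than a single discrete action like "train a marine".
train_marine_choices = 1     # one atomic action
placement_spots = 3000       # assumed count of legal tiles for one building
buildings_in_wall = 5        # assumed number of buildings that all have to be placed right

wall_configurations = placement_spots ** buildings_in_wall
print(wall_configurations)   # 243 quadrillion possible layouts, almost all of them bad
```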
You could see this more broadly in the Terran AI, which knows the general concept of putting buildings in front of its base (which it probably learned via imitation learning from watching human games) but doesn't actually understand why it should be doing that, so it does a bad job. For example, in this game, you can see that the AI has learned:
1. I should build supply depots in front of my base.
2. If I get attacked, I should raise the supply depots.
But it doesn't actually understand the reasoning behind these two things, which is that raising the supply depots is supposed to keep enemy units out of your base. This results in a comical situation where the AI doesn't actually have a proper wall, so the enemy units run in anyway, and only then does it raise the supply depots. In short, it learns which actions are correlated with winning games, but it doesn't know why, so it doesn't always use those actions in the right ways.
Why is this AI still able to beat strong players? I think the main reason is that it's so good at making the right units at the right times without missing a beat. Unlike humans, it never forgets to build units or gets distracted. Because it's so good at execution, it can afford to do dumb stuff like accidentally trapping its own units. I suspect that if you gave a pro player the chance to play against AlphaStar 100 times in a row, they would eventually figure out a way to trick the AI into making game-losing mistakes over and over. (Pro player TLO said that he practiced against AlphaStar many times while it was in development, but he didn't say much about how the games went.)
At some point, all traders with this belief will have already bought the stock and the price will stop going up at that point, thus making the price movement anti-inductive.
I'm tempted to correct my past self's grammar by pointing out that "e.g." should be followed by a comma.
Is it possible to self-consistently believe you're poorly calibrated? If you believe you're overconfident, then you would start making less confident predictions, right?
The survey has been taken by me.
The question "How Long Since You Last Posted On LessWrong?" is ambiguous: I don't know whether "posting" includes comments or just top-level posts.
And here we are one year later!
Can you imagine a Hollywood movie in which the hero did that, instead of coming up with some amazing clever way to save the civilians on the ship?
Jack Bauer might do it.
This is really remarkable to read six years later, since, although I don't know you personally, I know your reputation as That Guy Who Has Really Awesome Idyllic Relationships.
It may be theoretically possible to increase my mental capacity in some way such that I can distinguish mental capacity from hallucination. I cannot conceive of how that would be done, but it may be possible.
P.S. I love when people reply to comments that are two and a half years old. It feels like we're talking to the past.
It probably just computes it as a float and then prints the whole float.
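A quick illustration of what I mean (this is a guess at the mechanism, with Python standing in for whatever the site actually runs):

```python
# Guess at the mechanism: a division produces a float, and printing it
# dumps the float's full default representation rather than a rounded value.
correct = 7
total = 9
print(correct / total)  # 0.7777777777777778, i.e. "the whole float", no rounding
```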
(I do recognize the silliness of replying to a three-year-old comment that is itself replying to a six-year-old comment.)
Sort-of related question: How do you compute calibration scores?
And then check if the "rationality improvement" people do better on calibration. (I'm guessing they don't.)
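For concreteness, one standard way to score calibration (my suggestion here, not necessarily what the survey uses) is the Brier score: the mean squared difference between your stated probabilities and the actual outcomes. A minimal sketch:

```python
# Minimal Brier-score sketch (lower is better; always answering 50% scores 0.25).
# predictions are stated probabilities, outcomes are 1 (happened) or 0 (didn't).
def brier_score(predictions, outcomes):
    return sum((p - o) ** 2 for p, o in zip(predictions, outcomes)) / len(predictions)

# Example: three predictions made at 90%, 70%, and 20% confidence.
print(brier_score([0.9, 0.7, 0.2], [1, 1, 0]))  # ~0.047
```

You could then compare the average score of the "rationality improvement" group against everyone else's.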
We send out a feedback survey a few days after the workshop which includes the question "0 to 10, are you glad you came?" The average response to that question is 9.3.
I've seen CFAR talk about this before, and I don't view it as strong evidence that CFAR is valuable.
- If people pay a lot of money for something that's not worth it, we'd expect them to rate it as valuable by the principle of cognitive dissonance.
- If people rate something as valuable, is it because it improved their lives, or because it made them feel good?
For these ratings to be meaningful, I'd like to see something like a control workshop where CFAR asks people to pay $3900 and then teaches them a bunch of techniques that are known to be useless but still sound cool, and then ask them to rate their experience. Obviously this is both unethical and impractical, so I don't suggest actually doing this. Perhaps "derpy self-improvement" workshops can serve as a control?
I answered that I'm cis by default, but I would freak out if I woke up in a woman's body.
I think it's totally reasonable to consider that freaky for reasons other than that you now have to live as a woman. I think the spirit of the question was more, "If you were a woman but had the same personality, would you be okay with that?"
Most people do worse at calibration than they expect, but you can improve with practice. http://predictionbook.com/
Survey complete!
I'm not sure what you mean. I personally have a mental category of "mythical beings that don't exist but some people believe exist", which includes God, the tooth fairy, Santa, unicorns, etc. This girl appears to have the same mental category, even though she believes in God but doesn't believe in the tooth fairy.
Interesting that she seems to mentally classify God and the tooth fairy in the same category.
Are there volunteers to test this program for 4 months and report the results?
I've been doing Starting Strength for about 3 months. My legs are noticeably larger: jeans that used to fit loosely are now tight around my thighs, and I no longer need to wear a belt. My posture has improved as well. I haven't noticed a visible change in my arms, probably because (a) arms are smaller; (b) Starting Strength emphasizes legs and back more; (c) I haven't been as consistent about increasing the weight I'm lifting on the arm exercises.
Where is a good place to buy weightlifting shoes? What stores carry them?
As army1987 said, only a small percentage of experienced rationalists sign up for cryonics, so I wouldn't expect there to be social pressure to sign up. I think a more likely explanation is that experienced rationalists feel less social pressure against signing up for cryonics.
This might just be a personal quirk, but I don't really get hungry—I have no instinct telling me "you need to eat right now." If I don't plan my meals, I end up way undereating.
I currently eat about 2000-2500 calories a day. If I started lifting weights, wouldn't I need more like 4000 calories? That's a pretty big jump.
Questions about nutrition:
Question 1.
Don't try to implement a new diet and a new exercise plan at the same time.
If you are underweight or normal weight, you'll need to eat more when you start exercising.
Don't these two statements contradict each other? If I'm on the light side (which I am) and start exercising without changing my diet first, won't I have a calorie deficit?
Question 2.
I'm vegan and in college, both of which make it harder to get adequate nutrition because the dining halls don't usually have calorie-dense plant-based foods. My understanding is that I need to eat about 4000 calories a day while gaining muscle mass, but if I eat at the dining hall, that basically means eating tons of rice, beans, and pasta. What other options do I have? Right now my plan is to drink a lot of Vega Sport, which I can order from Amazon and store easily.
- I have no idea where you got the idea that Less Wrongers tend to believe in natural rights. This seems to have come out of nowhere. I don't understand how you infer it from the evidence you presented.
- Many LWers believe that FAI is important because an unfriendly AI would likely lead to negative consequences, not because it would violate any natural rights.
- In general, your arguments seem completely disconnected from the beliefs conveyed in the sequences and other prominent writings on LW.
When I read this story, I became emotionally invested in Nate (So8res). I empathized with him. He's the protagonist of the story. Therefore, I have to accept his ideas because otherwise I'd be rejecting his status as protagonist.
According to 80000 Hours, law is still one of the highest-earning careers.
For any given company, you'll be able to get them to up their offer at least once and potentially thrice.
How do you assess when a company isn't going to up their offer anymore? It seems hard to distinguish between "We're saying this is our highest offer but it's actually not" and "This is our highest offer."
The links to the public data given at the end appear to be broken. They give internal links to Less Wrong instead of redirecting to Slate Star Codex. These links should work:
In case anyone's curious, here are the highest-grossing films, adjusted for inflation.
I don't deny that you feel freaked out by this experience, but it isn't all that surprising. When calculating the probability of an unlikely event, you must also consider all the other events that could have happened and that you would have found equally weird.
Of the trillions of other equally-unlikely coincidences that didn't occur, here are a few examples:
- You take a round-trip flight and the two flight numbers concatenated make your social security number.
- As a child, you had a pet cat and dog named Milly and Rex that seemed to behave like a married couple. Later, you meet a married couple named Milly and Rex who like to cosplay as a cat and a dog.
- About a hundred years ago, a polyamorous journalist with an interest in human rationality wrote a newspaper column. The name of the column was an anagram for the journalist's name. This man was also friends with your great-great-grandmother.
For more on this subject, I'd recommend Innumeracy, and especially Chapter 2: Probability and Coincidence.
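To put rough numbers on this (the figures below are invented purely for illustration): even if any particular coincidence is a one-in-ten-million event on a given day, someone who would find any of a thousand different coincidences equally weird has roughly a 30% chance of running into at least one over a decade.

```python
# Invented numbers, purely to illustrate the "many possible coincidences" point.
p_single = 1e-7          # assumed chance of one specific coincidence on a given day
n_coincidences = 1000    # assumed number of distinct coincidences you'd find equally weird
days = 10 * 365          # a decade of paying attention

p_at_least_one = 1 - (1 - p_single) ** (n_coincidences * days)
print(p_at_least_one)    # ~0.31; "impossible" coincidences are routine at this scale
```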
I don't think this is so much "treating children as pets" as it is "treating children as not your peers". When your boss asks you to do something, does she say, "Hey, would you mind helping me out with X? I'd really like to get it done this week"? More than likely, she says "I need you to finish X by Friday."
You only need to give justifications to peers. A person in a higher position of authority can make a request of a subordinate without justification. So it is with officers/privates in the military, managers/employees, and parents/children.
It also costs debt collectors some amount of money to collect debt. Presumably, if a business buys debt from a bank, it's because they think they can collect the debt for a non-trivially lower cost. Otherwise, the bank wouldn't be willing to sell the debt.
I don't know how significant this is.
I hear the relationship between units and workload is pretty tenuous, though, so it might be possible to take a lot of units without the workload increasing proportionally.
The unit-workload correlation is predictable, but not entirely straightforward. In particular:
- IntroSems and other similar freshman/sophomore classes are usually easier than their units would suggest.
- Humanities classes usually have less work per unit than sciences.
- Almost every higher-level math class is 3 units, no matter how much work it is. You can usually expect 5 units worth of work for a 3-unit math class. (This also means that even though a math major takes fewer units than most other majors, it's more work.)
- For people who aren't particularly fast at programming, CS classes can take an extraordinary amount of time (20-30 hours a week for a 5-unit class).
I did notice the "venture-funded" clause. I mention it at the end of my comment. Perhaps I should have specified at the beginning.
I'd be interested to know how many startups get VC funding. Of course, at that point, you have to decide what qualifies as a startup. If a couple of guys make a website in their spare time and never seriously work on it, does that count as a startup?
I think the point of public speaking classes isn't to do networking, but to improve communication skills and therefore skill at networking.
According to this page, three graduates with a Mathematical & Computational Sciences degree (an undergraduate degree similar to CS) work at financial institutions: JP Morgan, Goldman Sachs, and Morgan Stanley. Keep in mind that these are graduates from the class of 2011, so they've only been out of school for 2 years, and the degree program only has about 15 graduates per year, so three alumni make up a sizable fraction.
What I'm trying to say is, it's probably feasible to get a job in finance with only an undergraduate degree from Stanford.
According to this paper, the average startup exits with $10 million, lasts 4 years until exit, and has 1.4 founders. Extrapolating from this gives about $1.5 million annual income per founder. (I think it's actually somewhat less than that because I'm not accounting for, e.g., the fact that investors own a portion of the company.)
(EDIT: This 80,000 Hours post cites $1.4 million.)
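As a back-of-the-envelope check on that extrapolation (my own arithmetic, using only the figures quoted above and ignoring investor ownership, taxes, and so on), spreading the average exit over the company's lifetime and splitting it among the founders lands in the same rough ballpark:

```python
# Back-of-the-envelope check using only the figures quoted above.
avg_exit = 10_000_000   # average exit value in dollars
years_to_exit = 4
founders = 1.4

per_founder_per_year = avg_exit / years_to_exit / founders
print(per_founder_per_year)  # ~1.8 million per founder-year, before investor dilution etc.
```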
I would expect the median monetary return from starting a start-up to be negative.
I think you're right. According to the same source, about 70% of startups that receive funding never make a profit.
I second the advice on startups. Starting a startup has a higher expected monetary return than anything else you can do (as far as I know); and if you do want to start a startup, Stanford is the place to do it.
Stanford sophomore here. I can offer some Stanford-specific advice. In fairness, I've only been here for a year, so you'd probably figure this stuff out pretty soon anyway, but hopefully it'll help.
- 18.8 units per quarter is a lot. I only know a few people who are taking that much. However, I've found that taking 16 or 17 units is pretty feasible (assuming you don't have any other major undertakings such as research or a part-time job).
- This may be obvious to you, but I wish someone had told me this: Go to career fairs. At Stanford, unlike at most universities, you actually have a pretty good chance of getting an internship your first year.
- If you're going to Stanford, you should absolutely take CS106A at the very least. If it goes well, take more CS classes. I'd suggest taking CS106B and CS107 even if you don't end up majoring or minoring in computer science.
Non-Stanford-specific advice:
If you're looking to maximize future earnings via a job, you should probably look at the highest-paying graduate majors, not undergraduate ones. As Peter Hurford said, you can make more money in law or finance than in almost any job you could get with just a Bachelor's degree.
EDIT: Also, Stanford has a chapter of The High Impact Network. You should join us! I'll PM you the President's email.
EDIT 2: Based on personal experience, I'd recommend against becoming an actuary. My dad was an actuary for 14 years, and he hated it. If you like mathy work, you'll probably find actuarial work terribly dull. Of course, you might have a different experience.
Although he had the right idea, I think this author's analysis was rather poor. I don't think he did a good job of modelling the importance of different kinds of typing strains. I like Colemak a lot better.
Is there actually evidence that the traditional method of touch typing, where each finger is assigned a keyboard column and returns to the "Home Row" after striking a key, is at all faster, more efficient, or ergonomically sound than just typing intuitively?
I don't know of any studies (although they probably exist), but (a) the touch typists I know are much faster than the non-touch typists I know, and (b) the world's fastest typists are, as far as I know, all touch typists. Sean Wrona, currently the world's fastest typist, uses touch typing. So did Barbara Blackburn, the previous world's fastest typist.
your chance of beating index fund performance over the long term is tiny.
Isn't it more like 50%?