Posts
Comments
I have made bootleg PDFs in LaTeX of some of my favorite SSC posts, and gotten him to sign printed and bound copies of them. At some point I might make my SSC-to-LaTeX script public...
I feel exactly the same way about the controversial opinions.
I used to work at App Academy, and have written about my experiences here and here.
You will have a lot of LW company in the Bay Area (including me!) There will be another LWer who isn't Ozy in that session too.
I'm happy to talk to you in private if you have any more questions.
Zipfian Academy is a bootcamp for data science, but it's the only non web dev bootcamp I know about.
I work at App Academy, and I'm very happy to discuss App Academy and other coding bootcamps with anyone who wants to talk about them with me.
I have previously Skyped LWers to help them prepare for the interview.
Contact me at bshlegeris@gmail.com if interested (or in comments here).
I don't know for sure that Alicorn and I would continue to disagree about the ethics of white lies if we talked it out thoroughly, but it wouldn't remotely surprise me.
That's a moral disagreement, not a factual disagreement. Alicorn is a deontologist, and you guys probably wouldn't be able to reach consensus on that no matter how hard you tried.
I interpreted that bit as "If you're the kind of person who is able to do this kind of thing, then self-administered CBT is a great idea."
Of the people who graduated more than 6 months ago and looked for jobs (as opposed to going to university or something), all have jobs.
About 5% of people drop out of the program.
It will probably be fine. See here.
You make a good point. But none of the people I've discussed this with who didn't want to do App Academy cited those reasons.
I don't think that they're thinking rationally and just saying things wrong. They're legitimately thinking wrong.
If they're skeptical about whether the place teaches useful skills, the evidence that it actually gets people jobs should remove that worry entirely. Their point about accreditation usually came up after I had cited their jobs statistics. My impression was that they were just reaching for their cached thoughts about dodgy-looking training programs, without considering the evidence that this one worked.
I suspect that most people don't think of making the switch.
Pretty much all of them, yes. I should have phrased that better.
My experience was unusual, but if they hadn't hired me, I expect I would have been hired like my classmates.
I did, but the job I got was being a TA for App Academy, so that might not count in your eyes.
Their figures are telling the truth: I don't know anyone from the previous cohort who was dissatisfied with their experience of job search.
They let you live at the office. I spent less than $10 a day. Good point though.
ETA: Note that I work for App Academy. So take all I say with a grain of salt. I'd love it if one of my classmates would confirm this for me.
Further edit: I retract the claim that this is strong evidence of rationalists winning. So it doesn't count as an example of this.
I just finished App Academy. App Academy is a 9 week intensive course in web development. Almost everyone who goes through the program gets a job, with an average salary above $90k. You only pay if you get a job. As such, it seems to be a fantastic opportunity with very little risk, apart from the nine weeks of your life. (EDIT: They let you live at the office on an air mattress if you want, so living expenses aren't much of an issue.)
There are a bunch of bad reasons to not do the program. To start with, there's the sunk cost fallacy: many people here have philosophy degrees or whatever, and won't get any advantage from that. More importantly, it's a pretty unusual life move at this point to move to San Francisco and learn programming from a non-university institution.
LWers are massively overrepresented at AA. There were 4/40 at my session, and two of those had higher karma than me. I know other LWers from other sessions of AA.
This seems like a decent example of rationalists winning.
EDIT:
My particular point is that for a lot of people, this seems like a really good idea: if there's a 50% chance of it being a scam, you're making $50k doing whatever else you were doing with your life, and the job search takes 3 months, then you come out nearly as well off in expectation even over the first year alone (and well ahead after that).
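Here's a minimal sketch of that expected-value calculation. The 50% scam probability, the $50k baseline, the $90k average salary, and the nine-week course length are the figures from above; the assumption that a scam only costs you those nine weeks is mine.

```python
# Rough expected-value sketch for the "is the bootcamp worth it?" gamble.
# Figures from the comments above; the scam-case assumption is mine.
baseline_salary = 50_000     # what you'd earn doing whatever else
bootcamp_salary = 90_000     # average salary if the program is legit
p_scam          = 0.5        # assumed probability the whole thing is a scam

bootcamp_years = 9 / 52      # nine weeks of the course
search_years   = 3 / 12      # three months of job search afterwards

# Year-one earnings if it's legit: nothing during the course and the search,
# then the new salary for the rest of the year.
year_if_legit = (1 - bootcamp_years - search_years) * bootcamp_salary

# Year-one earnings if it's a scam: assume you just go back to what you were doing.
year_if_scam = (1 - bootcamp_years) * baseline_salary

expected_year_one = (1 - p_scam) * year_if_legit + p_scam * year_if_scam
print(round(expected_year_one))                                   # ~46600, close to the 50000 baseline
print((1 - p_scam) * bootcamp_salary + p_scam * baseline_salary)  # 70000.0 expected ongoing salary
```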
And most of the people I know who disparaged this kind of course didn't do so because they disagreed with my calculation, but because it "didn't offer real accreditation" or whatever. So I feel that this was a good gamble that seemed weird, and that rationalists were therefore more likely to take.
I took the survey.
I'm a computer science student. I did a course on information theory, and I'm currently doing a course on Universal AI (taught by Marcus Hutter himself!). I've found both of these courses far easier as a result of already having a strong intuition for the topics, thanks to seeing them discussed on LW in a qualitative way.
For example, Bayes' theorem, Shannon entropy, Kolmogorov complexity, sequential decision theory, and AIXI are all topics which I feel I've understood far better thanks to reading LW.
LW also inspired me to read a lot of philosophy. AFAICT, I know about as much philosophy as a second or third year philosophy student at my university, and I'm better at thinking about it than most of them are, thanks to the fantastic experience of reading and participating in discussion here. So that counts as useful.
The famous example of a philosopher changing his mind is Frank Jackson, who eventually rejected his own Mary's Room argument. However, that's pretty much the exception that proves the rule.
Not only do I use that, it means that your comment renders as:
Hermione's body should now be at almost exactly five degrees Celsius [≈ recommended for keeping food cool] [≈ recommended for keeping food cool].
to me.
Basically, the busy beaver function tells us the maximum number of steps that a halting Turing machine with a given number of states and symbols can run for. If we know the busy beaver value for, say, 5 states and 5 symbols, then we can tell you whether any 5-state, 5-symbol Turing machine will eventually halt: run it for that many steps, and if it hasn't halted by then, it never will.
However, you can also see why it's impossible in general to compute the busy beaver function: you'd have to know which Turing machines of a given size halt, which is in general impossible.
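Here's a minimal sketch of what that decision procedure would look like, assuming you somehow knew the relevant busy beaver value. The toy machine encoding is mine; only the final check matters.

```python
# Toy Turing machine: rules map (state, symbol) -> (write, move, next_state).
# The encoding here is just for illustration.

def run(rules, max_steps, start_state="A"):
    """Run the machine for at most max_steps steps; return True if it halted."""
    tape, pos, state = {}, 0, start_state
    for _ in range(max_steps):
        key = (state, tape.get(pos, 0))
        if key not in rules:        # no applicable rule: the machine halts
            return True
        write, move, state = rules[key]
        tape[pos] = write
        pos += move
    return False                    # still running after max_steps

def halts(rules, busy_beaver_value):
    # Any halting machine of the relevant size stops within BB(n) steps,
    # so failing to halt within that bound means it never halts.
    return run(rules, max_steps=busy_beaver_value)
```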
Are you aware of the busy beaver function? Read this.
Basically, it's impossible to write down numbers large enough for that to work.
The most upvoted post of all time on LW is Holden's criticism of SI. How many pageviews has that gotten?
It's a kind of utilitarianism. I'm including act utilitarianism and desire utilitarianism and preference utilitarianism and whatever in utilitarianism.
What do you mean by "utilitarianism"? The word has two different common meanings around here: any type of consequentialism, and the specific type of consequentialism that uses "total happiness" as a utility function. This sentence appears to be designed to confuse the two meanings.
Yeah, my mistake. I'd never run across any other versions of consequentialism apart from utilitarianism (except for Clippy, of course). I suppose caring only for yourself might count? But do you seriously think that the majority of those consequentialists aren't utilitarian?
I edited my comment to include a tiny bit more evidence.
This seems like it has the makings of an interesting poll question.
I agree. Let's do that. You're consequentialist, right?
I'd phrase my opinion as "I have terminal value for people not suffering, including people who have done something wrong. I acknowledge that sometimes causing suffering might have instrumental value, such as imprisonment for crimes."
How do you phrase yours? If I were to guess, it would be "I have a terminal value which says that people who have caused suffering should suffer themselves."
I'll make a Discussion post about this after I get your refinement of the question?
Here's an old Eliezer quote on this:
4.5.2: Doesn't that screw up the whole concept of moral responsibility?
Honestly? Well, yeah. Moral responsibility doesn't exist as a physical object. Moral responsibility - the idea that choosing evil causes you to deserve pain - is fundamentally a human idea that we've all adopted for convenience's sake. (23).
The truth is, there is absolutely nothing you can do that will make you deserve pain. Saddam Hussein doesn't deserve so much as a stubbed toe. Pain is never a good thing, no matter who it happens to, even Adolf Hitler. Pain is bad; if it's ultimately meaningful, it's almost certainly as a negative goal. Nothing any human being can do will flip that sign from negative to positive.
So why do we throw people in jail? To discourage crime. Choosing evil doesn't make a person deserve anything wrong, but it makes ver targetable, so that if something bad has to happen to someone, it may as well happen to ver. Adolf Hitler, for example, is so targetable that we could shoot him on the off-chance that it would save someone a stubbed toe. There's never a point where we can morally take pleasure in someone else's pain. But human society doesn't require hatred to function - just law.
Besides which, my mind feels a lot cleaner now that I've totally renounced all hatred.
It's pretty hard to argue about this if our moral intuitions disagree. But at least, you should know that most people on LW disagree with you on this intuition.
EDIT: As ArisKatsaris points out, I don't actually have any source for the "most people on LW disagree with you" bit. I've always thought that not wanting harm to come to anyone, except as an instrumental matter, was a pretty obvious, standard part of utilitarianism, and 62% of LWers are consequentialist, according to the 2012 survey. The post "Policy Debates Should Not Appear One-Sided" is fairly highly regarded, and it espouses a related view, that people don't deserve harm for their stupidity.
Also, what those people would prefer isn't necessarily what our moral system should prefer: humans are petty and short-sighted.
Harry's failing pretty badly to update sufficiently on available evidence. He already knows that there are a lot of aspects of magic that seemed nonsensical to him: McGonagall turning into a cat, the way broomsticks work, etc. Harry's dominant hypothesis about this is that magic was intelligently designed (by the Atlanteans?) and so he should expect magic to work the way neurotypical humans expect it to work, not the way he expects it to work.
I disagree. It seems to me that individual spells and magical items work in the way neurotypical humans expect them to work. However, I don't think that we have any evidence that the process of creating new magic or making magical discoveries works in an intuitive way.
Consider by analogy the Internet. It's not surprising that there exist sites such as Facebook which are really well designed and easy for humans to use, rendering in pretty colors instead of plain HTML. However, these websites were created painstakingly by experts dealing with irritating low-level stuff. It would be surprising if the same website also had a surpassingly brilliant data storage system and an ingenious algorithm for something else.
Yeah, I'm pretty sure I (and most LWers) don't agree with you on that one, at least in the way you phrased it.
The author doesn't want to write sports stories. The girls get comic stories about relationships, but the boys don't get comic stories about Quidditch.
This is a very good point. As a reader, I think those 'silly young boy' conversations would probably get old for me faster than the girl ones.
I'm pretty sure we exactly agree on this. Just out of curiosity, what did you think I meant?
I mostly agree with ShardPhoenix. Actually learning a language is essential to learning the mindset which programming teaches you.
I find it's easiest to learn programming when I have a specific problem I need to solve, and I'm just looking up the concepts I need for that. However, that approach only really works when you've learned a bit of coding already, so you know what specific problems are reasonable to solve.
Examples of things I did when I was learning to program: I wrote programs to do lots of basic math things, such as testing primality and approximating integrals. I wrote a program to insert "literally" into sentences everywhere it made grammatical sense. I used regular expressions to search through a massive text file for the names of people who were doing the same course as me. Having a concrete goal made it easier to learn the syntax and concepts.
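For a flavour of that last exercise, here's a minimal sketch of the regex approach; the file name and the names in the list are made up for illustration.

```python
import re

# Hypothetical names and file name, purely for illustration.
names = ["Alice Example", "Bob Example"]
pattern = re.compile("|".join(re.escape(name) for name in names))

with open("enrolment_dump.txt") as f:
    for line in f:
        if pattern.search(line):       # print any line mentioning one of the names
            print(line.strip())
```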
It depends on how much programming knowledge you currently have. If you want to just learn how to program, I recommend starting with Python, or Haskell if you really like math, or the particular language which lets you do something you want to be able to do (e.g. Java for making simple games, JavaScript for web stuff). Erlang is a cool language, but it's an odd choice for a first language.
In my opinion as a CS student, Python and Haskell are glorious, C is interesting to learn but irritating to use too much, and Java is godawful but sometimes necessary. The other advantage of Python is that it has a massive user base, so finding help for it is easier than for Erlang.
If I were you, I'd read Learn Python the Hard Way or Learn You a Haskell for Great Good! (the second of those is how I started learning Haskell).
I love what this poll reveals about LW readers. Many sympathise with Batman, because of his tech/intellectual angle. The same with Iron Man, but he's a bit less cool. Then two have heard of Superman, and most LWers are male. And most of us don't care.
It would be lovely if you'd point that kind of thing out to the nerdy guy. One problem with being a nerdy guy is that a lack of romantic experience creates a positive feedback loop.
So yeah, it's great to point out what mistakes the guy made. See Epiphany's comment here.
(I have no doubt that you personally would do this, I'm just pointing this out for future reference. You might not remember, but I've actually talked to you about this positive feedback loop over IM before. I complimented you for doing something which would go towards breaking the cycle.)
How many people actually have that?
Wouldn't that be a lack of regulation on emigration, not immigration?
How do you mean?
I wonder why it is that so many people get here from TV Tropes.
Also, you're not the only one to give up on their first LW account.
You're right. My mistake. The standard "that doesn't really apply for real world situations" argument of course applies, with the circular preferences and so on.
I just read some of your comment history, and it looks like I wrote that a bit below your level. No offense intended. I'll leave what I wrote above there for reference of people who don't know.
In case you're wondering why everyone is downvoting you, it's because pretty much everyone here disagrees with you. Most LWers are consequentialist. As one result of this, we don't think there's much of a difference between killing someone and letting them die. See this fantastic essay on the topic.
(Some of the more pedantic people here will pick me up on some inaccuracies in my previous sentence. Read the link above, and you'll get a more nuanced view.)
Do these systems avoid the strategic voting that plagues American elections? No. For example, both Single Transferable Vote and Condorcet voting sometimes provide incentives to rank a candidate with a greater chance of winning higher than a candidate you prefer - that is, the same "vote Gore instead of Nader" dilemma you get in traditional first-past-the-post.
In the case of the Single Transferable Vote, this is simply wrong. If my preferences are Nader > Gore > Bush, I should vote that way. If neither Bush nor Gore has a majority, and Nader has the fewest first preferences, my vote transfers to Gore's total. Voting Gore > Nader > Bush instead does nothing to help Gore (in the case where Nader obviously has few votes), but it does make it less likely that Nader will get elected, which I presumably don't want.
The link describes how if your preferences are A > B > C > D, it is sometimes best to vote C > A > B > D because this will help get A elected, which is different to voting Gore ahead of Nader to get Gore elected.
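To make the transfer mechanics concrete, here's a toy instant-runoff count (the single-winner case of STV). The candidates come from the example above; the vote totals are invented.

```python
from collections import Counter

def instant_runoff(ballots):
    """ballots: list of preference lists, most preferred first."""
    candidates = {c for ballot in ballots for c in ballot}
    while True:
        # Each ballot counts for its highest-ranked candidate still in the race.
        tally = Counter(next(c for c in ballot if c in candidates) for ballot in ballots)
        leader, votes = tally.most_common(1)[0]
        if votes * 2 > len(ballots):                    # majority reached
            return leader
        candidates.remove(min(tally, key=tally.get))    # eliminate the last-placed candidate

ballots = (
    [["Nader", "Gore", "Bush"]] * 8 +
    [["Gore", "Nader", "Bush"]] * 44 +
    [["Bush", "Gore", "Nader"]] * 48
)
print(instant_runoff(ballots))  # Gore: the 8 Nader ballots transfer to Gore once Nader is eliminated
```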
You're confusing a few different issues here.
So your utility decreases when theirs increases. Say that your love or hate for the adult is L1, and your love or hate for the kid is L2. The utility change for each as a result of the adult hitting the kid is U1 for him and U2 for the kid.
If your utility decreases when he hits the kid, then all we've established is that -L2·U2 > L1·U1. You might love them both equally but think that hitting the kid messes him up more than it makes the adult happy; in that case you'd still be unhappy when the guy hits a kid. But we haven't established that you hate the adult.
If the only thing that makes Person X happy is hitting kids, and you somehow find out that his utility has increased, then you can infer from that that he's hit a kid, and that makes you sad. However, this can happen even if you have a positive multiplier on his utility in your own utility function.
So I think your mistake is saying "I hate Person X, because I know they like to hit kids." You might hate them, but the given definitions don't force you to hate them just because they hit kids.
Put another way, you might not be happy if you heard that they had horrible back pain. You can care for someone, but not like what they're doing.
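A quick numerical illustration, with made-up numbers:

```python
# L1, L2: weights on the adult's and the kid's utility in your own utility
# function; U1, U2: how much each is affected when the adult hits the kid.
# The numbers are invented purely to illustrate the inequality above.
L1, L2 = 1.0, 1.0      # you care about both of them, equally and positively
U1, U2 = 5.0, -20.0    # the adult enjoys it a little; the kid is hurt a lot

my_utility_change = L1 * U1 + L2 * U2
print(my_utility_change)   # -15.0: you're unhappy he hit the kid, yet you don't hate him
```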
(Your comment still deserves commendation for presenting an argument in that form.)
What are you trying to do with these definitions? The first three do a reasonable job of providing some explanation of what love means on a slightly simpler level than most people understand it.
However, the "love = good, hate = evil" one can't really be used like that; I don't see what you're trying to say with it.
Also, I'd argue that love has more to do with signalling than your definition seems to imply.
He used the opening paragraph as one of the example strings for something you were testing your regular expressions on.
This might be a really good idea.
I don't mean attractiveness just in the sense of physical looks. I mean the whole thing of my social standing, confidence and perceived coolness.
But thanks for the advice.