Posts

LINK: In favor of niceness, community, and civilisation 2014-02-24T04:13:10.620Z
Meetup : Inaugural Canberra meetup 2014-01-30T17:03:10.593Z
Wes Weimer from Udacity presents his list of things you should learn 2012-06-23T07:04:43.203Z
A simple web app to create aversion to unhealthy food 2012-05-05T06:07:27.916Z
Does anyone know any kid geniuses? 2012-03-28T12:03:45.155Z
Anyone have any questions for David Chalmers? 2012-03-10T21:57:53.248Z
My summary of Eliezer's position on free will 2012-02-28T05:53:10.432Z
Hard philosophy problems to test people's intelligence? 2012-02-15T04:57:39.960Z
The utility of information should almost never be negative 2012-01-29T05:43:30.930Z
ICONN 2012 nanotechnology conference in Perth 2012-01-17T02:34:47.334Z
Procedural knowledge gap: public key encryption 2012-01-12T07:35:58.669Z
A variant on the trolley problem and babies as unit of currency 2012-01-08T08:13:01.850Z
An argument that animals don't really suffer 2012-01-07T09:07:53.775Z
I'd like to talk to some LGBT LWers. 2011-12-30T10:39:10.249Z
Is quantum physics (easily?) computable? 2011-10-18T08:44:12.732Z
The effects of religion (draft) 2011-09-29T01:11:21.517Z
Your favorite pdfs? 2011-09-18T11:18:38.184Z
For fiction: How could alien minds differ from human minds? 2011-08-21T10:37:50.151Z

Comments

Comment by Solvent on Slate Star Codex: alternative comment threads on LessWrong? · 2015-03-28T02:47:54.452Z · LW · GW

I have made bootleg PDFs in LaTeX of some of my favorite SSC posts, and gotten him to sign printed out and bound versions of them. At some point I might make my SSC-to-LaTeX script public...

Comment by Solvent on State of the Solstice 2014 · 2014-12-25T06:32:04.633Z · LW · GW

I feel exactly the same way about the controversial opinions.

Comment by Solvent on Open thread, Oct. 6 - Oct. 12, 2014 · 2014-10-28T10:40:03.863Z · LW · GW

I used to work at App Academy, and have written about my experiences here and here.

You will have a lot of LW company in the Bay Area (including me!). There will be another LWer who isn't Ozy in that session too.

I'm happy to talk to you in private if you have any more questions.

Comment by Solvent on [link] Guide on How to Learn Programming · 2014-04-23T00:52:33.287Z · LW · GW

Zipfian Academy is a data science bootcamp; it's the only non-web-dev bootcamp I know about.

Comment by Solvent on Open Thread April 8 - April 14 2014 · 2014-04-09T21:45:18.405Z · LW · GW

I work at App Academy, and I'm very happy to discuss App Academy and other coding bootcamps with anyone who wants to talk about them with me.

I have previously Skyped LWers to help them prepare for the interview.

Contact me at bshlegeris@gmail.com if interested (or in comments here).

Comment by Solvent on Self-Congratulatory Rationalism · 2014-03-03T01:32:07.046Z · LW · GW

I don't know for sure Alicorn and I would continue to disagree about the ethics of white lies if we talked it out thoroughly, but it wouldn't remotely surprise me.

That's a moral disagreement, not a factual disagreement. Alicorn is a deontologist, and you guys probably wouldn't be able to reach consensus on that no matter how hard you tried.

Comment by Solvent on Methods for treating depression · 2014-02-17T20:23:45.881Z · LW · GW

I interpreted that bit as "If you're the kind of person who is able to do this kind of thing, then self-administered CBT is a great idea."

Comment by Solvent on Rationalists Are Less Credulous But Better At Taking Ideas Seriously · 2014-02-12T22:44:35.701Z · LW · GW

Of the people who graduated more than 6 months ago and looked for jobs (as opposed to going to university or something), all have jobs.

About 5% of people drop out of the program.

Comment by Solvent on Rationalists Are Less Credulous But Better At Taking Ideas Seriously · 2014-02-01T21:54:31.783Z · LW · GW

It will probably be fine. See here.

Comment by Solvent on Rationalists Are Less Credulous But Better At Taking Ideas Seriously · 2014-01-30T01:23:19.537Z · LW · GW

You make a good point. But none of the people I've discussed this with who didn't want to do App Academy cited those reasons.

Comment by Solvent on Rationalists Are Less Credulous But Better At Taking Ideas Seriously · 2014-01-30T01:21:29.212Z · LW · GW

I don't think that they're thinking rationally and just saying things wrong. They're legitimately thinking wrong.

If they're skeptical about whether the place teaches useful skills, the evidence that it actually gets people jobs should remove that worry entirely. Their point about accreditation usually came up after I had cited the jobs statistics. My impression was that they were just reaching for their cached thoughts about dodgy-looking training programs, without considering the evidence that this one worked.

Comment by Solvent on Rationalists Are Less Credulous But Better At Taking Ideas Seriously · 2014-01-29T23:20:43.553Z · LW · GW

I suspect that most people don't think of making the switch.

Comment by Solvent on Rationalists Are Less Credulous But Better At Taking Ideas Seriously · 2014-01-29T23:19:53.343Z · LW · GW

Pretty much all of them, yes. I should have phrased that better.

My experience was unusual, but if they hadn't hired me, I expect I would have been hired like my classmates.

Comment by Solvent on Rationalists Are Less Credulous But Better At Taking Ideas Seriously · 2014-01-29T22:42:28.958Z · LW · GW

I did, but the job I got was being a TA for App Academy, so that might not count in your eyes.

Their figures are telling the truth: I don't know anyone from the previous cohort who was dissatisfied with their experience of job search.

Comment by Solvent on Rationalists Are Less Credulous But Better At Taking Ideas Seriously · 2014-01-29T22:42:04.671Z · LW · GW

They let you live at the office. I spent less than $10 a day. Good point though.

Comment by Solvent on Rationalists Are Less Credulous But Better At Taking Ideas Seriously · 2014-01-29T20:05:39.684Z · LW · GW

ETA: Note that I work for App Academy. So take all I say with a grain of salt. I'd love it if one of my classmates would confirm this for me.

Further edit: I retract the claim that this is strong evidence of rationalists winning. So it doesn't count as an example of this.

I just finished App Academy. App Academy is a 9 week intensive course in web development. Almost everyone who goes through the program gets a job, with an average salary above $90k. You only pay if you get a job. As such, it seems to be a fantastic opportunity with very little risk, apart from the nine weeks of your life. (EDIT: They let you live at the office on an air mattress if you want, so living expenses aren't much of an issue.)

There are a bunch of bad reasons to not do the program. To start with, there's the sunk cost fallacy: many people here have philosophy degrees or whatever, and won't get any advantage from that. More importantly, it's a pretty unusual life move at this point to move to San Francisco and learn programming from a non-university institution.

LWers are massively overrepresented at AA. There were 4/40 at my session, and two of those had higher karma than me. I know other LWers from other sessions of AA.

This seems like a decent example of rationalists winning.

EDIT:

My particular point is that for a lot of people, this seems like a really good idea: even if there's a 50% chance of it being a scam, you're making $50k doing whatever else you were doing with your life, and a failed attempt costs you a 3-month job search, you still come out roughly even in expectation over the course of a single year.
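
A rough back-of-the-envelope version of that calculation (the salary, probability, and timing numbers are the illustrative ones from this comment plus the ~$90k average mentioned above, not anyone's official figures):

```python
# Back-of-the-envelope expected value over one year (all numbers illustrative).
current_salary = 50_000       # what you'd earn staying put
bootcamp_salary = 90_000      # rough average salary after the program
p_scam = 0.5                  # hypothetical probability the whole thing is worthless
course_months = 2             # the nine-week course, roughly
search_months = 3             # time to find a job again if it doesn't pan out

# Option 1: keep your current job for the year.
stay = current_salary

# Option 2a: the program works; two unpaid months, then the higher salary.
works = bootcamp_salary * (12 - course_months) / 12

# Option 2b: it's a scam; two months wasted plus a job search, then back to the old salary.
scam = current_salary * (12 - course_months - search_months) / 12

go = p_scam * scam + (1 - p_scam) * works

print(f"stay: ${stay:,.0f}, go (expected): ${go:,.0f}")
# stay: $50,000, go (expected): $52,083 -- roughly break-even within the first year,
# and clearly ahead every year after that in the non-scam branch.
```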

And most of the people I know who disparaged this kind of course didn't do so because they disagreed with my calculation, but because it "didn't offer real accreditation" or whatever. So I feel that this was a good gamble which seemed weird, and which rationalists were therefore more likely to take.

Comment by Solvent on 2013 Less Wrong Census/Survey · 2013-11-26T05:47:45.393Z · LW · GW

I took the survey.

Comment by Solvent on College courses versus LessWrong · 2013-09-16T05:06:32.167Z · LW · GW

I'm a computer science student. I did a course on information theory, and I'm currently doing a course on Universal AI (taught by Marcus Hutter himself!). I've found both of these courses far easier as a result of already having a strong intuition for the topics, thanks to seeing them discussed on LW in a qualitative way.

For example, Bayes' theorem, Shannon entropy, Kolmogorov complexity, sequential decision theory, and AIXI are all topics which I feel I've understood far better thanks to reading LW.

LW also inspired me to read a lot of philosophy. AFAICT, I know about as much philosophy as a second or third year philosophy student at my university, and I'm better at thinking about it than most of them are, thanks to the fantastic experience of reading and participating in discussion here. So that counts as useful.

Comment by Solvent on P/S/A - Sam Harris offering money for a little good philosophy · 2013-09-02T06:11:20.385Z · LW · GW

The famous example of a philosopher changing his mind is Frank Jackson with his Mary's Room argument. However, that's pretty much the exception which proves the rule.

Comment by Solvent on Harry Potter and the Methods of Rationality discussion thread, part 20, chapter 90 · 2013-07-02T10:49:03.382Z · LW · GW

Not only do I use that, it means that your comment renders as:

Hermione's body should now be at almost exactly five degrees Celsius [≈ recommended for keeping food cool] [≈ recommended for keeping food cool].

to me.

Comment by Solvent on Second-Order Logic: The Controversy · 2013-01-06T00:48:25.507Z · LW · GW

Basically, the busy beaver function tells us the maximum number of steps that a halting Turing machine with a given number of states and symbols can run for. If we know the busy beaver value for, say, 5 states and 5 symbols, then we can tell whether any given 5-state, 5-symbol Turing machine will eventually halt.

However, you can see why it's impossible in general to compute the busy beaver function: you'd have to know which Turing machines of a given size halt, which is in general undecidable.
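
As a concrete sketch of that idea (the machine encoding here is my own, chosen purely for illustration; the example machine is the well-known 2-state, 2-symbol busy beaver champion):

```python
def simulate(transitions, max_steps):
    """transitions maps (state, symbol) -> (symbol_to_write, head_move, next_state).
    'H' is the halt state. Returns the number of steps if the machine halts
    within max_steps, otherwise None."""
    tape, head, state = {}, 0, "A"
    for step in range(1, max_steps + 1):
        write, move, state = transitions[(state, tape.get(head, 0))]
        tape[head] = write
        head += move
        if state == "H":
            return step
    return None


def halts(transitions, bb_value):
    # Any halting machine of this size stops within bb_value steps,
    # so "still running after bb_value steps" means it never halts.
    return simulate(transitions, bb_value) is not None


# The 2-state, 2-symbol busy beaver champion halts after exactly 6 steps:
champion = {("A", 0): (1, 1, "B"), ("A", 1): (1, -1, "B"),
            ("B", 0): (1, -1, "A"), ("B", 1): (1, 1, "H")}
print(simulate(champion, 100))   # -> 6
```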

Comment by Solvent on Second-Order Logic: The Controversy · 2013-01-05T09:34:27.586Z · LW · GW

Are you aware of the busy beaver function? Read this.

Basically, it's impossible to write down numbers large enough for that to work.

Comment by Solvent on 2012: Year in Review · 2013-01-03T09:49:23.543Z · LW · GW

The most upvoted post of all time on LW is Holden's criticism of SI. How many pageviews has that gotten?

Comment by Solvent on Morality Isn't Logical · 2012-12-29T00:02:18.261Z · LW · GW

It's a kind of utilitarianism. I'm including act utilitarianism, desire utilitarianism, preference utilitarianism, and so on under "utilitarianism".

Comment by Solvent on Morality Isn't Logical · 2012-12-28T23:47:36.500Z · LW · GW

What do you mean by "utilitarianism"? The word has two different common meanings around here: any type of consequentialism, and the specific type of consequentialism that uses "total happiness" as a utility function. This sentence appears to be designed to confuse the two meanings.

Yeah, my mistake. I'd never run across any other versions of consequentialism apart from utilitarianism (except for Clippy, of course). I suppose caring only for yourself might count? But do you seriously think that the majority of those consequentialists aren't utilitarian?

Comment by Solvent on Morality Isn't Logical · 2012-12-28T04:27:58.826Z · LW · GW

I edited my comment to include a tiny bit more evidence.

Comment by Solvent on Morality Isn't Logical · 2012-12-28T04:27:19.325Z · LW · GW

This seems like it has makings of an interesting poll question.

I agree. Let's do that. You're consequentialist, right?

I'd phrase my opinion as "I have terminal value for people not suffering, including people who have done something wrong. I acknowledge that sometimes causing suffering might have instrumental value, such as imprisonment for crimes."

How do you phrase yours? If I were to guess, it would be "I have a terminal value which says that people who have caused suffering should suffer themselves."

I'll make a Discussion post about this after I get your refinement of the question?

Comment by Solvent on Morality Isn't Logical · 2012-12-27T22:38:43.957Z · LW · GW

Here's an old Eliezer quote on this:

4.5.2: Doesn't that screw up the whole concept of moral responsibility?

Honestly? Well, yeah. Moral responsibility doesn't exist as a physical object. Moral responsibility - the idea that choosing evil causes you to deserve pain - is fundamentally a human idea that we've all adopted for convenience's sake. (23).

The truth is, there is absolutely nothing you can do that will make you deserve pain. Saddam Hussein doesn't deserve so much as a stubbed toe. Pain is never a good thing, no matter who it happens to, even Adolf Hitler. Pain is bad; if it's ultimately meaningful, it's almost certainly as a negative goal. Nothing any human being can do will flip that sign from negative to positive.

So why do we throw people in jail? To discourage crime. Choosing evil doesn't make a person deserve anything wrong, but it makes ver targetable, so that if something bad has to happen to someone, it may as well happen to ver. Adolf Hitler, for example, is so targetable that we could shoot him on the off-chance that it would save someone a stubbed toe. There's never a point where we can morally take pleasure in someone else's pain. But human society doesn't require hatred to function - just law.

Besides which, my mind feels a lot cleaner now that I've totally renounced all hatred.

It's pretty hard to argue about this if our moral intuitions disagree. But at least, you should know that most people on LW disagree with you on this intuition.

EDIT: As ArisKatsaris points out, I don't actually have any source for the "most people on LW disagree with you" bit. I've always thought that not wanting harm to come to anyone (except instrumentally) was a pretty obvious, standard part of utilitarianism, and 62% of LWers are consequentialist, according to the 2012 survey. The post "Policy Debates Should Not Appear One-Sided" is fairly highly regarded, and it espouses a related view: that people don't deserve harm for their stupidity.

Also, what those people would prefer isn't necessarily what our moral system should prefer: humans are petty and short-sighted.

Comment by Solvent on Harry Potter and the Methods of Rationality discussion thread, part 18, chapter 87 · 2012-12-27T12:10:56.078Z · LW · GW

Harry's failing pretty badly to update sufficiently on available evidence. He already knows that there are a lot of aspects of magic that seemed nonsensical to him: McGonagall turning into a cat, the way broomsticks work, etc. Harry's dominant hypothesis about this is that magic was intelligently designed (by the Atlanteans?) and so he should expect magic to work the way neurotypical humans expect it to work, not the way he expects it to work.

I disagree. It seems to me that individual spells and magical items work in the way neurotypical humans expect them to work. However, I don't think that we have any evidence that the process of creating new magic or making magical discoveries works in an intuitive way.

Consider by analogy the Internet. It's not surprising that there exist sites such as Facebook which are really well designed and easy for humans to use, rendering in pretty colors instead of plain HTML. However, these websites were created painstakingly by experts dealing with irritating low-level stuff. It would be surprising if the same website also had a surpassingly brilliant data storage system and an ingenious algorithm for something else.

Comment by Solvent on Morality Isn't Logical · 2012-12-27T12:02:20.765Z · LW · GW

Yeah, I'm pretty sure I (and most LWers) don't agree with you on that one, at least in the way you phrased it.

Comment by Solvent on Harry Potter and the Methods of Rationality discussion thread, part 18, chapter 87 · 2012-12-23T04:57:35.421Z · LW · GW

The author doesn't want to write sports stories. The girls get comic stories about relationships, but the boys don't get comic stories about Quidditch.

This is a very good point. As a reader, I think those 'silly young boy' conversations would probably get old for me faster than the girl ones.

Comment by Solvent on Programming Thread · 2012-12-10T00:40:05.778Z · LW · GW

I'm pretty sure we exactly agree on this. Just out of curiosity, what did you think I meant?

Comment by Solvent on Programming Thread · 2012-12-07T02:35:41.979Z · LW · GW

I mostly agree with ShardPhoenix. Actually learning a language is essential to learning the mindset which programming teaches you.

I find it's easiest to learn programming when I have a specific problem I need to solve, and I'm just looking up the concepts I need for that. However, that approach only really works when you've learned a bit of coding already, so you know what specific problems are reasonable to solve.

Examples of things I did when I was learning to program: I wrote programs to do lots of basic math things, such as testing primality and approximating integrals. I wrote a program to insert "literally" into sentences everywhere it made grammatical sense. I used regular expressions to search through a massive text file for the names of people who were doing the same course as me. Having a concrete goal made it easier to learn the syntax and the concepts.
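
For a sense of what I mean, a couple of those exercises might look roughly like this (the names and course code in the toy roster are made up):

```python
import re

def is_prime(n):
    """Trial-division primality test -- the kind of small, concrete goal
    that makes the syntax worth learning."""
    if n < 2:
        return False
    for d in range(2, int(n ** 0.5) + 1):
        if n % d == 0:
            return False
    return True

# And a toy version of the text-search exercise: pull names out of a course list.
roster = "Alice Example (COMP1100)\nBob Sample (MATH1115)\nCarol Test (COMP1100)"
classmates = re.findall(r"^(\w+ \w+) \(COMP1100\)$", roster, flags=re.MULTILINE)

print([n for n in range(20) if is_prime(n)])   # [2, 3, 5, 7, 11, 13, 17, 19]
print(classmates)                              # ['Alice Example', 'Carol Test']
```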

Comment by Solvent on Programming Thread · 2012-12-07T02:27:40.140Z · LW · GW

It depends on how much programming knowledge you currently have. If you want to just learn how to program, I recommend starting with Python, or Haskell if you really like math, or whichever particular language lets you do something you want to be able to do (e.g. Java for making simple games, JavaScript for web stuff). Erlang is a cool language, but it's an odd choice for a first language.

In my opinion as a CS student, Python and Haskell are glorious, C is interesting to learn but irritating to use too much, and Java is godawful but sometimes necessary. The other advantage of Python is that it has a massive user base, so finding help for it is easier than for Erlang.

If I were you, I'd read Learn Python the Hard Way or Learn You a Haskell for Great Good! (the latter is how I started learning Haskell).

Comment by Solvent on 2012 Less Wrong Census/Survey · 2012-11-04T08:35:48.062Z · LW · GW

Done.

Comment by Solvent on Less Wrong Polls in Comments · 2012-09-20T11:13:46.685Z · LW · GW

I love what this poll reveals about LW readers. Many sympathise with Batman, because of his tech/intellectual angle. The same with Iron Man, but he's a bit less cool. Then two have heard of Superman, and most LWers are male. And most of us don't care.

Comment by Solvent on Call for Anonymous Narratives by LW Women and Question Proposals (AMA) · 2012-09-20T11:10:57.968Z · LW · GW

It would be lovely if you'd point that kind of thing out to the nerdy guy. One problem with being a nerdy guy is that a lack of romantic experience creates a positive feedback loop.

So yeah, it's great to point out what mistakes the guy made. See Epiphany's comment here.

(I have no doubt that you personally would do this, I'm just pointing this out for future reference. You might not remember, but I've actually talked to you about this positive feedback loop over IM before. I complimented you for doing something which would go towards breaking the cycle.)

Comment by Solvent on [META] Karma for last 30 days? · 2012-08-30T15:35:53.024Z · LW · GW

How many people actually have that?

Comment by Solvent on Is Politics the Mindkiller? An Inconclusive Test · 2012-07-29T06:25:11.242Z · LW · GW

Wouldn't that be a lack of regulation on emigration, not immigration?

Comment by Solvent on Welcome to Less Wrong! (July 2012) · 2012-07-29T01:18:07.144Z · LW · GW

How do you mean?

Comment by Solvent on Welcome to Less Wrong! (July 2012) · 2012-07-28T01:20:30.725Z · LW · GW

I wonder why it is that so many people get here from TV Tropes.

Also, you're not the only one to give up on their first LW account.

Comment by Solvent on Imperfect Voting Systems · 2012-07-23T10:39:10.746Z · LW · GW

You're right. My mistake. The standard "that doesn't really apply to real-world situations" argument of course applies, with the circular preferences and so on.

Comment by Solvent on Mass-murdering neuroscience Ph.D. student · 2012-07-23T10:36:12.858Z · LW · GW

I just read some of your comment history, and it looks like I wrote that a bit below your level. No offense intended. I'll leave what I wrote above there for reference of people who don't know.

Comment by Solvent on Mass-murdering neuroscience Ph.D. student · 2012-07-23T10:34:41.925Z · LW · GW

In case you're wondering why everyone is downvoting you, it's because pretty much everyone here disagrees with you. Most LWers are consequentialist. As one result of this, we don't think there's much of a difference between killing someone and letting them die. See this fantastic essay on the topic.

(Some of the more pedantic people here will pick me up on some inaccuracies in my previous sentence. Read the link above, and you'll get a more nuanced view.)

Comment by Solvent on Imperfect Voting Systems · 2012-07-23T07:11:16.604Z · LW · GW

Do these systems avoid the strategic voting that plagues American elections? No. For example, both Single Transferable Vote and Condorcet voting sometimes provide incentives to rank a candidate with a greater chance of winning higher than a candidate you prefer - that is, the same "vote Gore instead of Nader" dilemma you get in traditional first-past-the-post.

In the case of the Single Transferable Vote, this is simply wrong. If my preferences are Nader > Gore > Bush, I should vote that way. If neither Bush nor Gore has a majority, and Nader has the fewest first preferences, my vote contributes towards Gore's total. Voting Gore > Nader > Bush instead does nothing extra to help Gore (given that Nader obviously has a small number of votes), but it does make it less likely that Nader will get elected, which I presumably don't want.

The link describes how if your preferences are A > B > C > D, it is sometimes best to vote C > A > B > D because this will help get A elected, which is different to voting Gore ahead of Nader to get Gore elected.
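
Here's a minimal instant-runoff sketch (single-winner STV; the ballots and vote counts are entirely made up) showing that the honest Nader > Gore > Bush ballots end up counting for Gore once Nader is eliminated:

```python
from collections import Counter

def instant_runoff(ballots):
    """Each ballot is a preference-ordered list of candidates. Eliminate the
    candidate with the fewest first preferences until someone holds a majority;
    eliminated candidates' ballots transfer to their next surviving preference."""
    remaining = {c for ballot in ballots for c in ballot}
    while True:
        tally = Counter(next(c for c in ballot if c in remaining) for ballot in ballots)
        leader, votes = tally.most_common(1)[0]
        if votes * 2 > len(ballots):
            return leader
        remaining.remove(min(tally, key=tally.get))

# Made-up ballots: the honest Nader > Gore > Bush votes still flow to Gore
# after Nader is knocked out in the first round.
ballots = ([["Nader", "Gore", "Bush"]] * 5 +
           [["Gore", "Bush", "Nader"]] * 46 +
           [["Bush", "Gore", "Nader"]] * 49)
print(instant_runoff(ballots))   # Gore
```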

Comment by Solvent on Morality open thread · 2012-07-09T03:03:28.022Z · LW · GW

You're confusing a few different issues here.

So your utility decreases when theirs increases. Say that your love or hate for the adult is L1, and your love or hate for the kid is L2. Utility change for each as a result of the adult hitting the kid is U1 for him and U2 for the kid.

Your total utility change is L1U1 + L2U2, so if your utility decreases when he hits the kid, all we've established is that -L2U2 > L1U1. You may love them both equally but think that hitting the kid messes the kid up more than it makes the adult happy; you'd still be unhappy when the guy hits a kid. But we haven't established that you hate the adult.

If the only thing that makes Person X happy is hitting kids, and you somehow find out directly that his utility has increased, then you can infer that he's hit a kid, and that makes you sad. However, this can happen even if his utility enters your utility function with a positive multiplier.

So I think your mistake is saying "I hate Person X, because I know they like to hit kids." You might hate them, but the given definitions don't force you to hate them just because they hit kids.

Put another way, you might not be happy if you heard that they had horrible back pain. You can care for someone, but not like what they're doing.

(Your comment still deserves commendation for presenting an argument in that form.)

Comment by Solvent on Morality open thread · 2012-07-09T02:53:32.023Z · LW · GW

What are you trying to do with these definitions? The first three do a reasonable job of providing some explanation of what love means on a slightly simpler level than most people understand it.

However, the "love=good, hate=evil" can't really be used like that. I don't really see what you're trying to say with that.

Also, I'd argue that love has more to do with signalling than your definition seems to imply.

Comment by Solvent on Wes Weimer from Udacity presents his list of things you should learn · 2012-06-23T14:41:16.730Z · LW · GW

He used the opening paragraph as one of the example strings for something you were testing your regular expressions on.

Comment by Solvent on Help please! · 2012-06-06T23:05:44.909Z · LW · GW

This might be a really good idea.

Comment by Solvent on Have you changed your mind lately? On what? · 2012-06-05T09:19:54.772Z · LW · GW

I don't mean attractiveness just in the sense of physical looks. I mean the whole package of social standing, confidence, and perceived coolness.

But thanks for the advice.