Comments

Comment by bbarth on Problematic Problems for TDT · 2012-06-06T19:55:40.178Z · LW · GW

Sorry. It didn't seem rude to me. I'm just frustrated with where I see folks spending their time.

My apologies to anyone who was offended.

Comment by bbarth on Problematic Problems for TDT · 2012-06-05T18:53:19.526Z · LW · GW

Yeah, I might, but here I was just surprised by the down-voting of a contrary opinion. That seems like the sort of thing we ought to foster, not hide.

Comment by bbarth on Problematic Problems for TDT · 2012-06-05T16:49:47.763Z · LW · GW

I'm interested in general purpose optimizers, but I bet that they will be evolved from AIs that were more special purpose to begin with. E.g., IBM Watson moving from Jeopardy!-playing machine to medical diagnostic assistant with a lot of the upfront work being on rapid NLP for the J! "questions".

Also, there's no reason that I've seen here to believe that Newcomb-like problems give insights into how to develop decision theories that allow us to solve real-world problems. It seems like arguing about corner cases. Can anyone establish a practical problem that TDT fails to solve because it fails to solve these other problems?

Beyond this, my belief is that without formalization and programming of these decision frameworks, we learn very little. Asking what xDT does in some abstract situation has, so far, seemed very hand-wavy. Furthermore, it seems to me that the community is drawn to these problems because they are deceptively easy to state and talk about online, but minds are inherently complex, opaque, and hard to reason about.

I'm having a hard time understanding how correctly solving Newcomb-like problems is expected to advance the field of general optimizers. It seems out of proportion to the problems at hand to expect a decision theory to solve problems of this level of sophistication when the current theories don't obviously "solve" questions like "what should we have for lunch?". I get the feeling that supporters of research on these theories assume that, of course, xDT can solve the easy problems, so let's do the hard ones. And I think the evidence for this assumption is very lacking.

Comment by bbarth on Problematic Problems for TDT · 2012-06-05T14:43:15.070Z · LW · GW

Harsh crowd.

It might be nice to be able to see the voting history (not the voters' names, but the number of up and down votes) on a comment. I can't tell if my comments are controversial or just down-voted by two people. Perhaps even just the number of votes would be sufficient (e.g. -2/100 vs. -2/2).

Comment by bbarth on Problematic Problems for TDT · 2012-06-04T12:54:49.779Z · LW · GW

Seems unlikely to work out to me. Humans evolved intelligence without Newcomb-like problems. Since we're the only example of intelligence that we know of, it's clearly possible to develop intelligence without Newcomb-like problems. Furthermore, the general theory seems to be that AIs will start dumber than humans and iteratively improve until they're smarter. Given that, why are we so interested in problems like these (which humans don't universally agree about the answers to)?

I'd rather AIs be able to help us with problems like "what should we do about the economy?" or even "what should I have for dinner?" instead of worrying about what we should do in the face of something godlike.

Additionally, human minds aren't universal (assuming that universal means that they give the "right" solutions to all problems), so why should we expect AIs to be? We certainly shouldn't expect this if we plan on iteratively improving our AIs.

Comment by bbarth on Problematic Problems for TDT · 2012-06-04T02:57:36.166Z · LW · GW

I don't see how your example is apt or salient. My thesis is that Newcomb-like problems are the wrong place to be testing decision theories because they do not represent realistic or relevant problems. We should focus on formalizing and implementing decision theories and throw real-world problems at them rather than testing them on arcane logic puzzles.

Comment by bbarth on Problematic Problems for TDT · 2012-06-03T18:35:06.164Z · LW · GW

Given the week+ delay in this response, it's probably not going to see much traffic, but I'm not convinced "reading" source code is all that helpful. Omega is posited to have nearly god-like abilities in this regard, but since this is a rationalist discussion, we probably have to rule out actual omnipotence.

If Omega intends to simply run the AI on spare hardware it has, then it has to be prepared to validate (in finite time and memory) that the AI hasn't so obfuscated its source as to be unintelligible to rational minds. It's also possible that the source to an AI is rather simple but dependent on a large amount of input data in the form of a vast sea of numbers. I.e., the AI in question could be encoded as an ODE system integrator that's reliant on a massive array of parameters to get from one state to the next. I don't see why we should expect Omega to be better at picking out the relevant, predictive parts of these numbers than we are.
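
To make that last point concrete, here is a minimal, purely hypothetical sketch (the sizes, the dynamics, and the decision rule are all invented for illustration and don't refer to any real system): the integrator is a few readable lines, but everything that actually determines behavior sits in the parameter array.

```python
import numpy as np

# Hypothetical illustration only: a tiny "agent" whose source is trivial to
# read, but whose behavior is entirely determined by a large, opaque
# parameter array rather than by the code itself.
rng = np.random.default_rng(0)
N = 1_000
params = rng.standard_normal((N, N))   # stand-in for the "vast sea of numbers"

def step(state, dt=0.01):
    """One Euler step of dx/dt = tanh(P x); the dynamics live in `params`."""
    return state + dt * np.tanh(params @ state)

def decide(observation, n_steps=500):
    """Integrate forward and read an (arbitrary, illustrative) decision off the final state."""
    state = observation
    for _ in range(n_steps):
        state = step(state)
    return "one-box" if state.mean() > 0 else "two-box"

print(decide(rng.standard_normal(N)))
```

Reading `step` and `decide` tells you essentially nothing about what this system will do; the predictive content is in `params`, and I see no reason to expect Omega to be better at extracting it than we are.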

If the AI can hide things in its code or data, then it can hide functionality that tests to determine if it is being run by Omega or on its own protected hardware. In such a case it can lie to Omega just as easily as Omega can lie to the "simulated" version of the AI.

I think it's time we stopped positing an omniscient Omega in these complications to Newcomb's problem. They're like epicycles on Ptolemaic orbital theory in that they continue a dead-end line of reasoning. It's better to recognize that Newcomb's problem is a red herring. Newcomb's problem doesn't demonstrate problems that we should expect AIs to solve in the real world. It doesn't tease out meaningful differences between decision theories.

That is, what decisions on real-world problems do we expect to be different between two AIs that come to different conclusions about Newcomb-like problems?

Comment by bbarth on Toward "timeless" continuous-time causal models · 2012-01-31T15:41:14.077Z · LW · GW

If LW would update the page template to have the script in the html header, I think we'd be set. Isn't there a site admin for this?

I think this is critical, because rationality in the end needs mathematical support, and MathJax is really the de facto way of putting math in web posts at this point.

Comment by bbarth on Toward "timeless" continuous-time causal models · 2012-01-31T04:24:10.071Z · LW · GW

Wouldn't the right solution be to use MathJax?

Comment by bbarth on The Singularity Institute is hiring an executive assistant near Berkeley · 2012-01-24T04:26:59.907Z · LW · GW

As one of the folks who made this argument in the other job thread, I'm going to disagree with you. Paying an assistant $36k/yr seems low to me for the Bay Area, but $100k/yr is probably out of line. These all seem like assistant-type tasks that draw more modest salaries. Indeed.com puts the average for administrative assistants in SF at $43k/yr, so given that SIAI is a non-profit, $36k/yr is certainly in range. Do SIAI jobs come with health insurance?

Comment by bbarth on POSITION: Design and Write Rationality Curriculum · 2012-01-21T04:35:44.959Z · LW · GW

Seriously? This place already has a rep for being a personality cult. Let's not purposefully reinforce it. ;)

Comment by bbarth on POSITION: Design and Write Rationality Curriculum · 2012-01-20T02:24:57.260Z · LW · GW

Most grad students work half time! We pay ~$45k/yr full time rate (so most students get around $28k/yr) plus insurance and tuition. How much is a cool story worth?

Comment by bbarth on POSITION: Design and Write Rationality Curriculum · 2012-01-19T21:35:57.143Z · LW · GW

No. Same here.

Comment by bbarth on POSITION: Design and Write Rationality Curriculum · 2012-01-19T21:28:05.046Z · LW · GW

It's not a question of SIAI not being good enough for Yvain; it's a question of whether they might both do even better if he pursues something else. It clearly sounds like he's pursuing a different path than joining SIAI now, so he must have done at least some of the math. He's in med school according to his webpage, so I suspect his prospects for helping the cause might be higher if he does well as a doctor and sends every dime he doesn't need (say, his salary as a doctor less $36k/yr) to SIAI. It certainly seems like it might be a waste of his current efforts to drop his medical aspirations and become a curriculum producer at SIAI, but I might be suffering from a form of the Sunk Cost Fallacy here.

Comment by bbarth on POSITION: Design and Write Rationality Curriculum · 2012-01-19T21:12:53.355Z · LW · GW

There's no indication that this is entry-level. Also, if you look further on that page, you'll see that the median full-time employed person over 25 years of age with a Bachelor's degree in the US makes $56k/yr. My read of the position description leans towards college grads given some of the qualifications that they want. If you look at overall median household incomes in the Bay Area, you'll see that they top $74k/yr depending on the county of choice. Given the way that full-time vs. part-time seems to skew the data, I still say they're undershooting for their area of the country.

Don't sell yourself short. Unless you're willing to forgo income now on the chance that the movement needs you now, perhaps it will be better off if you go make a lot more money with your skills, improve the world through some different work, and give as much of your income as you don't want or need to SIAI instead.

Comment by bbarth on POSITION: Design and Write Rationality Curriculum · 2012-01-19T20:30:51.994Z · LW · GW

Here's how typical people read typical job ads (typically), especially ones that are this long: Read the title. Scan for a dollar sign or the words "salary" or "salary range". If both are good enough, scan for the first bulleted list of qualifications. Most ads call these "required qualifications". If the reader meets enough of these, they scan for the second bulleted list of qualifications which is usually called "preferred qualifications". Then, if they meet enough of both of these, they'll go back and start reading in detail to understand the position better before they consider sending in an application or contacting the hiring entity for more information.

I suspect that most people expected your job ad to follow this form, since it almost does. Your sections are labeled, effectively, "needed" and "bonus". It's not until you get to reading the now-bolded details that you find out that not all of the "needed" stuff is required of the applicant and that essentially any one of the needed qualifications will be sufficient. Basically, you don't have any required qualifications, but you do have a general description of the sort of person you're interested in and a list of preferred qualifications. In this regard, the ad is defective because it fails to comport with the usual format.

Non-standard forms get experienced people's hackles up. It often indicates that there's something unprofessional about the organization.

Comment by bbarth on POSITION: Design and Write Rationality Curriculum · 2012-01-19T19:37:39.043Z · LW · GW

Like I said in response to Anna, I'm not offended. I just think you could have done better in setting expectations.

Comment by bbarth on POSITION: Design and Write Rationality Curriculum · 2012-01-19T18:41:23.424Z · LW · GW

I'm sorry if I came across as overly critical. I had a flashback to the job ad that EY promoted in September of '10 which came off in a similar way to me (though, clearly, this one has much more detail), and that probably drove the tone of my posts. I'm certainly not offended.

Now, that being said, I've noticed that there are a number of young idealists in this community, and I think it would be good if we could help them understand what they're getting into. We have a responsibility to help the up-and-coming among us make good decisions. Making it clear that the SIAI "standard" salary is well under market for skilled people, and that applicants should understand the opportunity costs of working for a non-profit for a period of time, should be part of the job description when it comes from a rationalist source to this audience. I presume that EY knows this, and so I attribute its absence to something being fishy. If nothing's fishy, then this discussion let us clear the air.

Comment by bbarth on POSITION: Design and Write Rationality Curriculum · 2012-01-19T17:10:44.266Z · LW · GW

Agreed.

We pay grad students ~$45k for 40 hours a week. Most of them only work half time, so they take home a lot less than that. Of course they also get health insurance. Also, this doesn't appear to be seeking a student.

Edited to add: We pay their tuition, too.

Comment by bbarth on POSITION: Design and Write Rationality Curriculum · 2012-01-19T16:27:18.016Z · LW · GW

It's also possible, for example, that they don't actually want people with work experience doing these things and would settle for folks who are decent at them but have so far only done these activities as a hobby/self-training exercise. If that's the case, then $36k/yr might be OK, and it might be a good opportunity for someone to get these skills on their resume for a later job search in a relevant industry. If that's what they're really looking for, they should state it as such. Otherwise, I remain highly skeptical of the position.

Comment by bbarth on POSITION: Design and Write Rationality Curriculum · 2012-01-19T15:57:53.560Z · LW · GW

I don't know what others think (besides myself and thomblake, clearly), but I think it's between 3 and 4x under market for a person with those skills in the Bay Area. It's between 2 and 3x under market in a place like Austin, TX, depending on experience.

People with experience doing the things listed above make high 5 and low 6-figure salaries plus benefits (medical, 401k with some matching, etc.) in industry jobs, or they are university or secondary school teachers who have reasonable salaries, health care, and other benefits like tenure not available to industry workers.

Comment by bbarth on POSITION: Design and Write Rationality Curriculum · 2012-01-19T15:38:37.181Z · LW · GW

I guess my points were a little too obtuse. People with even a handful of these skills get paid a lot more than you're offering (e.g. school teachers have curriculum design and teaching experience, and generally make much more than $36k/yr). Clearly, stating the salary is "upfront" about the salary, but that wasn't my complaint. My complaint was that it appears that by offering a well below market salary you're looking for a fellow traveler/true believer/movement participant who is so highly dedicated to the cause that they are willing to sacrifice a good chunk of their potential earnings to advance SIAI's goals. If that's the case, then you should state it directly. If it's not the case, then another possibility that comes to mind is that you're hoping to exploit the passion of a young person who feels strongly about the cause but doesn't realize what they're worth on the open market.

My concern is that by not stating anything about this obviously (to me) below market salary, you're leaving your motivations open to serious question. I think it better to lay out some sort of reasoning behind it than to leave it ambiguous.

Comment by bbarth on POSITION: Design and Write Rationality Curriculum · 2012-01-19T14:03:28.918Z · LW · GW

The salary for this position seems off by a factor of between 3 and 4 given the sort of background you want. You're asking for someone with professional-level design skills coupled with the skills of a university professor, really good high school teacher, or video game designer (depending on your perspective). People with these skills get paid a lot more than you're offering. $36k/yr isn't going to get you a bright recent college grad, especially if they have to live in the Bay Area.

It seems to me that you're more interested in hiring folks that are deeply dedicated to the movement so that you can pay them a sub-market salary than hiring the best person you can find. Which is fine, but you should be upfront about it.

Comment by bbarth on Build Small Skills in the Right Order · 2011-04-19T13:47:39.260Z · LW · GW

Agreed. It just sounded like this discussion was trending into hyperbole about the dangers of smoking.

Comment by bbarth on Build Small Skills in the Right Order · 2011-04-19T02:36:50.813Z · LW · GW

I don't think people become addicted by TRYING a cigarette. It takes several if not dozens or more. The physical dependence is acquired and comes by degrees.

Comment by bbarth on LW's first job ad · 2010-09-19T15:38:43.449Z · LW · GW

My apologies, Anna, I didn't know that you worked for SIAI until I was browsing the site this morning for a better hint about this job. I didn't realize that you were likely operating on inside information.

Comment by bbarth on LW's first job ad · 2010-09-18T04:48:08.688Z · LW · GW

That's awfully parochial of you. Also, that puts me firmly in the "this shouldn't have been promoted" camp.

If the rationality community is going to grow, it would behoove it to be more open not less. It's a bit surprising that you would advocate for insular and incestuous hiring practices given the hurdles that this community has to overcome if it wants attract more members.

Comment by bbarth on LW's first job ad · 2010-09-17T22:44:40.164Z · LW · GW

Also, did a rationalist just ask me to take something on faith? ;)

Comment by bbarth on LW's first job ad · 2010-09-17T22:22:23.969Z · LW · GW

Here's the thing: Consider the circumstances of a potential applicant who makes $X and lives in Texas. If applying to this job is going to be worth their time, they need to know that it's worth at least f·$X, where f (greater than unity) is a conversion factor for the cost of living in the Bay Area vs. Texas. If the job only pays, say, 0.5·$X or less, then it's probably not even worth the applicant's time to update their resume. Additionally, if the applicant is already employed, then they'd need some confidence that the application process would be handled confidentially, lest they be exposed to their current employer and put in a difficult situation.
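
As a quick worked version of that arithmetic (all of the specific numbers below are made up for illustration; only the f·$X and 0.5·$X relationships come from the paragraph above):

```python
# Hypothetical figures only.
current_salary = 80_000            # $X: the applicant's current salary in Texas
f = 1.5                            # assumed Bay Area vs. Texas cost-of-living factor (f > 1)
offered = 0.5 * current_salary     # the "0.5 * $X" case from above

break_even = f * current_salary    # the job must pay at least f * $X to be worth the move
print(f"break-even: ${break_even:,.0f}, offer: ${offered:,.0f}")
print("worth applying" if offered >= break_even else "not worth updating the resume")
```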

Nothing in EY's post gives any confidence for either of these factors. He's made no effort to signal that this is on the up and up. There's no way to know whether there's positive utility to be gained by applying. It's a complete and utter crapshoot. The ad says no experience required, but is that their preference? It reads partly as though they're looking for a visionary but partly as though they're looking for a newbie. How is anyone supposed to make out what's wanted from the ad?

Additionally, as best I can tell, most people on this forum don't know EY personally. Saying "trust him, he's a good guy" is like asking you to trust me. I haven't given you any reason to do so, and (especially to a person new to the site) the thread of comments here about whether or not to promote this story might make one think that EY is a bit of a loose cannon.

It's clearly within EY's power to update the job posting with a better description of the job and a salary range. He should also state some anonymized facts about the company in question (order-of-magnitude number of employees, industry, public or private, order-of-magnitude market capitalization, etc.). Finally, he could also state that he is personally in control of yaunshotfirst@gmail.com so that folks know that they're giving their info over to him and not some random entity on the net.

Edited to remove asterisks which apparently put the font into italics....

Comment by bbarth on LW's first job ad · 2010-09-17T14:06:34.625Z · LW · GW

And you would guess that why? The post is almost entirely evidence free. If you know something that can shed some light on the situation, please share it! Anything else is rank speculation.

There's no data in this post that makes it clear that it's at all safe to send my resume (with some personal data on it) to what appears to be a throwaway gmail account. Job descriptions usually come with more data. Even if there's a recruiter in the middle, at least the recruiter has you contact them directly. Here, EY is asking us to contact an anonymous email address. This makes it seem really fishy.

Comment by bbarth on LW's first job ad · 2010-09-17T01:52:27.219Z · LW · GW

I'm not sure why. It suggests that people out of high school could apply if they have participated in math-Olympiad-type events or are a polymath (which for high school grads might cover some high-end calculus and maybe some number theory or analysis). That being said, the job is looking for an ideas person of some sort, which doesn't scream recent high school grad to me. Thus the question.

Let me rephrase. Does it pay more or less than $100k starting?

Comment by bbarth on LW's first job ad · 2010-09-16T13:19:38.052Z · LW · GW

Can you define "pay[s] well"? I.e. does it pay well for someone straight out of college, or does it pay well for someone with 10 years of research experience?

Comment by bbarth on Money: The Unit of Caring · 2009-03-31T14:24:36.681Z · LW · GW

Salaried professionals often cannot do an extra hour of work in order to donate the proceeds to charity. My employer basically prohibits me from moonlighting/consulting/etc. Even many hourly employees can't get extra hours at work as that would be higher-rate overtime that their employer is unwilling to pay. Monetary charitable giving takes away from my current bottom line, but charitable working just eats into my leisure hours.

Since I cannot do extra paid work without fear of consequences at my primary job, my non-work time may be practically worthless in monetary terms. I can only use it to do things that I might otherwise pay someone else to do. If I can do work around the house, then I can save the cost of paying the plumber. Suppose I make $100/hr (nominally) and the plumber charges $50/hr. Assuming we can do the same job in the same time, I haven't lost $50/hr by doing it myself instead of paying the plumber; I've simply lost the utility of those hours, which I may not rate highly if I'd otherwise have lain on the couch watching Simpsons reruns.

Some units of caring cost more than others. I can donate $100 to charity, or I can do 100 hours of work for that charity using hours that only cost me $1/hr (presuming that I rate the utility of those hours otherwise spent low enough).
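
A quick worked version of those comparisons, using the figures above (the two-hour job length is an assumption added purely for illustration):

```python
nominal_wage = 100    # $/hr: my nominal salaried rate (I can't actually sell extra hours)
plumber_rate = 50     # $/hr: what the plumber charges
leisure_value = 1     # $/hr: what I value the foregone couch time at
hours = 2             # assumed length of the plumbing job

hire_plumber = hours * plumber_rate    # $100 out of pocket
do_it_myself = hours * leisure_value   # $2 of foregone leisure utility
naive_loss   = hours * nominal_wage    # $200 of "lost wages" -- illusory, since those hours can't be sold

# Donating money vs. donating time, per the paragraph above:
cash_donation  = 100                   # $100 to charity
volunteer_cost = 100 * leisure_value   # 100 hours of work at ~$1/hr of foregone leisure
print(hire_plumber, do_it_myself, naive_loss, cash_donation, volunteer_cost)
```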

Clearly, people shouldn't be derided for donating excess money (the "overemployed"?) to charity rather than their time, but I think the calculus is a little more complex than what you describe in your post. For those living near their means (neither under- nor overemployed), there are additional economic factors that make donation of time heavily favored over donation of money. That a culture valuing time donation has arisen to justify/rationalize such behavior shouldn't be terribly surprising.