Open Thread: March 2010, part 2
post by RobinZ · 2010-03-11T17:25:40.695Z · LW · GW · Legacy · 342 comments
The Open Thread posted at the beginning of the month has exceeded 500 comments – new Open Thread posts may be made here.
This thread is for the discussion of Less Wrong topics that have not appeared in recent posts. If a discussion gets unwieldy, celebrate by turning it into a top-level post.
342 comments
Comments sorted by top scores.
comment by knb · 2010-03-11T22:59:25.074Z · LW(p) · GW(p)
Anybody else think the modern university system is grossly inefficient? Most of the people I knew in undergrad spent most of their time drinking to excess and skipping classes. In addition, barely half of undergraduates get their B.A. within 6 years of starting. The whole system is hugely expensive in both direct subsidies and opportunity costs.
I think that society would benefit from switching to computer based learning systems for most kinds of classes. For example, I took two economics courses that incorporated CBL elements, and I found them vastly more engrossing and much more time-efficient than the lecture sections. Instead of applying to selective universities (which gain status by denying more students entry than others) people could get most of their prerequisites out of the way in a few months with standard CBL programs administered at a marginal cost of $0.
Replies from: wedrifid, SilasBarta, mattnewport, SilasBarta, Douglas_Knight, Strange7, Kevin, RobinZ
↑ comment by SilasBarta · 2010-03-11T23:27:17.855Z · LW(p) · GW(p)
Anybody else think the modern university system is grossly inefficient?
Yep. They mainly persist as a way to sort workers: those who can get through, and with a degree in X from university Y, are good enough to be trusted with job Z (even though, as is usually the case, nothing in X actually pertains to Z -- you're just signaling your general qualifications for being taken on to do job Z).
Having the degree is a good proxy for certain skills like intelligence, diligence, etc. Why not test for intelligence directly? Because in the US and most industrialized countries, it's illegal, so they have to test you by proxy -- let the university give you an IQ test as a standard for admission, but not call it that.
Shifting to a system that actually makes sense is going to require overcoming a lot of inertia.
Replies from: thomblake
↑ comment by thomblake · 2010-03-12T20:31:12.961Z · LW(p) · GW(p)
I agree with this analysis to some extent. I'm not sure I'm willing to grant that the primary purpose of universities is a way to sort workers, but that is a major thing they're used for, and I tend to argue at length that they should get out of that business. I argue as much as possible against student evaluation, grading, and granting degrees. One of the first arguments that pops up tends to be, "But how will people know who to hire / let into grad school?"
But I don't think it's the University's job to answer that question.
↑ comment by mattnewport · 2010-03-11T23:05:53.385Z · LW(p) · GW(p)
Clearly universities are grossly inefficient at teaching, but as Robin Hanson would say, School isn't about Learning.
The education system in general in most Western countries is grossly inefficient but that is largely because it is not structured in a way that rewards educating efficiently, and that is exactly how most of the participants want it.
↑ comment by SilasBarta · 2010-03-12T19:20:42.330Z · LW(p) · GW(p)
Oh, and to add to my earlier comment, another major problem with the system, which extends through most industrialized countries, is how difficult it is to dismiss employees. This makes it much harder to take a chance on anyone, significantly restricting the set of people who have a chance at any job, and thus requiring much more proof in advance.
And what frustrates me the most is that most such regulations/legal environments are called "pro-worker" and the debate on them is framed from the assumption that if you want to help workers you must want these laws. No, no, no! These laws make labor markets much more rigid.
Remember, whatever requirement you force on employers as a surprise, they will soon take into account when looking to hire their next albatross. There's no free lunch! These benefits can only be transient and favor only people lucky enough to be working at a particular time. As time goes by, you just see more and more roundabout, wasteful ways to get around the restrictions. (Note the analogy to "push the fat guy off the trolley" problems...)
↑ comment by Douglas_Knight · 2010-03-12T01:05:51.835Z · LW(p) · GW(p)
Are you talking about the US? The statistic suggests that you're talking about somewhere specific. I'll assume the US.
You have several claims that are not obviously related. That's not to say that I disagree with any of them, though I probably would disagree with the implicit claims that relate them, if I had to guess what they were. One red flag is the conflation of public and private schools, which have different goals and methods. The 6 year graduation rate is really about public schools, right? But then you invoke selective schools in the last paragraph.
Replies from: knb
↑ comment by knb · 2010-03-12T03:39:35.414Z · LW(p) · GW(p)
The six-year rate is a nationwide average for the United States.
Replies from: Kaj_Sotala, Douglas_Knight, SilasBarta
↑ comment by Kaj_Sotala · 2010-03-12T12:35:43.324Z · LW(p) · GW(p)
Thank you, this was a quite useful link for me. (Finnish colleges currently charge no tuition fees, and some are arguing for their introduction on the basis that this would make people graduate faster; those statistics show that US students don't really graduate any faster than Finnish ones.)
↑ comment by Douglas_Knight · 2010-03-12T05:47:49.048Z · LW(p) · GW(p)
I stand by my statement.
↑ comment by SilasBarta · 2010-03-12T19:00:42.814Z · LW(p) · GW(p)
Well, then I guess I'm triple special for getting a degree straight from high school in 2.5 years. In engineering. [/toots horn]
↑ comment by Strange7 · 2010-03-11T23:16:38.190Z · LW(p) · GW(p)
I certainly agree that CBL is useful, and the system as a whole is riddled with inefficiencies and perverse incentives.
However, I think a lot of the problem there is actually a matter of cultural context. Prior to entering college, those undergrads learned that drinking is something fun grownups are allowed to do, whereas listening to the teacher and doing homework are trials to be either grimly endured, or minimized by good behavior in other areas.
↑ comment by Kevin · 2010-03-12T01:23:29.575Z · LW(p) · GW(p)
College is often a way for 18 year olds to delay social adulthood for 4-6 years. This American Life did a very good episode on the drinking culture at the USA's #1 party school, Penn State, that proves this point beyond a reasonable doubt. Time and time again binge drinking students say that the reason they are doing it and the reason they love Penn State is because this is the only chance in their lives they are going to have to live this lifestyle.
TAL sells the MP3 of the show or it's widely available on torrent sites with a simple Google search.
Replies from: knb
↑ comment by knb · 2010-03-12T03:41:45.337Z · LW(p) · GW(p)
It's very interesting that Penn State was ranked the #1 party school, since it's probably one of America's most respected schools!
Replies from: Kevin, Daniel_Burfoot
↑ comment by Kevin · 2010-03-12T04:23:19.701Z · LW(p) · GW(p)
It's not that meaningful of a ranking; Penn State was anointed the #1 party school by an online poll done by the Princeton Review. It did however prove that out of all of the schools with strong school spirit and insane binge drinking cultures, the students at Penn State are the best at rigging online polls. In other words, Penn State is the #1 party school because the students decided they wanted to be considered the #1 party school.
↑ comment by Daniel_Burfoot · 2010-03-12T04:18:13.244Z · LW(p) · GW(p)
I think you are confusing Penn State with the University of Pennsylvania.
Replies from: knb, Kevin
↑ comment by Kevin · 2010-03-12T04:21:45.380Z · LW(p) · GW(p)
Penn is more respected than Penn State, but Penn State is one of the top public schools in the USA -- #15 on US News's rather controversial list. http://colleges.usnews.rankingsandreviews.com/best-colleges/national-top-public
↑ comment by RobinZ · 2010-03-11T23:40:54.269Z · LW(p) · GW(p)
Do you have a statistic to back up the 6-years figure? The graduation rate appears higher than that to me.
Replies from: knb, Douglas_Knight
↑ comment by knb · 2010-03-12T03:45:35.219Z · LW(p) · GW(p)
This is the figure I was referencing. 53% graduate in 6 years. Charles Murray (of The Bell Curve fame) believes that most people just aren't smart enough for college-level work. Based on my experience, "college-level work" isn't very difficult, so I remain skeptical.
↑ comment by Douglas_Knight · 2010-03-12T01:16:00.110Z · LW(p) · GW(p)
You're from Illinois, right? Its graduation rate of 59% is barely higher than the US average of 56%. UIUC's rate is 80%, ISU 60%, and NEIU 20%. NEIU isn't very big, but there might be lots of similar schools. (ETA: actually NEIU+CSU are already pretty close to canceling out UIUC.)
Replies from: RobinZ
↑ comment by RobinZ · 2010-03-12T01:34:39.337Z · LW(p) · GW(p)
Am I from Illinois? No, actually - Maryland. Checking the data, it seems I'm in a very strange statistical anomaly: 82% in 6 years. At a state university.
No wonder my impressions were skewed.
Replies from: Karl_Smith
↑ comment by Karl_Smith · 2010-03-12T17:04:02.743Z · LW(p) · GW(p)
You are at the state flagship. 82% at College Park is roughly equal to Urbana-Champaign's 80%. The point is that top schools pick students who can get through and/or do a better job of getting students through.
comment by Scott Alexander (Yvain) · 2010-03-11T22:49:31.344Z · LW(p) · GW(p)
Repost from last open thread in the desperate hope that the lack of interest was only due to people not seeing it all the way at the bottom:
I'll be in London on April 4th and very interested in meeting any Less Wrongers who might be in the area that day. If there's a traditional LW London meetup venue, remind me what it is; if not, someone who knows the city suggest one and I'll be there. On an unrelated note, sorry I've been and will continue to be too busy/akratic to do anything more than reply to a couple of my PMs recently.
Replies from: arundelo, Roko
↑ comment by Roko · 2010-03-12T18:52:26.535Z · LW(p) · GW(p)
If I am around I'll come. CipherGoth will probably be interested too.
Replies from: Yvain
↑ comment by Scott Alexander (Yvain) · 2010-03-12T19:47:28.665Z · LW(p) · GW(p)
Why don't you PM me your phone number and/or email address and we can try to arrange something?
Replies from: ciphergoth
↑ comment by Paul Crowley (ciphergoth) · 2010-03-16T23:46:45.415Z · LW(p) · GW(p)
I have email addresses for a few UK people. Mail me - paul at ciphergoth.org - and I'll send an email that copies everyone in.
comment by Nick_Tarleton · 2010-03-17T15:09:34.227Z · LW(p) · GW(p)
Replies from: JoshuaZ, Perplexed, simplicio
↑ comment by JoshuaZ · 2010-09-07T23:01:47.360Z · LW(p) · GW(p)
The results of that study seem to be a bit more complicated, in that they suggest part of the cause of the distinction is a common belief in the population very close to witchcraft (where intending or desiring harm can itself cause harm), so intent matters as much as action and there isn't a clear dividing line between the two.
↑ comment by Perplexed · 2010-09-07T23:00:20.060Z · LW(p) · GW(p)
A rather ironic take on Hauser's Trolley Problem.
↑ comment by simplicio · 2010-09-07T22:44:31.234Z · LW(p) · GW(p)
These "act" trolley problems have the same difficulty as the original.
It's so implausible that the only way to stop a runaway truck/trolley would be to make it run over a person, that one doesn't know if one's intuition is reacting against the sheer implausibility or the moral dimension.
IMO, telling the subject that "pushing the fat man is the only way" is not helpful. We can't imagine ourselves in that epistemic position.
The best "fat man" scenario is the Unwilling Transplant Donor, but sadly it does not have a good omission counterpart.
Replies from: JGWeissman
↑ comment by JGWeissman · 2010-09-07T23:25:06.480Z · LW(p) · GW(p)
The best "fat man" scenario is the Unwilling Transplant Donor, but sadly it does not have a good omission counterpart.
Suppose the potential organ donor is choking to death, and you have the opportunity to perform the Heimlich maneuver and save him.
comment by SilasBarta · 2010-03-11T23:43:42.827Z · LW(p) · GW(p)
Request for help: I can do classroom programming, but not "real-world" programming. If the problem is to, e.g. take in a huge body of text, collect aggregate statistics, and generate new output based on those stats, I can write it. (My background is in C++.)
However, in terms of writing apps with a graphical user interface, take input in real-time, make use of existing code libraries, etc., I'm at a loss. I'd like to know what would be a good introduction to this more practical level.
To better explain where I am, here is what I have tried so far: I've downloaded a lot of simple open source programs that have a lot of source files. But strangely, whenever I compile them myself and get them to run, it just runs on the command screen blindingly fast and then closes, as if I'm missing some important step. (How are you normally expected to compile open-source programs?)
I've also worked with graphics libraries and read a book (IIRC, Zen and the Art of Direct3D Game Programming) and was able to use that for writing algorithms that determine the motion of 3D objects, given particular user inputs, but it was pretty limited in domain.
I've downloaded Visual C# Express, which was actually pretty helpful in terms of showing how you can create GUIs and then jump to the corresponding code that it calls. I wrote simple programs with that and even bought a book on how to use it, but it turned out to require very circuitous routes to do simple things.
Finally, because it's so highly recommended, and I've read Douglas Hofstadter's introduction to it, I thought about programming in Lisp, but the only programming environment for it that I could get to work was the plain old b/w command line, when I figured I'd need more functionality than that, and also the libraries to do more than just computation. (I'm experienced with Mathematica, which seems similar in a lot of ways to Lisp.)
So, any specific suggestions on where I should go from here?
Replies from: cousin_it, CronoDAS, wedrifid, Kevin, mattnewport, djcb, None, CannibalSmith, Morendil, Psy-Kosh, ciphergoth
↑ comment by cousin_it · 2010-03-12T00:17:37.000Z · LW(p) · GW(p)
You want to do user-facing stuff? Then don't bother with desktop programming, write webapps. HTML and JavaScript are much easier than C++. You don't even have to learn the server side at first, a lot of useful stuff can be written as a standalone html file with no server support. For example you could make your own draggable interface to the free map tiles from http://openstreetmap.org - basically it's all just cleverly positioned image elements named like this, within a rectangular element that responds to mouse events. Or, if a little server-side coding doesn't scare you, you could make a realtime chat webpage. Stuff like that.
If you need any help at all, my email is vladimir.slepnev at gmail and I'm often online in gtalk.
Replies from: Morendil, SilasBarta
↑ comment by Morendil · 2010-03-12T01:01:32.224Z · LW(p) · GW(p)
HTML and JavaScript are much easier than C++.
"Easy" is one goal you can have when learning to program. "Soundly written and maintainable" is another. Unfortunately these two goals are sometimes at odds.
Language and platform don't really matter a whole lot, in the grand scheme of things; learning how to write maintainable programs does matter. Having had lots of experience extending or modifying source code written by others, I wish more novice programmers would make that their number one goal.
Replies from: cousin_it
↑ comment by cousin_it · 2010-03-12T01:24:02.017Z · LW(p) · GW(p)
I disagree!
A (real) novice programmer's number one worry should be getting paid. Why should they divert their attention and spend extra effort on writing maintainable code, just so you have an easier time afterward? That's awfully selfish advice.
You might claim writing maintainable code will pay off for them, but to properly evaluate that we need to weigh the marginal utilities. What's better, an extra hour improving the maintainability of your code, or an extra hour spent empathizing with the client? Ummm... And you can't say both of those things are first priority, that's not how it works. I've been coding for money for half of my life so listen to my words, ye lemmings: ship the thing, make the client happy, get paid. That's number one. Maintainability ain't number one, it ain't even in the top ten.
Replies from: Morendil, Morendil, mattnewport, BenAlbahari
↑ comment by Morendil · 2010-03-12T03:56:14.975Z · LW(p) · GW(p)
number one worry should be getting paid
(Edited)
For one thing, that doesn't sound like something that's actionable for Silas in the context of his request for advice, compared to advising him to learn some specific techniques, such as MVC, which make for more maintainable code.
For another, your worry should be "getting paid" after you have reached a reasonable level of proficiency. A medical student's first concern isn't getting paid, it's learning how not to harm patients. Similarly if you're learning programming, as opposed to confident enough of your chops to go on the market, you have a responsibility to learn how not to harm future owners of your code through negligent design practices. That a majority of programmers today fail to fulfill that basic responsibility doesn't absolve you of it.
Replies from: cousin_it
↑ comment by cousin_it · 2010-03-12T12:18:52.762Z · LW(p) · GW(p)
Programming is different from medicine. All the good programmers I know have learned their craft on the job. Silas doesn't have to wait and learn without getting paid, his current skill level is already in demand.
But that's tangential. More importantly, whenever I hear the word "maintainability" I feel like "uh oh, they wanna sell me some doctrinaire bullshit". Maintainability is one of those things everyone has a different idea of. In my opinion you should just try to solve each problem in the most natural manner, and maintainability will happen automatically.
Allow me to illustrate with an example. One of my recent projects was a user interface for IPTV set-top boxes. Lots and lots of stuff like "my account", "channels I've subscribed to", et cetera. Now, the natural way to solve this problem is to have a separate file (a "page") for each screen that the user sees, and ignore small amounts of code duplication between pages. If you get this right, it's pretty much irrelevant how crappily each individual page is coded, because it's only five friggin' kilobytes and a maintenance programmer will easily find and change any functionality they want. On the other hand, if you get this wrong... and it's really fucking distressing how many experienced programmers manage to get this wrong... making a Framework with a big Architecture that separates each page into small reusable chunks, perfectly Factored, with shiny and impeccable code... maintenance tasks become hell. And so it is with other kinds of projects too, in fact with most projects I've faced in my life. Focus on finding the stupid, straightforward, natural solution, and it will be maintainable with no effort.
Replies from: thomblake, wnoise, Morendil, SilasBarta
↑ comment by thomblake · 2010-03-12T19:56:32.588Z · LW(p) · GW(p)
In my opinion you should just try to solve each problem in the most natural manner, and maintainability will happen automatically.
I wasn't with you on the importance of maintainability until you said this. Yes, programming well and naturally is automatically maintainable.
Replies from: cousin_it
↑ comment by cousin_it · 2010-03-12T20:40:49.154Z · LW(p) · GW(p)
Right on. Another way to put it: if you have to spend extra effort on maintainability, you've probably screwed up somewhere.
My name for this kind of behavior is "fetish". For example, some people have a Law of Demeter fetish. Some people have a short function fetish. And so on, all kinds of little cargo cults around.
Allow me to illustrate with another example. One of my recent projects is mostly composed of small functions, but there's this one function that is three screens long. What does it do? It draws a pie chart with legend. The only pie chart in the whole application. There's absolutely no use refactoring it because it's all unique code that doesn't repeat and isn't used anywhere else in the app. Pick the colors, draw the slices, draw the legend, stop. All very clear and straightforward, very easy to read and modify. A fetishist would probably throw a fit and start factoring it into small chunks, giving them descriptive names, maybe making it a "class" with some bullshit "parameters" that actually only ever take one value, etc, etc.
Replies from: Morendil, CronoDAS
↑ comment by Morendil · 2010-03-13T02:40:58.968Z · LW(p) · GW(p)
A fetishist would probably throw a fit
Well, that's merely labeling, not actually advancing an argument. What kind of predictions are we talking about here? Where is our substantial disagreement, if any?
When I talk about maintainability I'm referring to specific sequences of events. In one of the most common negative scenarios, I'm asked to make one change to the functionality of a program, and I find that it requires me to make many coordinated edits in distinct source chunks (files, classes, functions, whatever). This is called "coupling" and is a quantifiable property of a program relative to some functional change specification.
"Maintainable" relative to that change means (among other things) low coupling. You want to change the pie chart to use dotted lines instead of solid inside the pie, and you find that this requires a change in only one code location - that's low coupling.
Now what often happens is that someone needs a program that's able to do both dotted-line pies and solid-line pies. And many times the "most natural" thing (by which I only mean, "what I see many programmers do") is then to copy the pie-chart function, paste it elsewhere with a different name, and change the line style from solid to dotted.
That copy-paste programming "move" has introduced coupling, in the sense that if you want to make a change that affects all pie charts (dotted and solid alike) you'll have to make the corresponding source change twice.
Someone who programs that way is eventually going to drive coupling through the roof (by repeated applications of this maneuver). At this point the program has become so difficult to change that it has to be rewritten from scratch. Plus, high coupling is also correlated with higher incidence of defects.
Now you may call a "fetishist" someone whose coding style discourages copy-paste programming, that doesn't change the fact that it is a style which results in lower overall costs for the same total quantity of "delta functionality" integrated over the life of the program.
My contention is that functions which are three screens long are, other things equal, more likely to result in copy-paste parametrizations than smaller functions. (More generally, code that exhibits a higher degree of composability is less susceptible to design mistakes of this kind, at the cost of being slightly harder to understand for a novice programmer.)
I'd probably look hard at this pie chart thingy and consider chopping it up, if I felt the risk mitigation was worth the effort. Or I might agree with you and decide to leave it alone. I would consider it stupid to have a "corporate policy" or a "project rule" or even a "personal preference" of keeping all functions under a screenful. That wouldn't work, because more forces are in play than just function length.
Rather, I assess all the code I write against the criterion of "a small functional change is going to result in a small code change", and improve the structure as needed. I have a largish bag of tricks for doing that, in several languages and programming paradigms, and I'm always on the lookout for more tricks to pick up.
What, specifically, do you disagree with in the above?
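To make the coupling point concrete, here is a minimal sketch (my own toy illustration, not code from the thread) of the copy-paste "move" versus parameterizing, with a stand-in for the pie-chart drawing:

```python
# Toy stand-in for pie-chart drawing: each slice becomes a (style, value)
# pair instead of actual graphics, so only the structure matters here.

# Style 1: the copy-paste "move" -- duplicated logic, now coupled.
def draw_solid_pie(slices):
    return [("solid", s) for s in slices]

def draw_dotted_pie(slices):  # pasted copy with one word changed
    return [("dotted", s) for s in slices]

# Style 2: parameterize instead -- one place to change all pie charts.
def draw_pie(slices, line_style="solid"):
    return [(line_style, s) for s in slices]
```

A change that must affect every pie chart now touches one function instead of two; that is the "low coupling" being argued for.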
Replies from: cousin_it
↑ comment by cousin_it · 2010-03-13T10:59:10.406Z · LW(p) · GW(p)
I agree with most of your comment, except the idea that you can anticipate in what directions your software is going to grow. That's never actually worked for me. Whenever I tried designing for future requirements instead of current simplicity, clients found a way to throw me a curveball that made me go "oops, this new request screws up my whole design!"
If my program ever needs a second pie chart, it's better to factor the functionality out then instead of now. Less guesswork, plus a three-screen-long function is way easier to factor than a set of small chunks is to refactor.
Replies from: Morendil
↑ comment by Morendil · 2010-03-13T14:19:23.295Z · LW(p) · GW(p)
except the idea that you can anticipate in what directions your software is going to grow
It's ironic that I should be suspected of claiming that. Let me reassure that you on this point, we agree as well. (It's looking more and more as if we have no substantial disagreement.)
My point is that the risk is perhaps lowest if you are going to add the second pie chart, but if someone else is, the three-screens-long function could be riskier than a slightly more factored version. Or not: there is no general rule involving only length.
If you want to make a pastie with that function I could give you an actual opinion. ;)
↑ comment by wnoise · 2010-03-12T15:09:47.941Z · LW(p) · GW(p)
Programming is different from medicine. All the good programmers I know have learned their craft on the job. Silas doesn't have to wait and learn without getting paid, his current skill level is already in demand.
I wouldn't say "on the job", necessarily. But it is only learned by programming, not by thinking about programming, attending lectures on programming, etc. Programming for class assignments can count for this.
Well, there is some benefit to reading good code, but you have to already have a reasonable idea what good code is for that to help.
↑ comment by Morendil · 2010-03-12T13:24:54.441Z · LW(p) · GW(p)
try to solve each problem in the most natural manner
That happens to take a significant amount of skill and learning. Read a site like the Daily WTF and you see what too often comes out of letting untrained, untaught programmers do what they're naturally inclined to do. One could learn a lot about programming simply by thinking about why the examples on that site are bad, and what principles would avoid them.
In practice you're right: people have different ideas of maintainability. That is precisely the problem.
Replies from: cousin_it
↑ comment by cousin_it · 2010-03-12T14:24:51.503Z · LW(p) · GW(p)
try to solve each problem in the most natural manner
That happens to take a significant amount of skill and learning.
But I don't know of any way to acquire this "programming common sense" except on the job. Do you?
One could learn a lot about programming simply by thinking about why the examples on that site are bad, and what principles would avoid them.
Oh, no. What a terrible idea. If you do this without actually pushing through real-world projects of your own, you'll come up with a lot of bullshit "principles" that will take forever to dislodge. In general, the ratio of actual work to abstract thinking about "principles" should be quite high.
Replies from: Vladimir_Nesov
↑ comment by Vladimir_Nesov · 2010-03-12T18:51:25.624Z · LW(p) · GW(p)
But I don't know of any way to acquire this "programming common sense" except on the job. Do you?
Open source.
Replies from: SilasBarta
↑ comment by SilasBarta · 2010-03-17T16:54:16.447Z · LW(p) · GW(p)
Another case of "let them eat cake". The very gap in my understanding is the jump between writing input once/output once algorithms, to multi-resource complex-UI programs, when existing open source applications have source files that don't make sense to me and no one on the project finds it worth their time to bring me up to speed.
Replies from: wnoise
↑ comment by wnoise · 2010-03-17T17:19:31.135Z · LW(p) · GW(p)
Between one-input, one-output programs and complex UIs are simple UIs, such as a program that loops reading input and writing output, maintaining state while doing so.
The complex UIs are mostly a matter of wrapping this sort of "event loop" around a given framework or UI library. Some frameworks instead have their own event loop that does this, and instead you write callbacks and other code that the event loop calls at the appropriate times.
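A minimal sketch of that middle ground -- a hand-rolled event loop that keeps state between inputs. This is a toy example of mine (Python for brevity; the commands are made up), not any particular framework:

```python
def run_event_loop(lines):
    """Dispatch each input line to a handler, keeping state across events."""
    total = 0
    outputs = []
    for line in lines:                        # the "event loop"
        cmd, _, arg = line.strip().partition(" ")
        if cmd == "add":                      # event: update state
            total += int(arg)
        elif cmd == "total":                  # event: report state
            outputs.append(total)
        elif cmd == "quit":                   # event: leave the loop
            break
    return outputs

# e.g. run_event_loop(["add 2", "add 3", "total", "quit"]) -> [5]
```

GUI frameworks invert this: the framework owns the loop and calls your handlers (callbacks) when events arrive, but the state-across-events structure is the same.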
Replies from: SilasBarta
↑ comment by SilasBarta · 2010-03-19T14:54:28.480Z · LW(p) · GW(p)
Thanks, that helps. Now I just need to learn the nuts-and-bolts of particular libraries.
↑ comment by SilasBarta · 2010-03-17T16:49:58.280Z · LW(p) · GW(p)
Sorry for not reading the follow-up discussion earlier.
Silas doesn't have to wait and learn without getting paid, his current skill level is already in demand.
What do you mean by this? How can I be hired for programming based on just what I have now? Who hires people at my level, and how would they know whether I'm lying about my abilities? (Yes, I know, interviews, but they have to thin the field first.) Is there some major job-finding trick I'm missing?
My degree isn't in comp sci (it's in mech. engineering and I work in structural), and my education in C++ is just high school AP courses and occasional times when I need automation.
Also, I've looked at the requests on e.g. rent-a-coder and they're universally things I can't get to a working .exe (though of course could write the underlying algorithms for).
Replies from: thomblake, Morendil
↑ comment by thomblake · 2010-03-17T17:46:48.188Z · LW(p) · GW(p)
Is there some major job finding trick I'm missing?
The best 'trick' for job-finding is to get one from someone you know. I'm not sure what you can do with that.
Generally speaking, there are a lot of people who aren't good at thinking but have training in programming, and comparatively not a lot of people who are good at thinking but not good at programming, and the latter are more valuable than the former. If I were looking for someone entry-level for webdev (and I'm not), I'd be likely to hire you over a random person with a master's degree in computer science and some experience with webdev.
Replies from: SilasBarta
↑ comment by SilasBarta · 2010-03-17T19:27:51.350Z · LW(p) · GW(p)
The best 'trick' for job-finding is to get one from someone you know. I'm not sure what you can do with that.
Heh, that's what I figured, and that's my weak point. (At least you didn't say, "Pff, just find one on the internet!" as some have been known to do.)
If I were looking for someone entry-level for webdev (and I'm not), I'd be likely to hire you over a random person with a master's degree in computer science and some experience with webdev.
Thanks. I don't doubt people would hire me if they knew me, but there is a barrier to overcome.
↑ comment by Morendil · 2010-03-17T17:55:26.906Z · LW(p) · GW(p)
I'm sorry to be the one to break the news to you, but the IT industry has appallingly low standards for hiring.
For instance, you may be able to get a programming job without at any point being asked to produce a code portfolio or to program in front of an interviewer.
I'd still be keen, by the way, to help you through a specific example that's giving you trouble compiling. I believe that when smart people get confused by things which their designers ought to have made simple, it's an opportunity to learn about improving similar designs.
Replies from: bogus, JGWeissman, wnoise, wedrifid, ata, RobinZ↑ comment by bogus · 2010-03-18T17:51:26.605Z · LW(p) · GW(p)
A quick solution to the FizzBuzz quiz:
HAI
CAN HAS STDIO?
I HAS A VAR
IM IN YR LOOP
UP VAR!!1
IZ VAR LEFTOVER 15 LIEK 0?
YARLY VISIBLE "FizzBuzz"
NOWAI IZ VAR LEFTOVER 3 LIEK 0?
YARLY VISIBLE "Fizz"
NOWAI IZ VAR LEFTOVER 5 LIEK 0?
YARLY VISIBLE "Buzz"
NOWAI VISIBLE VAR
KTHX
IZ VAR NOT SMALR THAN 100? KTHXBYE
IM OUTTA YR LOOP
KTHXBYE
Replies from: AdeleneDawner, Morendil↑ comment by AdeleneDawner · 2010-03-18T17:57:49.431Z · LW(p) · GW(p)
*in ur LessWrong, upvotin' ur memez*
↑ comment by Morendil · 2010-03-18T17:56:44.359Z · LW(p) · GW(p)
For the first time here I'm having a Buridan moment - I don't know whether to upvote or downvote the above.
Replies from: AdeleneDawner↑ comment by AdeleneDawner · 2010-03-18T18:33:21.474Z · LW(p) · GW(p)
It might help to note that dialects - and I don't see any reason not to consider both the various kinds of 'netspeak and the various programming languages as such, in most cases of human-to-human interaction - are almost exclusively used as methods of signaling cultural affiliation. In this case, I parsed Bogus' use of 'netspeak as primarily an avoidance of affiliation with formal programming culture (which tends to linger even when programs are set out in standard English, in my experience), and secondarily a way of bringing in the emotional affect of the highly-social 'netspeak culture.
It is 'mammal stuff', but it seems to be appropriate in this instance, to me.
Replies from: Morendil↑ comment by Morendil · 2010-03-18T18:35:54.803Z · LW(p) · GW(p)
Thanks. I was mostly kidding, but I appreciate the extra perspective.
(Signalling my own affiliation as a true geek, I actually attempted to download a LOLCODE interpreter and run it on the above, but the ones I could get my hands on seem to be broken. I would upvote it if I could run it, and it gave the right answer.)
Replies from: AdeleneDawner↑ comment by AdeleneDawner · 2010-03-18T18:49:26.389Z · LW(p) · GW(p)
integer var
while(1)
{
++var
if (var % 15 == 0)
output "FizzBuz"
else if (var % 3 == 0)
output "Fizz"
else if (var % 5 ==0)
output "Buzz"
else
output var
if !(var<100)
return
}
Looks right to me, though I wound up reformatting the loop a little. That's most likely a result of me being in the habit of using for loops for everything, and forgetting the proper formatting for other kinds, rather than being an actual flaw in the code - I'm willing to give bogus the benefit of the doubt about it, in any case.
Replies from: gregconen↑ comment by gregconen · 2010-03-18T19:14:20.729Z · LW(p) · GW(p)
Pretty much. Both you and bogus apparently forget to put an initial value into var (unless your language of choice automatically initializes them as 0).
Using while(1) with a conditional return is a little bizarre, when you can just go while(var<100).
Of course, my own draft used if(var % 3 == 0 && var % 5 == 0) instead of the more reasonable x%15.
Replies from: AdeleneDawner↑ comment by AdeleneDawner · 2010-03-18T19:25:56.281Z · LW(p) · GW(p)
Pretty much. Both you and bogus apparently forget to put an initial value into var (unless your language of choice automatically initializes them as 0).
Mine does, but I'm aware that it's good coding practice to specify anyway. I was maintaining his choice.
Using while(1) with a conditional return is a little bizarre, when you can just go while(var<100).
Yep, but I don't remember how else to signify an intrinsically infinite loop, and bogus' code seems to use an explicit return (which I wanted to keep for accuracy's sake) rather than checking the variable as part of the loop.
My method of choice would be for(var=0; var<100; ++var){} (using LSL format), which skips both explicitly returning and explicitly incrementing the variable.
↑ comment by JGWeissman · 2010-03-19T03:48:57.428Z · LW(p) · GW(p)
Jeff Atwood also makes this meta point about blogging about fizzbuzz:
Evidently writing about the FizzBuzz problem on a programming blog results in a nigh-irresistible urge to code up a solution. The comments here, on Digg, and on Reddit-- nearly a thousand in total-- are filled with hastily coded solutions to FizzBuzz. Developers are nothing if not compulsive problem solvers.
It certainly wasn't my intention, but a large portion of the audience interpreted FizzBuzz as a challenge. I suppose it's like walking into Guitar Center and yelling 'most guitarists can't play Stairway to Heaven!' You might be shooting for a rational discussion of Stairway to Heaven as a way to measure minimum levels of guitar competence.
But what you'll get, instead, is a blazing guitarpocalypse.
Somehow, the other responses to this comment reminded me of that.
Replies from: RobinZ↑ comment by wedrifid · 2010-03-19T03:29:55.739Z · LW(p) · GW(p)
If all you have is regex s/.+/nail/
until(m/j{100}/){s/(j*)$/\1\n\1j/};
s/^(j{15})*$/fizzbuzz/gm;
s/^(j{3})*$/fizz/gm;
s/^(j{5})*$/buzz/gm;
s/^(j+)$/length($1)/gme;
print;
Warning: Do not try this (or any other perl coding) at home!
↑ comment by ata · 2010-03-19T03:42:40.727Z · LW(p) · GW(p)
I think anyone who applies to a programming job and can't write this (in whatever language) deserves something worse than being politely turned down.
for i in range(1, 101):
if i % 15 == 0: print 'fizzbuzz'
elif i % 3 == 0: print 'fizz'
elif i % 5 == 0: print 'buzz'
else: print i
↑ comment by RobinZ · 2010-03-18T14:30:14.615Z · LW(p) · GW(p)
I tested myself with MATLAB (which makes it quite easy) out of some unnecessary curiosity - it took me about seven minutes, a fair part of which was debugging.
I feel rather ashamed of that, actually.
Replies from: RobinZ, Morendil↑ comment by RobinZ · 2010-03-18T23:53:36.581Z · LW(p) · GW(p)
As everyone else seems to be posting their code:
% FizzBuzz - print all numbers from 1 to 100, replacing multiples of 3 with
% "fizz", multiples of 5 with "buzz", and multiples of 3 and 5 with
% "fizzbuzz".
clear
clc
for i = 1:100
fb = '';
if length(find(factor(i)==3)) > 0
fb = [fb 'fizz'];
end
if length(find(factor(i)==5)) > 0
fb = [fb 'buzz'];
end
if length(fb) > 0
fprintf([fb '\n'])
else
fprintf('%5.0f\n', i)
end
end
A better program (by which I mean "faster", not "clearer" or "easier to modify" or "easier to maintain") would replace the tests with something less intensive - for example, incrementing two counters (one for 3 and one for 5) and zeroing them when they hit their respective desired factors.
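As a sketch of that counter-based variant (in Python rather than MATLAB, and my own illustration rather than anyone's actual code from this thread): keep two small counters, increment them each step, and reset each one when it reaches its target factor, so no division or factoring is ever done.

```python
def fizzbuzz_counters(n=100):
    """Counter-based FizzBuzz: no modulo, no factoring."""
    lines = []
    c3 = c5 = 0
    for i in range(1, n + 1):
        c3 += 1
        c5 += 1
        word = ''
        if c3 == 3:       # i is a multiple of 3
            word += 'fizz'
            c3 = 0
        if c5 == 5:       # i is a multiple of 5
            word += 'buzz'
            c5 = 0
        lines.append(word or str(i))
    return lines

if __name__ == '__main__':
    print('\n'.join(fizzbuzz_counters()))
```

Whether this is actually faster than `mod` in any given language is an empirical question, but it is the shape of the optimization described above.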
↑ comment by Morendil · 2010-03-18T14:56:00.535Z · LW(p) · GW(p)
I feel rather ashamed of that, actually.
I wouldn't be; I'd take it as (anecdotal) evidence that the craft of programming is systematically undertaught. By which I mean, the tiny, nano-level rules of how best to interact with this strange medium that is code.
(Recently added to my growing backlog of possibly-top-level-post-worthy topics is "how and why programming may be a useful skill for rationalists to pick up"...)
Replies from: RobinZ↑ comment by RobinZ · 2010-03-18T15:24:58.152Z · LW(p) · GW(p)
I have to admit, I was looking up functions in the docs, too - I would have been a bit faster working in pseudocode on paper.
Edit: Also, my training is in engineering, not comp. sci. - the programming curriculum at my school consists of one MATLAB course.
(Recently added to my growing backlog of possibly-top-level-post-worthy topics is "how and why programming may be a useful skill for rationalists to pick up"...)
Querying my brain for cached thoughts:
Programming encourages clear thinking - like evolution, it is immune to rationalization.
Thinking in terms of algorithms rather than problem-answer pairs - the former generalize.
↑ comment by mattnewport · 2010-03-12T01:36:44.165Z · LW(p) · GW(p)
That depends on your incentive structure. You may well be right if you work as a contract programmer. If you work as a salaried employee in a large company the calculation could look different.
Replies from: cousin_it↑ comment by cousin_it · 2010-03-12T12:42:48.848Z · LW(p) · GW(p)
Yes, absolutely. The former path (working or contracting for many small companies) is the one I'd heartily recommend to novices. The latter path... scares me.
Replies from: murat↑ comment by murat · 2010-03-12T13:51:12.695Z · LW(p) · GW(p)
Maybe you are scared because you are aware that writing maintainable code is harder than writing code without that constraint?
Replies from: cousin_it↑ comment by cousin_it · 2010-03-12T14:10:02.888Z · LW(p) · GW(p)
I write maintainable code anyway, and I'm friends with several people who maintain my past code and don't seem to complain. No, working at BigCo scares me because it tends to be a very one-sided activity. Employees at small companies and contractors face much more variety in what they have to do every day.
↑ comment by BenAlbahari · 2010-03-12T06:23:12.698Z · LW(p) · GW(p)
Bad design choices are much more expensive to fix down the road than when they were created. You seem to be saying that any time spent addressing this issue is worthless in comparison to spending more time empathizing with the customer.
↑ comment by SilasBarta · 2010-03-12T19:21:45.750Z · LW(p) · GW(p)
Thanks for the advice and generous offer of help!
↑ comment by CronoDAS · 2010-03-13T01:41:10.934Z · LW(p) · GW(p)
But strangely, whenever I compile them myself and get them to run, it just runs on the command screen blindingly fast and then closes, as if I'm missing some important step.
If you're doing them in Windows, open the command prompt using "cmd" and run them from the command line. They'll run in the CMD window, which will stay open after the program finishes doing whatever it does, leaving the output visible.
↑ comment by wedrifid · 2010-03-12T00:23:49.714Z · LW(p) · GW(p)
So, any specific suggestions on where I should go from here?
Find a specific programming problem you need (want) to solve. That, for me at least, makes the task of learning almost automatic.
I (also) recommend Ruby+Rails for practical purposes. If you want to learn how to program, for example, 3D games then I have no particular recommendations. I only got as far as 2D bitbliting on that path! ;)
↑ comment by Kevin · 2010-03-12T00:14:15.365Z · LW(p) · GW(p)
Try Python+Django, Ruby+Rails, or PHP+CakePHP depending on your preference, but the pragmatic difference is much smaller than language zealots pretend. If you plan on making something with millions of users, PHP is faster than Python or Ruby.
Graphics programming is harder than using generated HTML for your GUI, and there seem to be a lot more real world applications with a web GUI than anything that uses local OS graphics.
↑ comment by mattnewport · 2010-03-12T00:13:04.825Z · LW(p) · GW(p)
but it turned out to require very circuitous routes to do simple things.
Unfortunately this about sums up the current state of 'real world' programming.
It is helpful to have a concrete goal to work towards rather than merely coding for the sake of learning. Learning 'on the job' is helpful in this regard as there is usually a somewhat defined set of requirements and there is added motivation and supervision that comes with being paid to write code.
If you are trying to learn on your own I'd suggest trying to set yourself the task of writing a simple program to do something fairly clearly defined and then work towards that. Simply reading through open source code (or any third party code) is not something I've found terribly helpful as a learning exercise. More useful is to set yourself the task of fixing a specific bug or adding a specific feature as this will help direct your investigation.
Learning how to use the debugging tools available to you is also important. Understanding how software is put together can be greatly aided by stepping through code in a good debugger.
C# is pretty good for 'real world'/GUI development. Personally I think it is the best option overall at the moment for that kind of programming but you will find language choice is a bit of a religious war issue.
Replies from: wedrifid, SilasBarta↑ comment by wedrifid · 2010-03-12T00:52:50.613Z · LW(p) · GW(p)
C# is pretty good for 'real world'/GUI development. Personally I think it is the best option overall at the moment for that kind of programming but you will find language choice is a bit of a religious war issue.
I second that recommendation for (non-web) GUI development. Even as someone who had never programmed in C# I found learning the language the simplest option when I needed to create a visual desktop application. (Of course, given that I knew both Java and C++ it wasn't exactly a steep learning curve.)
Replies from: Furcas↑ comment by SilasBarta · 2010-03-12T19:10:19.210Z · LW(p) · GW(p)
Unfortunately this about sums up the current state of 'real world' programming.
Well, I don't think I described it correctly. "Circuitous", I can actually handle -- I thrive on it, in fact. But e.g. setting text in a box to bold, when the package is designed to make that easy, following the book's exact instructions, and getting plain text ... that part bothers me, especially when it's followed up with all the alternate methods that don't work, etc. But it was a long time ago so I don't remember all the details.
If you are trying to learn on your own I'd suggest trying to set yourself the task of writing a simple program to do something fairly clearly defined and then work towards that. Simply reading through open source code (or any third party code) is not something I've found terribly helpful as a learning exercise. More useful is to set yourself the task of fixing a specific bug or adding a specific feature as this will help direct your investigation.
The task I was working on was to build a WYSIWYG HTML editor that allowed redefinition and addition of tags, and added features HTML can't currently do. (Examples: 1. A tag that adds a specified superscript to the tagged text. 2. A tag that generates an arrow that points to some other text.)
I eventually hired someone to write it, but still couldn't understand from it how the code works, and the Visual C# book only touched on the outlines of this, and I ran into the problems I listed earlier.
I also tried to work through some of their existing program examples, like the blackjack one, but I don't remember where that went.
↑ comment by djcb · 2010-03-12T13:35:44.978Z · LW(p) · GW(p)
If you want to write UIs, Lisp and friends would probably not be the first choice, but since you mentioned it...
For Lisp, you can of course install Emacs, which (apart from being an editor) is a pretty convenient way to play around with Lisp. Emacs-Lisp may not be a state-of-the-art Lisp implementation, but it is certainly good enough to get started. And because of the full integration with the editor, there is instant gratification when you can use some Lisp to glue existing things together into something useful. Emacs is available for just about any self-respecting computer system.
You can also try Scheme (a Lisp dialect); there is the excellent freely available Structure and Interpretation of Computer Programs, which uses Scheme as the vehicle to explain many programming concepts. Guile is a nice, free-software implementation.
When you're really into a more mathematical approach, Haskell is pretty nice. For UI stuff, I find it rather painful though (same is true for Lisp and to some extent, Scheme).
↑ comment by [deleted] · 2010-03-12T09:41:06.575Z · LW(p) · GW(p)
To better explain where I am, here is what I have tried so far: I've downloaded a lot of simple open source programs that have a lot of source files. But strangely, whenever I compile them myself and get them to run, it just runs on the command screen blindingly fast and then closes, as if I'm missing some important step. (How are you normally expected to compile open-source programs?)
Most open-source programs are made to be easy to compile on Unix platforms. If you're using OS X or Linux, great; if you're on Windows, download Cygwin and you'll have a Unix environment. Given all that, read the INSTALL file; it should give you step-by-step instructions for compiling and installing. Most commonly, you run ./configure, then make, then (as root) make install.
That said, platforms with package managers are really nice because you can download, build, and install many programs in a single step; Debian has APT, OS X has MacPorts and Fink, and Haskell (a programming language, not an operating system) has the Cabal.
In general, if running something causes a terminal to open and immediately close, try running it on a command line instead of double-clicking it. For Windows, open Command Prompt, drag the executable onto the terminal window, and hit enter.
Replies from: arundelo↑ comment by CannibalSmith · 2010-03-12T06:13:51.510Z · LW(p) · GW(p)
Got Skype, microphone, etc?
Replies from: SilasBarta↑ comment by SilasBarta · 2010-03-12T19:02:19.695Z · LW(p) · GW(p)
Yes.
Replies from: CannibalSmith↑ comment by CannibalSmith · 2010-03-18T14:36:52.166Z · LW(p) · GW(p)
ಠ_ಠ ....ashdkfrflguhhhhhhhhh
Debug output: when I first saw your request, I was in a very, what's the word, eager(?) mood and started writing, then realized it would be very long, then I wanted to chat and brag about coding skills, then later my mood was lower than average, and you said "yes", and I was like, groan, and... aaanyway, my Skype is cannibalsmith. If you catch me, I'll probably be delighted to talk about programming. Yeah, so... uh...
Replies from: SilasBarta↑ comment by SilasBarta · 2010-03-18T14:48:57.939Z · LW(p) · GW(p)
:-) Thanks!
↑ comment by Morendil · 2010-03-12T00:44:04.511Z · LW(p) · GW(p)
What category of app are you looking to write, narrowing down the class "app with a GUI" a little?
How are you normally expected to compile open-source programs?
Can you name a specific example of one you've tried to compile and run, and you've been confused at the result?
One general hint is that a good way to learn how to code up significant programs from scratch is to, first, get a significant program that works and modify or extend it in some way.
Also, be aware that there are several competing design philosophies when it comes to writing GUI programs, with very different outcomes in terms of maintainability and adherence to sound design principles. The "Visual" approach exemplified by the Microsoft line of tools leaves much to be desired in my experience, leading to spaghetti code too easily.
I prefer approaches in which graphical components are created programmatically, and where design principles such as MVC then serve to further structure the resulting code and drive the design toward high levels of abstraction. The various Smalltalk environments are a good illustration of that philosophy.
Replies from: BenAlbahari, SilasBarta↑ comment by BenAlbahari · 2010-03-12T04:37:09.936Z · LW(p) · GW(p)
The "Visual" approach exemplified by the Microsoft line of tools leaves much to be desired in my experience, leading to spaghetti code too easily.
Spaghetti code is primarily a function of the programmer, not the tools. This isn't to say the tools don't matter; they do; but the various competing tools each have their pros and cons, and it's a bit glib to suggest the Microsoft stack is obviously behind here. ASP.NET MVC, which you can use for web development in C#, is quite orthogonality-friendly.
↑ comment by SilasBarta · 2010-03-12T19:14:12.900Z · LW(p) · GW(p)
What category of app are you looking to write, narrowing down the class "app with a GUI" a little?
I don't think this should matter for your answer, since it's just a barrier toward a broad class of programming I'm trying to overcome.
Can you name a specific example of one you've tried to compile and run, and you've been confused at the result?
All of them ;-) but I'll give you a specific example when I get back to my home computer.
One general hint is that a good way to learn how to code up significant programs from scratch is to, first, get a significant program that works and modify or extend it in some way.
Well, that's kind of hard when they don't run even when you compile them. But on top of that, I haven't found any multi-source-file program in which it's easy to jump to just the part of the code that implements a particular feature, usually because of poor documentation.
↑ comment by Psy-Kosh · 2010-03-17T00:17:27.974Z · LW(p) · GW(p)
Well, if you want to play with a Lisp, maybe consider PLT Scheme? That one has a really nice environment, etc... (it's a Scheme rather than a Common Lisp, though.)
↑ comment by Paul Crowley (ciphergoth) · 2010-03-16T23:50:26.836Z · LW(p) · GW(p)
I've found wxPython a relatively pleasant way to write GUI programs.
comment by JamesAndrix · 2010-03-13T09:14:45.048Z · LW(p) · GW(p)
How should rationalists do therapy?
As a community, we should have resources to help people who might otherwise be helped by clerics, quacks, or psychics. We should certainly cover things like minor depression and grief at the death of a loved one.
Should we just look at what therapies have the best outcome for various situations and recommend those?
Should we use what we know about cognition to suggest new therapies? Should we make a "Grief Sequence"?
Replies from: Rain, Morendil, zero_call, Thomas↑ comment by Rain · 2010-03-14T17:54:52.882Z · LW(p) · GW(p)
When I expressed problems that I have with my life, I found that this community is not very well versed in the emotional aspects of the situation. At least, that is how I felt (heh) when they swarmed and attacked in an effort to other-optimize. I'm sure they wanted to help, but it was a very direct, blunt experience, with little regard for the difficulties inherent in the situation, or the knowledge I already possessed.
"Get therapy" is a solution, but one that I've known about for a very long time. Alicorn's post on problems vs. tasks comes to mind. It felt almost tautological: "You're depressed? You should take an action which cures depression." At least, until it ended with me telling people to please stop, and getting called sad, pitiful, and a jerk.
Replies from: Morendil↑ comment by Morendil · 2010-03-14T18:03:43.226Z · LW(p) · GW(p)
I'm sure they wanted to help, but it was a very direct, blunt experience
The key observation is that, as far as I can tell, you never actually asked for help.
I call the behaviour you're commenting on "inflicting help". This is a very, very common mistake that even very smart people make. One of the basic tools in a good consultant's toolkit is to be able to recognize actual requests for help, and fulfill those strictly within the bounds of what has been requested.
The good news is, this is a community of people who want to be skilled at updating on the evidence. Hopefully this negative result will be counted as evidence and people here will, in future, tend to refrain from inflicting help.
Replies from: Rain↑ comment by Rain · 2010-03-14T18:30:56.627Z · LW(p) · GW(p)
My favorite part was how the person who most directly insulted me was voted up (2 rating as of now), whereas my requests to stop were voted down (-1 and 0 now, both were -2). It was very strong fuel for my martyrdom complex. I actually laughed aloud.
Replies from: FAWS↑ comment by FAWS · 2010-03-14T18:59:30.260Z · LW(p) · GW(p)
I started writing a devils advocate sort of reply after reading the first link, but for the life of me I can't think of any good reason to vote "No, thank you. I'd rather suffer where I am." down in context. If I was voting based on the current score (I try my best not to do that) I'd vote it back up to 0.
↑ comment by Morendil · 2010-03-14T18:22:13.129Z · LW(p) · GW(p)
To take a stab at what I know of that topic:
- offer help, but don't inflict help that isn't requested
- verify that the helpee is "serious" about using your help: help can't be for free
- an intervention is also a test of a hypothesis: update on the results
- as a corollary, effective help requires forming a theory or model of the situation
- the best way to get entangled with the situation is to listen to the "helpee"
- listening requires an open mind (i.e. often changing your mind)
- the helpee's situation is a system, with many entangled components, which can include other people
- your help and intentions in helping can become part of that system, for good or ill
- your help, intentions, approach and results should always be a legitimate topic of discussion with the helpee
- you should always be clear about why you're helping
- because of that, it's often a good idea to have someone in turn helping you help others
That's from my general approach to consulting, i.e. helping people, or more precisely "influencing people at their request". It's not specific to grief or depression counseling, and thus should perhaps be taken with a grain of salt.
comment by Alex Flint (alexflint) · 2010-03-12T09:04:11.361Z · LW(p) · GW(p)
If I have no memory of some period in my past, then should I be pleased to discover that I was happy during that period? Or is it that past experiences are valuable only through the pleasure their memories give us in the present?
Replies from: kpreid, grouchymusicologist, BenPS, timtyler, RobinZ, Nick_Tarleton, scav, Kevin↑ comment by grouchymusicologist · 2010-03-12T16:53:00.831Z · LW(p) · GW(p)
If you are a utilitarian, I think you should be pleased.
Imagine you happened to find out that a person on the other side of the world, whose life has never and will never affect yours in any way, is happy right now. You'd be pleased about that, right? Now imagine you knew instead that that person was happy last week. Since this affects you not at all, there's no real difference between these: you're just pleased about the fact of someone's happiness at some point in time.
If you buy my argument up to this point, then you may as well be pleased if that mystery person from the past was actually your own past self. And that's not even to mention Kevin's argument which does take into account the ways in which your past self influences your future self.
↑ comment by BenPS · 2010-03-13T02:40:38.723Z · LW(p) · GW(p)
Here is one possible reason for being pleased to discover that one was unhappy in the past:
Times of apparent unhappiness can lead to great personal growth. For instance, the hardest, most stressful time of my life was studying for my physics honors exams. However, now that the exams are over, I am glad to have both the knowledge I gained in studying, and the self knowledge that I am capable of pushing myself as hard as I did. (Would skills learned during the missing time be retained? Even if they weren't, the latter reason above would still apply).
It would be devastating to lose the memory of any part of one's life, but I think there would be some satisfaction in learning that one had spent the missing time doing something difficult but worthwhile, even if one was not happy during that time.
↑ comment by timtyler · 2010-03-12T09:53:36.842Z · LW(p) · GW(p)
It sounds as though you now have some information about those past events. Hopefully, it is a sign that your goals were being met during that period. Also, if you managed to learn that, maybe you will also learn something more useful about the period. So: I would say it is normally a good sign.
↑ comment by RobinZ · 2010-03-12T12:33:08.087Z · LW(p) · GW(p)
I vote "pleased", for the rather weak reason that this makes my preferences time-symmetric*.
* Edit: This is poorly-worded - what I was referring to was time shift symmetry.
Replies from: grouchymusicologist↑ comment by grouchymusicologist · 2010-03-12T16:58:42.839Z · LW(p) · GW(p)
But nothing else about the universe is time-symmetric, manifestly including our own revealed preferences -- I would rather be happy in the future but not in the past than be happy in the past but not in the future, if you gave me the choice right now. So this is the only argument I can think of to vote "not pleased" (of course, not displeased either) about one's past, but unremembered, happiness.
(I actually do vote "pleased," though, for the reason I argued here.)
Replies from: RobinZ↑ comment by RobinZ · 2010-03-12T18:49:21.349Z · LW(p) · GW(p)
I'm not sure that I'd prefer unrecalled happiness in the past to in the future, but I was thinking of (and should have named) time-shift symmetry, which the fundamental laws of physics are.
I actually agree with your argument for voting "pleased", though, so we might be simply in agreement.
Replies from: grouchymusicologist↑ comment by grouchymusicologist · 2010-03-12T19:21:15.128Z · LW(p) · GW(p)
I was thinking of (and should have named) time-shift symmetry
Well then, I'm sure that addresses my objection. But a couple of minutes' googling isn't giving me a good sense of what time-shift symmetry is -- and my physics background is lousy. Could you give me a quick definition?
Replies from: RobinZ↑ comment by RobinZ · 2010-03-12T20:20:59.950Z · LW(p) · GW(p)
The laws of physics are invariant in time.
Edit: Clarification - if you write the laws of physics, nowhere do you invoke the absolute time; only changes in time. The outcome of any experiment cannot change just because the time coordinate changes; it can only change because other parameters in the situation change.
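One compact way to state this (my gloss, not part of the original comment): time-shift symmetry means that if a trajectory solves the equations of motion, so does the same trajectory shifted in time, and by Noether's theorem that symmetry is exactly what underwrites energy conservation.

```latex
% Time-translation (time-shift) symmetry: the laws contain no explicit t.
% For a Lagrangian system this reads
\[
  L = L(q, \dot q), \qquad \frac{\partial L}{\partial t} = 0,
\]
% so if q(t) solves the Euler--Lagrange equations, so does the shifted
% trajectory q(t + \tau) for any constant \tau. By Noether's theorem, this
% symmetry yields a conserved energy:
\[
  E = \dot q \,\frac{\partial L}{\partial \dot q} - L, \qquad \frac{dE}{dt} = 0.
\]
```

(This also connects to the remark below that a time-varying physical constant would be as surprising as non-conservation of energy.)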
Replies from: grouchymusicologist, Jordan, SilasBarta↑ comment by grouchymusicologist · 2010-03-12T21:26:31.219Z · LW(p) · GW(p)
Thanks for that.
↑ comment by Jordan · 2010-03-12T21:34:28.435Z · LW(p) · GW(p)
I remember hearing that there have been some hints that physical constants have changed over time. If they have then the laws of physics wouldn't be time invariant.
Anyone else recall anything along those lines? Wikipedia isn't terribly helpful.
Replies from: RobinZ↑ comment by RobinZ · 2010-03-12T21:46:27.860Z · LW(p) · GW(p)
I have not heard of any such theory becoming a credible candidate for acceptance, although I see no logical contradiction in such - my impression is that discovering a time-varying term would be as surprising as discovering energy is not conserved. For fairly fundamental reasons, actually.
Replies from: wnoise↑ comment by wnoise · 2010-03-12T23:14:25.623Z · LW(p) · GW(p)
Note that in GR defining energy consistently is tough. Doing it so it is globally conserved is even harder. We only really have local conservation, and the changing background of GR in cosmology is in some sense effectively the same thing as changing physical law.
↑ comment by SilasBarta · 2010-03-12T20:34:03.091Z · LW(p) · GW(p)
Yes, they tend to be invariant in factors that don't exist ;-P
↑ comment by Nick_Tarleton · 2010-03-13T03:17:17.650Z · LW(p) · GW(p)
Or is it that past experiences are valuable only through the pleasure their memories give us in the present?
This seems very unlikely. If the experience of remembering pleasurable events is valuable in itself, why can't other experiences be valuable in themselves?
comment by JustinShovelain · 2010-03-14T08:23:08.088Z · LW(p) · GW(p)
Poll: Do you have older siblings, or are you an only child?
Replies from: JustinShovelain, JustinShovelain, JustinShovelain, steven0461, JustinShovelain↑ comment by JustinShovelain · 2010-03-14T08:24:25.521Z · LW(p) · GW(p)
Vote this up if you are the oldest child with siblings.
↑ comment by JustinShovelain · 2010-03-14T08:23:32.478Z · LW(p) · GW(p)
Vote this up if you have older siblings.
↑ comment by JustinShovelain · 2010-03-14T08:23:46.614Z · LW(p) · GW(p)
Vote this up if you are an only child.
↑ comment by steven0461 · 2010-03-31T23:47:22.034Z · LW(p) · GW(p)
I'm pretty sure that in the general population, there are at least as many people with older siblings as there are people with only younger siblings. But in this poll, it's 6 vs 19. That looks like a humongous effect (which we also found in SIAI-associated people, and which this poll was intended to further check). I could see some sort of self-selection bias and the like, and supposedly oldest children have slightly higher IQs on average, but on the whole I'm stumped for an explanation. Anyone?
ETA: Here's a claim that "it is consistently found that being first-born is particularly favourable to high levels of scientific creativity". See also this.
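How humongous is 6 vs 19? A quick sketch under a simplifying 50/50 null (the true population split, if anything, favors "has older siblings", which would make the result more surprising, not less):

```python
from math import comb

n, k = 25, 6  # 25 respondents: 6 with older siblings, 19 oldest children

# Two-sided exact binomial test against a 50/50 null:
# one tail is P(X <= 6) for X ~ Binomial(25, 0.5); double it by symmetry.
p_tail = sum(comb(n, i) for i in range(k + 1)) / 2 ** n
p_two_sided = 2 * p_tail

print(round(p_two_sided, 4))  # 0.0146
```

So under that null the split is quite unlikely to be chance, though self-selection could still account for it.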
↑ comment by JustinShovelain · 2010-03-14T08:24:46.590Z · LW(p) · GW(p)
Vote this down for karma balance.
comment by Wei Dai (Wei_Dai) · 2010-03-13T19:36:28.689Z · LW(p) · GW(p)
This is a draft of a post I'm planning to send to my everything-list, partly to invite them to join Less Wrong. I'd appreciate comments and feedback on it.
Recently I heard the news that Max Tegmark has joined the Advisory Board of SIAI (The Singularity Institute for Artificial Intelligence, see http://www.singinst.org/blog/2010/03/03/mit-professor-and-cosmologist-max-tegmark-joins-siai-advisory-board/). This news was surprising to me, but in retrospect perhaps shouldn't have been. Out of the three authors of papers I cited in the original everything-list charter/invitation, the other two had already effectively declared themselves to be Singularitarians (see http://en.wikipedia.org/wiki/Singularitarianism): Nick Bostrom has been on SIAI's Advisory Board for a while, and Juergen Schmidhuber spoke at the Singularity Summit 2009. I was also recently invited to visit SIAI for a decision theory mini-workshop, where I found the ultimate ensemble idea to be very well-received. It turns out that many SIAI people have been following the everything-list for years.
There seems to be a very strong correlation between interest in the kind of ideas we discuss here and interest in the technological singularity. (I myself have been interested in the Singularity since before starting this mailing list.) So the main point of this post is to let the list members who are not already familiar with the Singularity know that there is another set of ideas out there that they are likely to find fascinating.
Another reason for this post is to let you know that I've been spending most of my online discussion time at Less Wrong (http://lesswrong.com/lw/1/about_less_wrong/, "a community blog devoted to refining the art of human rationality" which is sponsored by the Future of Humanity Institute, founded by Nick Bostrom, and effectively "owned" by Eliezer Yudkowsky, founder of SIAI). There I wrote a sequence of posts summarizing my current thoughts about decision theory, interpretations of probability, anthropic reasoning, and the ultimate ensemble theory.
- http://lesswrong.com/lw/15m/towards_a_new_decision_theory/
- http://lesswrong.com/lw/175/torture_vs_dust_vs_the_presumptuous_philosopher/
- http://lesswrong.com/lw/182/the_absentminded_driver/
- http://lesswrong.com/lw/1a5/scott_aaronson_on_born_probabilities/
- http://lesswrong.com/lw/1b8/anticipation_vs_faith_at_what_cost_rationality/
- http://lesswrong.com/lw/1cd/why_the_beliefsvalues_dichotomy/
- http://lesswrong.com/lw/1fu/why_and_why_not_bayesian_updating/
- http://lesswrong.com/lw/1hg/the_moral_status_of_independent_identical_copies/
- http://lesswrong.com/lw/1iy/what_are_probabilities_anyway/
I initially wanted to reach a different audience with these ideas, but found that the Less Wrong format has several advantages: both posts and comments can be voted upon, the site's members uphold fairly strict standards of clarity and logic, and the threaded presentation of comments makes discussions much easier to follow. So I plan to continue to spend most of my time there, and invite other everything-list members to join me. But please note that the site has a different set of customs and emphases in topics. New members are also expected to have a good grasp of the current state of the art in human rationality in general (Bayesianism, heuristics and biases, Aumann agreement, etc., see http://wiki.lesswrong.com/wiki/Sequences) before posting, and especially before getting into disagreements and arguments with others.
Replies from: Document, RobinZ, Eliezer_Yudkowsky↑ comment by Document · 2010-03-14T00:19:51.430Z · LW(p) · GW(p)
I'm still with Jack that pointing new readers to the entirety of the sequences is non-optimal. I'm waiting for the day when we can at least say "Start here (link) and keep clicking Next, and skim as much as you like", but you probably don't want to wait that long to send the post, so I don't know.
↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-03-14T01:49:45.916Z · LW(p) · GW(p)
Looks fine to me.
comment by Risto_Saarelma · 2010-03-27T17:19:40.098Z · LW(p) · GW(p)
Frank Lantz: The Truth in Game Design
Players keep complaining about the random number generators being "unfair" in games that involve randomness, so game developers have started tweaking the generators to behave according to the gambler's fallacy: results that are adverse to the player now increase the chance of beneficial future results. Lantz notes that making game systems conform to common fallacies might not be such a good idea, since games could also be great teaching devices for how all sorts of complex systems really work. Of course the reasoning is a bit different when your bottom line depends on players not canceling their subscriptions when they think they are being shafted by unfair game code.
Messing with the random number generator feels unpleasant: it both makes the game less real and makes it better able to keep the player in a dull trance without unexpected novelty. On the other hand, you could say that platform games where the main character can move back and forth in mid-air and do a double jump off thin air teach players a wrong model of physics, yet these features seem to generally make platform games more fun. So why would one be bad and the other not? One difference is that the physically improbable jumping capabilities give the players more options, while the gambler's-fallacy RNG just affects events independent of player actions. Another idea is that probability feels like a more fundamental aspect of reality than the particular laws of physics.
This other article called Truth in Game Design by Scott Brodie linked from the comments of the first one is also interesting.
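To make the tweak concrete, here is a minimal sketch of a streak-breaking RNG of the sort Lantz describes; the class name and the specific numbers are hypothetical, not taken from any actual game:

```python
import random

class PityRNG:
    """Streak-breaking RNG: each miss raises the next success chance,
    matching the gambler's-fallacy expectations players bring to games."""

    def __init__(self, base=0.2, step=0.1, seed=None):
        self.base = base      # the advertised success probability
        self.step = step      # bonus added per consecutive miss
        self.bonus = 0.0      # accumulated "pity" from recent misses
        self.rng = random.Random(seed)

    def roll(self):
        if self.rng.random() < self.base + self.bonus:
            self.bonus = 0.0          # success: reset the accumulated pity
            return True
        self.bonus += self.step       # miss: make the next roll easier
        return False
```

With base = 0.2 and step = 0.1, the nominal 20% event can never miss more than 8 times in a row, since the accumulated bonus pushes the effective probability to 1.0 on the ninth roll; a fair RNG has no such cap, which is exactly the long dry streak players perceive as unfair.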
comment by Morendil · 2010-03-18T18:26:11.738Z · LW(p) · GW(p)
This Nature article ("Quantum ground state and single-phonon control of a mechanical resonator") is making headlines in various media, and seems to be about large-scale quantum superposition, but it's always hard to tell what's getting lost in translation when you're not an expert. I'd prefer to put my trust in people here who think they're qualified to comment. Anyone?
Replies from: Psy-Kosh↑ comment by Psy-Kosh · 2010-03-19T01:59:35.216Z · LW(p) · GW(p)
I was about to post this here. If they actually verified that the resonator was in superposition, if they actually got interference effects out of it, well, that's it then, collapse isn't just dead, it's, well... I need a word for "dead" that's more emphatic than "dead".
It's... ahem... collapsed. :P
(at least such is my thought.)
Replies from: prase
comment by ata · 2010-03-14T23:54:30.453Z · LW(p) · GW(p)
Hearing that Max Tegmark joined SIAI's board reminded me of a top-level post I was thinking of doing. In it, I would present what I think is a very strong but heretofore underemphasized argument for the Mathematical Universe/Level IV Multiverse hypothesis — specifically, an argument for why it is actually a satisfactory answer to the ultimate question of why anything bothers to exist at all — particularly targeted at people who aren't familiar with it or are skeptical of it (I was in the latter category when I first learned of it, remained there for a couple years, forgot about it, and then unexpectedly convinced myself of it). However: Are there enough people here in either of those categories that this would make for worthwhile discussion, and would that be considered sufficiently on-topic?
Replies from: Vladimir_Nesov↑ comment by Vladimir_Nesov · 2010-03-15T00:20:55.216Z · LW(p) · GW(p)
I think it's an aesthetically appealing way of looking at what's going on, but that it doesn't help with understanding what's going on (or what to do with it) in any way.
Replies from: ata↑ comment by ata · 2010-03-15T02:18:37.587Z · LW(p) · GW(p)
If you're referring to the fact that it doesn't give us any useful information about the contents or laws of this universe, then I agree completely. (If I write this post, then I do intend to acknowledge that, and to discourage calling it a "theory of everything" for that reason.)
Shall I take this as a "no" vote for the "would that be considered sufficiently on-topic?" question?
In its defense in that respect, it could be taken as a discussion about the outer limits of what anybody/anything anywhere can understand, and aside from that, it raises some interesting questions about anthropic reasoning.
comment by [deleted] · 2010-03-12T17:55:52.860Z · LW(p) · GW(p)
Is anyone familiar with a possible evolutionary explanation of the placebo effect? It seems strange to me that the body would have a limit to the degree it heals itself, and that this limit gets bypassed by the belief that one is receiving treatment.
The only explanation I could string together is that the body limits how much it heals itself because it's conserving energy/resources/whatever it might need for other things (periods of scarcity, danger, etc.). Receiving medicine sends the signal that the person is being taken care of and thus at a much lower risk of needing to use its 'reserves', so the body goes ahead and diverts them to repairing whatever is wrong with it.
However, this would suggest that a self-administered placebo would be ineffective, whereas treatment but no medicine by a doctor/caregiver would be effective. As far as I know, this isn't how the placebo effect works, but I'm not exactly up to date on the subject.
Has anyone seen a better explanation?
Replies from: wedrifid, Strange7, NancyLebovitz↑ comment by wedrifid · 2010-03-13T06:57:02.170Z · LW(p) · GW(p)
Has anyone seen a better explanation?
Yes, that the original papers advocating the placebo effect were misleading in their reports and the popularisations thereof grossly exaggerated.
Placebos can be shown to reliably have an effect on:
- Experience of symptoms.
- Even more so on reports of symptoms (that is, the presence of an expectant experimenter messes with people's heads big time.)
- Psychological state.
- Things that are significantly influenced by psychological state. The main two actual physical conditions that I can recall being genuinely altered by placebo (as opposed to being perceived to be altered) are ulcers and the herpes virus (cold sores). Basically, two conditions that you more or less get from being stressed.
(I am not criticising the use of placebo controls here. But I am asserting that the primary benefit from such controls is in 'balancing out' other biases rather than any direct effect of placebos on healing.)
Replies from: Kevin↑ comment by Strange7 · 2010-03-12T21:07:32.955Z · LW(p) · GW(p)
A self-administered placebo might still be effective for evolutionary reasons. It would signal that a reduced activity level is related to tending your injuries, rather than, say, waiting in ambush or 'freezing' to avoid notice by motion-sensitive predators, so it's safe to divert resources toward repair or antibody production at the expense of sensory and muscular readiness.
Same reason people have a hard time getting to sleep in unfamiliar circumstances, but focusing on a token reminder of home dispels the feeling.
↑ comment by NancyLebovitz · 2010-03-12T19:27:38.110Z · LW(p) · GW(p)
People are very much affected by what they imagine is going on. For the unbendable arm you don't tell people to extend their arm efficiently, you have them imagine the arm extending out to infinity, or imagine the arm as a firehose.
I'm not sure why any of this works-- it may have something to do with activating one's own mirror neurons, but I do think the placebo effect should be viewed as a special case rather than a thing in itself.
comment by Richard_Kennaway · 2010-03-21T20:53:45.368Z · LW(p) · GW(p)
This will be preaching to the converted here, but worthy of note: "Odds Are, It's Wrong".
It's about the continued use of frequentist significance tests.
ETA: I've found the web site flaky today. Here's Google's cached copy.
Replies from: Richard_Kennaway, Jack↑ comment by Richard_Kennaway · 2010-03-23T08:47:33.610Z · LW(p) · GW(p)
More -- much more! -- discussion of the article by statisticians here.
comment by mattnewport · 2010-03-17T22:14:36.463Z · LW(p) · GW(p)
I haven't seen this posted yet and it seems it might be of interest, from a link on Hacker News:
Science fails to face the shortcomings of statistics
For better or for worse, science has long been married to mathematics. Generally it has been for the better. Especially since the days of Galileo and Newton, math has nurtured science. Rigorous mathematical methods have secured science’s fidelity to fact and conferred a timeless reliability to its findings.
During the past century, though, a mutant form of math has deflected science’s heart from the modes of calculation that had long served so faithfully. Science was seduced by statistics, the math rooted in the same principles that guarantee profits for Las Vegas casinos. Supposedly, the proper use of statistics makes relying on scientific results a safe bet. But in practice, widespread misuse of statistical methods makes science more like a crapshoot.
Follow the link for the full article, there's even mention of Bayes' Theorem.
comment by ata · 2010-03-15T06:35:36.116Z · LW(p) · GW(p)
A survey on cryonics laws:
1. Should it become legal for a person with a degenerative disease (Alzheimer's, etc.) to choose to be cryonically preserved before physiological death, so as to preserve the brain's information before it deteriorates further? Should a patient's family be able to make such a choice for them, if their mind has already degenerated enough that they are incapable of making such a decision, or if they are in a coma or some other unconscious or uncommunicative state?
2. Should it become legal for a person to choose to be cryonically preserved before physiological death regardless of medical circumstances?
3. Should hospitals be required to cryonically preserve unidentified dead bodies, assuming cryonics is still possible given whatever condition the patient's body is in? Should the default be neuropreservation or whole-body suspension?
4. Should your country's national health care system (if it has one; if not, imagine it does, and that its existence is not up for debate) cover cryonics for anyone who wants it? Should it be opt-in or opt-out (or not optional)?
5. Should laws against mishandling human remains be more severe in the case of cryonics patients?
6. Should murder/homicide/manslaughter laws result in more severe punishment if the victim cannot be cryonically preserved (whether because the body was not found for too long, they were shot repeatedly in the head, they were drowned or burned or buried in a ditch, etc.)? Assume the victim would have been preserved otherwise.
7. How would greater legal recognition of cryonics interact with the death penalty? For example, if you are for the death penalty: what should happen if a death-row inmate is signed up for cryonics (or living in a country with a national health system that covers it, per #4)? If you are against it, but living in a country that has it, could you support any cryonics-based compromise (e.g. replacing execution with cryonic suspension until, hypothetically, our understanding of psychology has advanced enough that it is possible to rehabilitate even the most evil of criminals)?
Finally, a question about social and medical attitudes rather than laws: When cryonics is widely known and relatively socially acceptable, and the evidence for its possibility is well-accepted in the mainstream (or when people have already started being revived), should opting out of it be viewed as comparable or equivalent to being suicidal?
↑ comment by Morendil · 2010-03-15T10:42:50.369Z · LW(p) · GW(p)
Yes to 1, 2, 3, 5, 6. Undecided on 4.
I've been wondering about 7 for some time now. I'm against the death penalty, but given that some countries have it, it seems so obvious that people who are now being executed should be preserved instead. The probability of a wrongful conviction being non-trivial, $30K seems like a paltry sum to invest in the possibility, however slight, of later reviving someone who was wrongfully executed. I have looked at the figures for the cost to society of the legal process leading to execution, and it is shockingly high. People on death row should at least have the option, given how much is otherwise spent on them.
↑ comment by ata · 2010-03-15T06:37:00.761Z · LW(p) · GW(p)
My answers:
1. Yes and yes. I know what this is like because my grandmother spent the last years of her life with Alzheimer's, in a nursing home. When she finally died, my mom didn't cry; she explained to me that she had already done her mourning years ago. It made sense, insofar as it can ever make sense to "get over" the annihilation of a loved one: my grandmother, the person, had already effectively died long before her body did. None of us knew about cryonics at the time, and we likely wouldn't have done it even if we had known about it, but I know that people are in this situation all the time, and as awareness and acceptance of cryonics grows, people should definitely have this option.
2. I'm inclined to say that it should be discouraged but legal as an individual choice. A person could already achieve a similar (though riskier) effect by calling an ambulance, making sure their bracelet and necklace are prominently visible, and killing themselves in a relatively non-destructive way.
3. Yes. I don't know much about the pros and cons of neuro/whole-body other than the cost, but I think I'd go with the latter, to err on the side of caution.
4. Yes. I'd say it should be opt-out, or opt-in if that is absolutely necessary for getting the law passed.
5. Yes. The laws should treat it like an instance of killing someone in a coma, which are presumably the same as the laws for killing someone in general. Of course it should vary depending on whether it is accidental, negligent, or intentional.
6. I'm not quite sure. I'd think that if you cause someone's bodily death, and they are able to be preserved perfectly, then it should be treated as a non-fatal assault or accident or whatever, but something doesn't seem right about that. I think, rather than having the laws against causing someone's bodily-but-not-information-theoretic death less severe, I'd prefer to have laws against causing someone's information-theoretic death more severe.
7. I'd prefer to abolish the death penalty altogether. If the compromise I gave as an example were politically feasible, I would support it, but I doubt it would garner much more support than abolishing the death penalty; it seems like too many people in the US view the criminal justice system as a tool of punishment/revenge rather than of rehabilitation.
When it seems like a relatively mainstream thing to do (not necessarily common, but common enough that your friends don't think you're crazy for opting in to it) — when society has outgrown its rationalizations of death and its resistance to immortality (religious objections; the idea that there is some spiritual essence that will be destroyed; the luddite/"science has gone too far!" response; the idea that having people die against their will is a morally permissible means of avoiding overpopulation; et cetera) — then we can start questioning the mental health of people who still object to it.
comment by JamesAndrix · 2010-03-12T06:53:43.315Z · LW(p) · GW(p)
Mentally Subtracting Positive Events Improves People’s Affective States, Contrary to Their Affective Forecasts
comment by DanielVarga · 2010-03-24T06:01:38.550Z · LW(p) · GW(p)
Fermi's Lack-of-a-Paradox:
Replies from: Jack
comment by gwern · 2010-03-21T22:18:01.712Z · LW(p) · GW(p)
"Even if the event’s nearly $200,000 worth of tickets sell out, less than $8,000 from the sales will go to the cause."
"No hard and fast guidelines exist on how much money raised in a benefit should go for expenses, and it is not unusual for galas to raise little money or even lose it."
"Overhead at Carnegie accounts for about one-third of the expenses. The hall costs $13,785 to rent. Then there is $6,315 for ushers; $2,300 for security; and $42,535 for stagehand labor, long recognized as a major cost of doing business at the institution. Other expenses include $70 for a press representative; $100 to establish a discount ticket system; $210 to place inserts in programs; and $750 for box office operations. Some costs are estimates, the presenters emphasized, and could be less."
comment by byrnema · 2010-03-17T01:22:01.368Z · LW(p) · GW(p)
This afternoon I identified a way in that I strongly need to be more rational, and I wondered if there has been anything written about it on Less Wrong.
A few hours ago, I was picking up my two children from their school. They're at a very young age so my heuristic is: near a parking lot, hang on to them.
While we were exiting the school building, another small child ran from his mother and slipped through the door between me and my youngest child. I feebly tried to grab the boy's shirt but he tugged away and then I just watched as he ran into the parking lot. I was in the middle of a decision algorithm to chase after him when he finally settled in a safe spot beside his family's car.
After about a full minute of playing the moment over and over in my head, I felt deeply disturbed by the fact that I hadn't instinctively grabbed the boy to effectively catch him and then hadn't run after him in time to save him if there had been a car. I was fully culpable: the only reason the door was open was because I was holding the door open for my kids, I knew he was running into a parking lot, and I was standing between him and his mother. But I just didn't think fast enough. My heuristic was 'hang on to my kids', which I did.
This seems to have been a matter of not computing fast enough. How could I have thought faster, in a way that would have resulted in a useful action? There have been several times in the past year where I just want to kick myself for not doing the right thing at the right time. Is this a form of akrasia?
Replies from: Morendil, RobinZ, NancyLebovitz↑ comment by Morendil · 2010-03-17T10:06:38.715Z · LW(p) · GW(p)
If it had been me in that situation, I might have reacted pretty much as you did, because I have a heuristic to leave other people's kids alone when the parents are around. Nothing riles me quite like seeing someone else interact with my child in a bossy way, and I have noticed that others often react the same.
Near a school I would expect adults (including in cars) to be more on the lookout for kids running around and so my awareness of danger would be lowered relative to my awareness of etiquette and the rule to look after my own kids.
There have been several times in the past year where I just want to kick myself for not doing the right thing at the right time. Is this a form of akrasia?
No, the term akrasia should be reserved for when you have already computed what you want to do, and fail to carry through with the want.
What you describe seems more like a matter of doing the best with limited computing resources. Making what in retrospect appears to be the wrong decision should, if it has not had dire consequences, be good news: you get to adjust the internal "weights" you assign to the relevant rules, and so prepare yourself for right decisions in future.
Don't beat yourself up for not "thinking faster", simply reflect on your repertoire of relevant actions in similar contexts, perhaps try to expand it. For instance you may want to practice with shouting "stop" so that it works. ;)
↑ comment by RobinZ · 2010-03-17T02:17:05.167Z · LW(p) · GW(p)
It appears to me that you simply ran into a situation for which you were not prepared. If there are general rules you can implement that will work, that is good, but the only cure I can think of is anticipating and considering in advance many possible scenarios.
↑ comment by NancyLebovitz · 2010-03-17T14:00:21.601Z · LW(p) · GW(p)
Let Every Breath, Systema, and Rmax International are related systems based on the idea of learning to maintain mental focus under stress.
I haven't worked with them myself, but the approach seems safe and plausible, and probably at least worth investigating.
comment by Kevin · 2010-03-14T08:39:07.011Z · LW(p) · GW(p)
Buying someone on the internet a pizza seems to be a cheap and easy way of buying a lot of fuzzies. Behold, Mr. wongwong, the most generous man in the world.
http://www.reddit.com/r/reddit.com/comments/bd3fb/dear_reddit_can_you_order_me_a_pizza/
comment by RobinZ · 2010-03-17T17:20:11.257Z · LW(p) · GW(p)
Poll: when making a new substantive top-level post, what kinds of summary are acceptable?
This is a checkbox poll, and therefore votes for multiple options may be entered - for each option, a separate karma balance will be offered. In the event that some important option is immediately noticed to be missing, another poster may offer an option-karma balance pair without destroying the poll.
Replies from: RobinZ, RobinZ, RobinZ, RobinZ, Jack↑ comment by Jack · 2010-03-17T17:27:55.076Z · LW(p) · GW(p)
If I think style choices are best left up to the poster I should vote for all the options?
Replies from: RobinZ↑ comment by RobinZ · 2010-03-17T17:32:25.175Z · LW(p) · GW(p)
Yes.
Edit: Elaborating in light of the downvote - if there exists some large fraction which think it isn't important enough to imply a policy on, then that will be reflected by the percentage differences between the options growing small.
comment by Paul Crowley (ciphergoth) · 2010-03-16T23:43:55.758Z · LW(p) · GW(p)
QALYs and how they are arrived at. "Quality Adjusted Life Years" are the measure used by UK drug approval bodies in deciding which treatments to approve. They aim to spend no more than £30,000 per QALY.
comment by Singularity7337 · 2010-03-12T22:07:06.547Z · LW(p) · GW(p)
Has anybody else wished that the value of the symbol pi were doubled? Everything becomes far more intuitive this way; it might even improve the uptake of trigonometry in school. This ranks up there with the decision to declare the electron's charge negative rather than positive.
Replies from: RobinZ, rortian, zero_call, Thomas, simplicio, Sniffnoy, wnoise↑ comment by RobinZ · 2010-03-12T22:12:41.023Z · LW(p) · GW(p)
I read an argument to that effect on the Internet, but I don't have any strong feelings - maybe if I were writing a philosophical conlang I would make the change, but not normally. You may as well argue for base four arithmetic.
Replies from: Jack↑ comment by Jack · 2010-03-12T22:24:56.632Z · LW(p) · GW(p)
You may as well argue for base four arithmetic.
Huh. Would that actually be easier? I always figured ten fingers...
Replies from: JGWeissman, RobinZ, Singularity7337↑ comment by JGWeissman · 2010-03-12T22:46:58.680Z · LW(p) · GW(p)
I always figured ten fingers...
I figure each finger can be up or down, 2 states, so binary. And then base 16 is just assigning symbols to sequences of 4 binary digits, a good, manageable, compression for speaking and writing.
(When I say I could count something on one hand, it means there are up to 31 of them.)
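JGWeissman's five-fingers-as-five-bits scheme is easy to write down (a toy illustration; the function name is mine):

```python
def fingers(n):
    """Encode 0..31 as one hand of binary: thumb is the least significant
    bit, with 1 meaning raised and 0 meaning lowered."""
    assert 0 <= n < 32, "one hand only holds five bits"
    return [(n >> i) & 1 for i in range(5)]

print(fingers(31))  # [1, 1, 1, 1, 1] -- all five fingers up
print(fingers(6))   # [0, 1, 1, 0, 0]
```

So "up to 31 of them" is just the largest five-bit number, 2^5 - 1.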
↑ comment by RobinZ · 2010-03-12T22:43:18.759Z · LW(p) · GW(p)
- Fewer symbols to memorize.
- Smaller multiplication table to memorize.
- Direct compatibility with binary computers.
The cost in number length is not large - 3*10^8 is roughly 1*4^14 - and the cost in factorization likewise - divisibility by 2, 3, and 5 remain simple, only 11 becomes difficult.
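The digit-length comparison checks out numerically (a quick sketch; the helper function is mine):

```python
def to_base(n, b):
    """Return the digits of n in base b, most significant first."""
    digits = []
    while n:
        n, r = divmod(n, b)
        digits.append(r)
    return digits[::-1] or [0]

print(len(to_base(3 * 10 ** 8, 4)))  # 15 base-4 digits: 3e8 is just over 4^14
print(len(str(3 * 10 ** 8)))         # 9 decimal digits, for comparison
```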
If you want to argue from number of fingers, though, six beats ten. ;)
Replies from: Alicorn↑ comment by Alicorn · 2010-03-12T22:48:00.679Z · LW(p) · GW(p)
If you want to argue from number of fingers, though, six beats ten. ;)
I could see eight, but why six?
Replies from: RobinZ, blogospheroid↑ comment by RobinZ · 2010-03-12T23:22:32.811Z · LW(p) · GW(p)
Six works because you don't need a figure for the base. Thus, zero to five fingers on one hand, then drop all five and raise one on the other to make six. (Plus, you get easy divisibility by seven, which beats easy divisibility by eleven.)
Edit: Binary, the logical extension of the above principle, has the problem that the ring finger and pinky have a mechanical connection, besides the obvious 132 (decimal) issue. ;)
I don't see how eight comes in, though.
Replies from: Alicorn↑ comment by blogospheroid · 2010-03-13T04:32:31.713Z · LW(p) · GW(p)
There are websites dedicated to making Base 12 the standard. Same principle as making Base 6.
Simplest explanation - it's possible to divvy 12 up into more whole fractions than 10.
↑ comment by Singularity7337 · 2010-03-12T22:37:18.335Z · LW(p) · GW(p)
I don't see myself with ten fingers as a posthuman anyway.
↑ comment by rortian · 2010-03-13T15:27:59.503Z · LW(p) · GW(p)
e^(pi*i) = -1
Anything else: lame.
Replies from: Singularity7337, wnoise↑ comment by Singularity7337 · 2010-03-13T22:54:21.664Z · LW(p) · GW(p)
Uh, how is e^(pi*i) = 1 lame?
Replies from: dclayh↑ comment by dclayh · 2010-03-13T23:18:07.906Z · LW(p) · GW(p)
Maybe because e^0 = 1?
Replies from: simplicio↑ comment by simplicio · 2010-03-13T23:36:53.822Z · LW(p) · GW(p)
Well, making pi = 2pi would just mean the complex exponential function would repeat itself every pi radians instead of every 2pi radians. e^0 would still equal 1 in either case. Note that with the current definition, e^(j*n*2pi) = 1 for any integer n.
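The periodicity claim is easy to verify numerically (a small sketch using Python's cmath):

```python
import cmath

# exp(j*theta) is periodic with period 2*pi: e^(j*n*2pi) = 1 for integer n.
for n in range(4):
    z = cmath.exp(1j * n * 2 * cmath.pi)
    assert abs(z - 1) < 1e-9

# Halfway through the period sits Euler's identity, e^(j*pi) = -1:
assert abs(cmath.exp(1j * cmath.pi) - (-1)) < 1e-9
```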
↑ comment by zero_call · 2010-03-13T04:06:11.691Z · LW(p) · GW(p)
No. This is nothing like the metric vs. English units debate. (If you want to talk about changing conventions, you should put your weight behind that instead, as it's a much more serious issue.) Pi is already well defined anyway: it's defined by its historical meaning in terms of the diameter, for which the factor of 2 does not appear.
Replies from: Sniffnoy↑ comment by Sniffnoy · 2010-03-13T04:39:24.793Z · LW(p) · GW(p)
Pi is well-defined, yes, and that's not going to change. But some notation is better than others. It would be better notation if we had a symbol that meant 2pi, and not necessarily any symbol that meant pi, because the number 2pi is just usually more relevant. There's all sorts of notation we have that is perfectly well-defined, purely mathematical, not dependent on any system of units, but is not optimal for making things intuitive and easy to read, write and generally process. The gamma function is another good example.
I really fail to see why metric vs. English units is much more serious; neither metric nor English units are particularly suggestive of anything these days. Neither is more natural. The quantities being measured with them aren't going to be nice clean numbers like pi/2, they're going to be messy no matter what system of units you measure them with.
Replies from: Singularity7337↑ comment by Singularity7337 · 2010-03-15T02:59:43.407Z · LW(p) · GW(p)
What about the gamma function is bad? Is it the offset relation to the factorial?
Replies from: Sniffnoy↑ comment by Sniffnoy · 2010-03-15T05:45:37.885Z · LW(p) · GW(p)
Yeah. It's artificially introduced (why the s-1 power?) and is basically just confusing. Gamma function isn't really something I've had reason to use myself, so I'm just going on the fact that I've heard lots of people complain about this and never anyone defending it, to conclude that it really is as dumb as it looks.
Replies from: Douglas_Knight↑ comment by Douglas_Knight · 2010-03-24T03:33:39.237Z · LW(p) · GW(p)
The t^(s-1) in the gamma function should be thought of as the product of t^s dt/t. This is a standard part of the Mellin transform. The dt/t is invariant under multiplication, which is a sensible thing to ask for since the domain of integration (0,infinity) is preserved by scaling, but not by the translations that preserve dt.
In other words, dt/t = d(log t) and it's telling you to change variables: the gamma function is the Laplace (or Fourier) transform of exp(-exp(u)).
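Spelling out the change of variables (just the standard computation the comment describes, nothing more):

```latex
\Gamma(s) = \int_0^\infty t^{s-1} e^{-t}\,dt
          = \int_0^\infty t^{s}\, e^{-t}\,\frac{dt}{t}
% substitute t = e^u, so dt/t = du and (0,\infty) maps to (-\infty,\infty):
          = \int_{-\infty}^{\infty} e^{su} \exp\!\left(-e^{u}\right) du
```

The last integral is exactly the (two-sided) Laplace transform of exp(-exp(u)), which is why the "ugly" s-1 exponent is really the natural t^s paired with the scale-invariant measure dt/t.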
↑ comment by simplicio · 2010-03-13T15:52:53.720Z · LW(p) · GW(p)
One can dream. :) Pi relates to diameter; it'd be much nicer if it related to radius directly instead.
Personally, I want to replace the kg in the mks system with a new symbol and name: I want to go back to calling it the "grave" (as it was called at one time in France), having the symbol capital gamma. Then we wouldn't have the annoying fact of a prefixed unit as a basic unit of the system.
Replies from: RobinZ↑ comment by RobinZ · 2010-03-13T19:01:24.285Z · LW(p) · GW(p)
Embarrassingly, my first reaction was to think, "how about cgs units? Those don't use kilograms!"
Replies from: simplicio↑ comment by simplicio · 2010-03-13T19:27:38.387Z · LW(p) · GW(p)
Hehehe. Cgs units... it really amuses me that it seems to be astronomers who like them best.
Of course, if we were really uber-cool, we'd use natural units, but somehow I can't see Kirstie Alley going on TV talking about how she lost 460 million Planck-masses on Jenny.
↑ comment by wnoise · 2010-03-12T23:07:16.022Z · LW(p) · GW(p)
Meh. 2 Pi shows up a lot, but so does Pi, and so does Pi/2. I think I'd rather cut it in half, actually, as fractions are more painful than integer multiples.
Replies from: Sniffnoy, sketerpot↑ comment by Sniffnoy · 2010-03-13T00:01:13.063Z · LW(p) · GW(p)
Think about the context here, though. Having a symbol for 2pi would be much more convenient because it would make things consistent. 2pi is the number that you typically cut into fractions. Let's say we define, say, rho to mean 2pi. Then we have rho, rho/2, rho/3, rho/4... whereas with pi, we have 2pi, 2pi/2, 2pi/3, 2pi/4... the problem is those even numbers. Writing 2pi/4 looks ugly, you want to simplify, but writing pi/2 means that you no longer see the number "4" there, which is what's important, that it's a quarter of 2pi. You see the "2" on the bottom so you think it's half of 2pi. It's a mistake everyone makes every now and then - seeing pi/n and thinking it's 2pi/n. If we just had a symbol for 2pi, this wouldn't occur. Other mistakes would, sure, but as commonly as this one does?
If we were to define, say, xi=pi/2, then 4xi, 2xi, 4xi/3, xi, 4xi/5... well, that's just awful.
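A quick numerical sketch of the point, using Python's `math.tau` (available in Python 3.6+, defined as 2*pi) to play the role of the hypothetical rho:

```python
import math

# math.tau is 2*pi, playing the role of the hypothetical "rho".
assert math.isclose(math.tau, 2 * math.pi)

# A quarter of a full turn written with tau keeps the "4" visible:
quarter = math.tau / 4
# Written with pi it becomes pi/2, and the telltale "4" disappears:
assert math.isclose(quarter, math.pi / 2)

# Fractions of a full turn are uniformly tau/n, with no stray factors of 2:
turn_fractions = [math.tau / n for n in (1, 2, 3, 4, 5)]
```

The list comprehension mirrors the "rho, rho/2, rho/3, ..." sequence above: every fraction of a circle carries its denominator openly.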
Replies from: zero_call↑ comment by zero_call · 2010-03-14T00:27:18.400Z · LW(p) · GW(p)
It's a mistake everyone makes every now and then - seeing pi/n and thinking it's 2pi/n.
What? Like who, 6th graders?
Replies from: LucasSloan, Sniffnoy↑ comment by LucasSloan · 2010-03-14T01:42:27.483Z · LW(p) · GW(p)
I find that unfair. I have made the mistake Sniffnoy describes many times, all of them after I was in 6th grade.
Replies from: wedrifid↑ comment by Sniffnoy · 2010-03-14T03:02:44.501Z · LW(p) · GW(p)
No, like anyone who isn't watching out for traps caused by bad notation. It's much easier to copy down numbers than it is to alter them appropriately. If you see "e^(pi i/3)", what stands out is the 3 in the denominator. Except oops, pi actually only means half a circle, so this is a sixth root of unity, not a third one. Part of why I like to just write zeta_n instead of e^(2pi i/n). Sure, this can be avoided with a bit of thought, but thought shouldn't be required here; notation that forces you to think about something so trivial is not good notation.
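The trap is easy to demonstrate numerically (a small illustrative sketch):

```python
import cmath
import math

# e^(pi*i/3): the "3" in the denominator suggests a third root of unity...
z = cmath.exp(1j * math.pi / 3)
# ...but cubing it gives -1, not 1, because pi is only half a turn:
assert cmath.isclose(z ** 3, -1)
assert cmath.isclose(z ** 6, 1)

# Writing the exponent over a full turn (2*pi, i.e. math.tau) makes the
# denominator mean what it looks like: zeta_3 really is a cube root of 1.
zeta3 = cmath.exp(1j * math.tau / 3)
assert cmath.isclose(zeta3 ** 3, 1)
```

With the full-turn convention, the n in the exponent's denominator always tells you which root of unity you have.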
Replies from: wnoisecomment by Rain · 2010-03-12T03:30:36.495Z · LW(p) · GW(p)
An amusing view of charity and utility, as told by Monty Python: Merchant Banker. I was trying to remember what thought experiment it reminded me of, but I couldn't find it...
comment by simplicio · 2010-03-22T19:03:30.097Z · LW(p) · GW(p)
This is totally irrelevant, but I just had to share it.
I use the Tony Marloshkovips system for memorizing numbers, such as phone numbers, Social Insurance Numbers, physical constants, product codes at the grocery store, etc. It's very handy.
Anyway, I had to identify myself by my SIN today on the phone for loan purposes. But there was no record of my SIN in their database. I repeated it - still wrong. I finally got through by telling the chap on the phone my date of birth.
Turns out the number I was telling him was the speed of light in m/s (299 792 458 - "nippy back pain relief"). It's not my fault they have the same number of digits!
comment by TraditionalRationali · 2010-03-13T19:11:28.600Z · LW(p) · GW(p)
An interesting dialogue at BHTV about transhumanism between cishumanist Massimo Pigliucci and transhumanist Mike Treder. Pigliucci is, among other things, blogging at Rationally Speaking. This BHTV dialogue is partly a follow-up to Pigliucci's earlier blog post, the problems with transhumanism. As I (tonyf, July 16, 2009 8:29 PM) commented then, despite the title of his blog post, Pigliucci's then rather sweeping criticism was based more on a (I think misleading) generalisation from an article by one Munkittrick than on an actual study of the "transhumanist" community. The present BHTV dialogue was in a rather different tone, and it seemed Pigliucci and Treder understood each other rather well. (As of now I do not see any mention of the dialogue on Rationally Speaking; it will be interesting to see if he makes any further comment.)
I do not have time to comment on the dialogue in detail. But I will say that neither Pigliucci nor Treder distinguished between consciousness and intelligence. Pigliucci pointed out very clearly that the concept of "mind uploading" presupposes the "computational hypothesis of consciousness" to be true, but (at least from a materialistic point of view) it is not at all clear why it should be true. From that, however, he tacitly drew the conclusion (it seemed to me, at least after a single viewing of the dialogue) that [general] intelligence also depends on that assumption. Which I cannot see how it should. Is not the connection (or not) between consciousness and intelligence a so-far open question?
Replies from: zero_call, zero_call↑ comment by zero_call · 2010-03-14T00:18:29.334Z · LW(p) · GW(p)
From that, however, he tacitly drew the conclusion (it seemed to me, at least after a single viewing of the dialogue) that [general] intelligence also depends on that assumption.
No, Pigliucci agrees that it might be possible to get an intelligence (e.g., that passes the Turing test) through the computer system. He just does not think that you can call it a human intelligence.
He thinks the concept of "mind uploading" is silly because the human mind (and intelligence) is therefore fundamentally different from this computer mind. He also argues that the human mind is inseparable from the biological construction. I have to admit I am not surprised that this argument is coming from a biologist. To a physicist or an engineer, almost all problems and constructs are computational, and it's just a matter of figuring out the proper model. As a biologist, it is more difficult to see how living entities follow similar sorts of fundamental rules. In objecting to the computational theory of mind, Pigliucci objects to the computational theory of reality, and in essence, he contradicts himself. He reveals himself to be a dualist. I think he is confusing the mathematical or logical abstraction of a system (not dualistic) with the physical or material abstraction (dualistic).
↑ comment by zero_call · 2010-03-13T23:31:12.555Z · LW(p) · GW(p)
Good link. Question: In one part of the discussion, Pigliucci mentions that we know how chess players seem to think (and it's not at all like chess-playing computer programs). Does anyone have any good references on how chess players think?
comment by Karl_Smith · 2010-03-11T18:40:19.093Z · LW(p) · GW(p)
I'd appreciate some feedback on a brain dump I did on economics and technology. Nothing revolutionary here. Just want people with more experience on the tech side to check my thinking.
Thanks in advance
http://modeledbehavior.com/2010/03/11/the-economics-of-really-big-ideas/
Replies from: timtyler, RobinZ↑ comment by timtyler · 2010-03-12T09:48:35.553Z · LW(p) · GW(p)
Re: "Already we have computer programs which can re-write existing programs to run faster. These programs can also re-write themselves to run faster. However, they cannot rewrite themselves to become better at re-writing themselves faster."
You mean that they can't do that alone? Refactoring programs help speed up their own development, and make it easier and faster to make improvements in a set of programs that often includes their own source code.
It's not total automation - but partial automation is still very significant progress.
Replies from: Karl_Smith↑ comment by Karl_Smith · 2010-03-12T16:52:18.345Z · LW(p) · GW(p)
Tim,
Thanks, input like this helps me try to think about the economic issues involved.
Can you talk a little about the depth of recursion already possible? How much assistance are these refactoring programs providing? Can the results be used to speed up other programs, or can they only improve their own development, etc.?
Replies from: timtyler↑ comment by timtyler · 2010-03-13T00:35:55.294Z · LW(p) · GW(p)
To quote from my essay relating to this:
"Refactoring: Refactoring involves performing rearrangements of code which preserve its function, and improve its readability and maintainability - or facilitate future improvements. Much refactoring is done by daemons - and their existence massively speeds up the production of working code. Refactoring daemons enable tasks which would previously have been intractable."
Refactoring programs are indispensable for most application programmers in Java and other machine-readable languages. They are of limited use for C/C++ because of preprocessor mangling. When refactoring hit the mainstream in Eclipse, years ago, many programmers found their productivity increased dramatically, and they found they could easily perform refactorings that would have been practically impossible to do manually.
Refactoring is a fairly general tool. I am not sure about your "recursion" question. Modeling this as some kind of recursive function that bottoms out somewhere does not seem particularly appropriate to me. Rather, it represents the partial automation of programming. Similarly, unit tests are the automation of testing, and compilers are the automation of assembly.
Computer programming and software development have many places where automation is possible, and the opportunities are gradually being taken up.
comment by [deleted] · 2010-03-11T18:06:02.680Z · LW(p) · GW(p)
One career path I'm sort of musing about is working to create military robots. After all, the goals in designing a military robot are similar to those in designing Friendly AI: the robot must know somehow who it's okay to harm and what "harm" is.
Does this seem like a good sort of career path for someone interested in Friendly AI?
Replies from: jimrandomh, FAWS, cousin_it, Peter_de_Blanc, SilasBarta, Vladimir_Nesov, Kevin, rwallace, Mitchell_Porter, thomblake, Daniel_Burfoot↑ comment by jimrandomh · 2010-03-12T00:21:57.651Z · LW(p) · GW(p)
If you work on AGI and you make actual progress, then you have a moral obligation to keep it away from people who can't be trusted with it. You cannot satisfy this obligation while working for a military or a military contractor.
↑ comment by cousin_it · 2010-03-11T21:11:05.370Z · LW(p) · GW(p)
Am I the only one to think that no, creating military robots isn't a "good career path" towards friendly AI, because creating military robots is inherently unfriendly to humanity? Especially if you live in the US and know that your robots will be used in aggressive wars against poorer countries. It's some kind of crazy ethical blindness that most Americans seem to have for some reason, where "our guys" are human beings, but arbitrarily chosen foreigners deserve whatever they get... Just like this incident I saw on HN when one guy asked about career prospects working for the occupation force in Iraq, and another answered that it'll be an "amazing and unique experience". You'll note my reply there was much more concise.
Replies from: Jack, Rain, Rain, thomblake, JGWeissman↑ comment by Jack · 2010-03-12T07:57:53.340Z · LW(p) · GW(p)
It's some kind of crazy ethical blindness that most Homo sapiens seem to have for some reason, where "our guys" are human beings, but arbitrarily chosen foreigners deserve whatever they get
Fixed it for you.
And the reason is evolved psychological instincts with pretty obvious selection benefits.
Replies from: FAWS↑ comment by FAWS · 2010-03-12T12:46:49.965Z · LW(p) · GW(p)
I don't think that's an accurate correction. Because America is the current hegemonic power, Americans can get away with feeling that other nations aren't "real" in the sense the USA is. For example, when considering some hypothetical situation that would concern the whole planet, an American might only consider how the USA would react, while anyone else in the same situation would, in addition to the reaction of their own nation, at the very least also have to consider how the USA reacts, and might even consider other nations, since their situation is more obviously symmetrical to their own.
Replies from: Jack↑ comment by Jack · 2010-03-12T13:23:33.264Z · LW(p) · GW(p)
Because America is the current hegemonic power Americans can get away with feeling that other nations aren't "real" in the sense the USA are.
I'm afraid I don't know what this means.
For example, when considering some hypothetical situation that would concern the whole planet, an American might only consider how the USA would react, while anyone else in the same situation would, in addition to the reaction of their own nation, at the very least also have to consider how the USA reacts, and might even consider other nations, since their situation is more obviously symmetrical to their own.
There might be pragmatic realities that force non-Americans to consider the reactions of foreigners more than Americans must. Americans have two oceans and the world's strongest military to keep a lot of foreign troubles far away; other people do not. But this isn't evidence that Americans care less about foreigners than those from other countries do. It sounds like you're talking about a political blindness instead of an ethical blindness. Besides, there is equally good reason to think America's hegemonic status makes Americans more worried about foreign goings-on, since American lives and American business concerns are more often at stake.
Replies from: FAWS↑ comment by FAWS · 2010-03-12T14:16:18.938Z · LW(p) · GW(p)
I'm afraid I don't know what this means.
Not "real" is the best description I have. You could say having the same sort of attitude towards other nations you might have towards Oz, Middle Earth, or the Empire from Star Wars, even though you intellectually know that they really exist, but that only comes close to what I mean. I must stress that not all Americans have this attitude, but some seem to, and that's enough to influence the discourse.
But this isn't evidence that Americans care less about foreigners than those from other countries do. It sounds like you're talking about a political blindness instead of an ethical blindness.
I was thinking more of, e.g., first-contact situations in SF stories and things like that, not necessarily normal international politics, but I think it extends to all fields: domestic politics (the amount and kind of consideration given to the fact that a policy seems to work well somewhere else), pop culture, sports, science, language learning - wherever one might consider other nations, Americans have more leeway not to do so. This doesn't by necessity have to extend to ethical considerations, but when cousin_it observes that it appears to, it seems inappropriate to me to "correct" that out.
Replies from: Jack, FAWS↑ comment by Jack · 2010-03-12T16:16:46.463Z · LW(p) · GW(p)
I must stress that not all Americans have this attitude, but some seem to, and that's enough to influence the discourse.
Exactly zero evidence has been presented that Americans have this ill-defined attitude at a higher rate than non-Americans.
wherever one might consider other nations Americans have more leeway not to do so.
No reason given to think this is the case on balance.
This doesn't by necessity have to extend to ethical considerations, but when cousin_it observes that it appears to it seems inappropriate to me to "correct" that out.
The obvious and straightforward interpretation of cousin_it's comment was that he was referring to American nationalism - a real and quite common phenomenon in which Americans don't give a lick about people who don't live in their country (in civilized places this is referred to as racism). I've met plenty of people with this view. It is a disgusting and immoral attitude. That said, it is a near-ubiquitous attitude. Humans have been killing humans from other groups and not giving a shit for as long as there have been humans. We're good at it. Really good. We do it like it's our job. In no way is this unique to residents or citizens of the United States of America. If cousin_it meant something else he can clarify. He's been commenting elsewhere throughout this conversation anyway.
(Not my downvote, btw)
Replies from: Clippy, FAWS, mattnewport, FAWS↑ comment by Clippy · 2010-03-12T16:48:40.672Z · LW(p) · GW(p)
It is a disgusting and immoral attitude. That said, it is a near ubiquitous attitude. Humans have been killing humans from other groups and not giving a shit for as long as there have been humans. We're good at it. Really good. We do it like it's our job. In no way is this unique to residents or citizens of the United States of America. If cousin_it meant something else he can clarify. He's been commenting elsewhere throughout this conversation anyway.
Yes! Thank you! Finally, a human user says what I've been trying to say all along! (See for example here.)
On my first visit to Earth (or perhaps the first visit of one of my copies before a reconciliation), my reaction was (translated from the language of my logs):
"The Alpha species [i.e. humans] inflicts disutility on its members based on relative skin redness. I'm silver. Exit!"
↑ comment by FAWS · 2010-03-12T17:33:14.835Z · LW(p) · GW(p)
The obvious and straightforward interpretation of cousin_it's comment was that he was referring to American nationalism - a real and quite common phenomenon in which Americans don't give a lick about people who don't live in their country (in civilized places this is referred to as racism). I've met plenty of people with this view. It is a disgusting and immoral attitude. That said, it is a near-ubiquitous attitude. Humans have been killing humans from other groups and not giving a shit for as long as there have been humans. We're good at it. Really good. We do it like it's our job. In no way is this unique to residents or citizens of the United States of America. If cousin_it meant something else he can clarify. He's been commenting elsewhere throughout this conversation anyway.
While everything you say about nationalism is true, it's not obvious to me that it explains what cousin_it was talking about, at least not to its full extent. Degradation of other people through nationalism usually evokes hate ("those damned X!"), while the linked comment seemed too cheerful for that; it's not like it encouraged anyone to "help show it to those stinkin' Arabs" or anything like that. As if the fact that someone might be hurt simply didn't occur to them. There has been plenty of that in other historical cases of nationalism, but I think usually only in similarly asymmetrical situations. Nationalism in symmetrical situations seems to be of the plain hate kind.
Replies from: Jack↑ comment by Jack · 2010-03-12T17:51:08.976Z · LW(p) · GW(p)
Degradation of other people through nationalism usually evokes hate ("those damned X!"), while the linked comment seemed too cheerful for that; it's not like it encouraged anyone to "help show it to those stinkin' Arabs" or anything like that, as if the fact that someone might be hurt simply didn't occur to them.
Nationalism almost always displays as willful ignorance or apathy about the condition of those outside the nation. It's nation-centrism, in other words. Hatred is an extreme case (thus the moniker "ultra-nationalism").
Nationalism in symmetrical situations seems to be of the plain hate kind.
This just isn't true. At all. I'm not even sure where you would get it. There are nationalists all around the world who do not express hate toward other nations, even in cases of power symmetries.
More importantly: Why are we arguing about this? Cousin_it isn't some old philosopher or public intellectual who we can't reach for clarification. If he wants to correct my understanding of his comment let him do it.
Replies from: cousin_it, FAWS↑ comment by cousin_it · 2010-03-13T17:47:57.383Z · LW(p) · GW(p)
Sorry for taking so much time to reply. FAWS is right, I'm not saying Americans hate foreigners. It's more like a blindness or deafness. See my link above to the "amazing and unique experience" guy. The ethical angle of the situation simply doesn't occur to him, it's as if Iraqis were videogame characters. America's fighting an aggressive war and killed umpteen thousand people?... uh, okay man, I got a career to advance and I wanna go someplace exotic, like expand my horizons and shit. I've never heard anything like that from Russians or anyone else except Americans, though I'd be the first to agree that we Russians are quite nationalistic.
↑ comment by FAWS · 2010-03-12T18:07:17.545Z · LW(p) · GW(p)
Nationalism almost always displays as willful ignorance or apathy about the condition of those outside the nation.
The original disagreement wasn't about the term nationalism (and I never claimed that nationalism didn't explain it, only that what you said about nationalism up to that point didn't), so you seem to be arguing my point here: For the reasons I described it's easier for Americans to be "ignorant about the condition of those outside the nation".
This just isn't true. At all. I'm not even sure where you would get it. There are nationalists all around the world who do not express hate toward other nations, even in cases of power symmetries.
You can't keep hurting someone and not even notice you do in a symmetrical conflict because they will hurt you back, and then you will want revenge in turn.
More importantly: Why are we arguing about this?
You seem to be of the opinion that you can't even coherently/rationally (?) think a certain thing and I disagree. That disagreement is independent of the question whether anyone had actually been thinking that.
EDIT: Nation-centrism is close to what I meant with not feeling that other nations are "real".
Replies from: Jack↑ comment by Jack · 2010-03-12T20:31:57.823Z · LW(p) · GW(p)
For the reasons I described it's easier for Americans to be "ignorant about the condition of those outside the nation".
"willful" ignorance... Do we really need to spend time distinguishing nationalism from the fact that the US gets the NBA?
You can't keep hurting someone and not even notice you do in a symmetrical conflict because they will hurt you back, and then you will want revenge in turn.
So what you want to claim is that asymmetrical conflict is more likely than symmetrical conflict to lead to people in one country being ignorant of the animosity against them in the other country. This is plausible, though several counterexamples come to mind, and I'm not sure it applies, since a large portion of American nationalists appear to conceive of the conflict as a symmetrical one (this has been a minor issue in American politics, of course). I'm not sure I see how this issue relates to nationalism exactly or what its relevance is. But as you can see below, I'm not sure I understand what you're claiming at this point.
You seem to be of the opinion that you can't even coherently/rationally (?) think a certain thing and I disagree. That disagreement is independent of the question whether anyone had actually been thinking that.
WHAA? This is incredibly vague and confusing. I honestly have no idea what you're talking about.
Replies from: FAWS↑ comment by FAWS · 2010-03-12T21:17:00.215Z · LW(p) · GW(p)
"willful" ignorance... Do we really need to spend time distinguishing nationalism from the fact that the US gets the NBA?
And the fact that you neither need to make any significant sacrifices nor engage in double-think doesn't make willful ignorance easier?
So what you want to claim is that asymmetrical conflict is more likely than symetrical conflict to lead to people in one country being ignorant of the animosity against them in the other country.
Not really. The term nationalism is unhelpful. There seem to be at least two kinds: the we're-great-don't-care-about-anyone-else nation-centric one, and the unite-against-the-enemy-us-or-them kind. My point is that being a hegemonic power facilitates the nation-centric kind. The sub-point is that a hot symmetric conflict turns nationalism into the second kind pretty much by necessity, even if it started out as the first kind. An asymmetric conflict of course allows either kind in the stronger party; presumably that's what your counterexamples show.
WHAA? This is incredibly vague and confusing. I honestly have no idea what you're talking about.
Presumably you detected a feature that made the post knowably correctable. If that feature wasn't an incoherent or irrational (in light of further evidence you have available) opinion, what was it?
↑ comment by mattnewport · 2010-03-12T17:22:36.734Z · LW(p) · GW(p)
A real and quite common phenomenon in which Americans don't give a lick about people who don't live in their country (in civilized places this is referred to as racism).
That sounds like nationalism rather than racism to me. The country you live in has only a loose correlation with the colour of your skin. If people favoured countries which had a strong majority of people of a particular ethnicity that might be evidence for racism.
Replies from: Jack↑ comment by FAWS · 2010-03-12T16:50:19.018Z · LW(p) · GW(p)
No reason given to think this is the case on balance.
Because I thought it would be obvious enough. Americans are less likely to learn foreign languages; most Americans don't even have a passport; it's easier to write a science paper without referencing any non-American research (not that I think this is done at a significant rate, but the equivalent would be unthinkable elsewhere); foreign movies are generally either ignored or remade (and set in the USA if possible); foreign trade is a smaller percentage of GDP than in just about any other developed nation; it's possible to "buy American" for a greater range of products than the equivalent anywhere else; and America has the top leagues for the sports it cares about. (It's not just that America cares for different sports than the rest of the world: for almost all countries, the top level of the sport that country cares most about is at least in part played elsewhere, so a soccer fan in e.g. Romania has to pay attention to the English Premier League, the Spanish Primera División, etc. - and even the English and Spanish fans have incentive to pay attention to each other's leagues, because they are at roughly equal level and the top teams regularly play each other. If America cared about soccer, the top league would be there, so Americans still wouldn't have any reason to pay attention to foreign sports.)
Replies from: thomblake, FAWS↑ comment by thomblake · 2010-03-12T20:11:28.441Z · LW(p) · GW(p)
I think most of those things could be expected regardless of whether America has any such putative hegemonic status. Most Americans don't have passports because they can't afford to travel to another continent, and the number is rising now that passports are required to visit other countries in North America. Getting a passport in the US is a fairly annoying, expensive process, so I'm not surprised most people haven't bothered. Ditto with the foreign languages - most Americans don't meet or talk to people who don't speak American.
I haven't been able to find a source online - do most Chinese people speak foreign languages and have passports? Are they required?
Replies from: FAWS↑ comment by FAWS · 2010-03-12T20:45:07.328Z · LW(p) · GW(p)
Most Americans don't have passports because they can't afford to travel to another continent, and the number is rising now that passports are required to visit other countries in North America. Getting a passport in the US is a fairly annoying, expensive process, so I'm not surprised most people haven't bothered.
Getting a passport is a bother everywhere, the point is that Americans don't really need a passport because their country is huge, rich and powerful and they can take a vacation in whatever climate they like without ever leaving their borders. People in other developed nations would have to make much greater sacrifices to never travel abroad.
Ditto with the foreign languages - most Americans don't meet or talk to people who don't speak American.
That's exactly my point! They can do that without missing all that much, unlike most of the planet.
do most Chinese people speak foreign languages
IIRC, compulsory foreign language instruction (mostly in English) starts in third grade, and many educated Chinese learn a third or fourth language later. For many Chinese, Mandarin is effectively an L2 language, so they know their native dialect, Mandarin, and some English. The state of English learning is mostly horrible and only a minority can communicate effectively, but I'd think that Chinese on average speak better English than non-native-speaker Americans speak Spanish, and the difficulty is much greater.
I'm not all that clear about the passport situation/foreign travel and China is a bad example anyway because it is itself an enormous country and very "nation-centric", but a huge number of Chinese study abroad, while there is no comparable reason for Americans to do so because they already have many of the most prestigious universities.
↑ comment by FAWS · 2010-03-12T15:44:21.192Z · LW(p) · GW(p)
Why was this voted down? Was there anything in this post that isn't either objectively true (Americans have more leeway to ignore other nations) or clearly marked as speculation ("seem to")? Is it inherently irrational to consider the hypothesis that cousin_it's observation was meant exactly as stated, and then to speculate about what might be behind this observation?
↑ comment by Rain · 2010-03-12T16:16:03.013Z · LW(p) · GW(p)
"War is bad, the military industrial complex is evil," sounds good, and it hits all the right emotional buttons (care for humanity, etc.), but it is not necessarily true when all of the costs and benefits are taken into account. A defensive military allows intellectual, cultural, economic, and artistic endeavors to flourish without fear of attack. Destruction of infrastructure can open the way for rebuilding into a far better environment, and massive war spending can push the boundaries of technology. Reshaping political landscapes can cause huge culture shifts through decades which may result in much more open, and better, societies.
Suffering is terrible; death is abhorrent; and the benefits are uncertain enough that they should not be used as arguments to start an otherwise preventable war. But I do not see how we can appropriately judge the complex results of "war in general" on the timeline of decades or centuries.
What I can certainly agree with is that contributing to the military is bad on the margins, since it's already getting more than its share of resources thanks to others of a more bloodthirsty bent.
Replies from: cousin_it↑ comment by cousin_it · 2010-03-12T19:16:09.118Z · LW(p) · GW(p)
A defensive military allows intellectual, cultural, economic, and artistic endeavors to flourish without fear of attack.
At this point I laughed with a kind of sad laugh. Everyone who thinks America will use military robots for self-defense, raise your hands! On the other hand, you've made a wonderful argument that a strong offensive US military stifles cultural/economic/artistic endeavours worldwide due to fear of attack, though I'm sure you didn't mean to.
Replies from: Rain↑ comment by Rain · 2010-03-12T19:35:33.457Z · LW(p) · GW(p)
Everyone who thinks America will use military robots for self-defense, raise your hands!
They will use them for defense as well as for offense. I've already seen several articles about American cities ready to purchase military drones for law enforcement purposes, and I would be very surprised if they were not also added to strategic military bases within America to defend against potential attackers. At the very least, when countries are making strategy decisions that may involve the military, the mere existence of drones will serve as a deterrent.
On the other hand, you've made a wonderful argument that a strong offensive US military stifles cultural/economic/artistic endeavours worldwide due to fear of attack, though I'm sure you didn't mean to.
My point was to state the necessity of defense. If there are strong, warlike countries with military drones, such as the United States, then other countries had better start developing countermeasures to protect themselves. That, or ally themselves with the strong country in the hopes of falling under their protection rather than their ire. As such, staying ahead of the other countries is a valid strategy.
And I would certainly agree that US aggressiveness is stifling those very things in Iraq, Afghanistan, Iran, etc. The word 'fear' was poorly chosen. I was thinking more of what happened to Tibet and all those pacifists when they failed to muster an appropriate military defense: actual invasion and displacement or destruction.
Replies from: Rain, thomblake↑ comment by thomblake · 2010-03-12T19:42:50.910Z · LW(p) · GW(p)
I've seen several articles already of American cities ready to purchase military drones for law enforcement purposes
Oddly I don't seem to have a reference handy, but several US cities already use robots in law enforcement. iRobot and Foster-Miller really took off after the success of their robot volunteers at the WTC.
↑ comment by Rain · 2010-03-11T21:29:43.581Z · LW(p) · GW(p)
How much harm do you contribute by working to enable military robots?
How much harm do you contribute by paying taxes to the US government, part of which are used to fund military robots?
How much harm do you contribute by existing, living in the US, and absorbing a huge amount of electricity and other natural resources?
Replies from: Rain, cousin_it↑ comment by Rain · 2010-03-11T21:38:00.774Z · LW(p) · GW(p)
Well, that was voted down pretty rapidly :)
However, I was being honest with my questions. I'd like to know what sort of utilon adjustments people assign to these different situations, even if it's just a general weighting like 'high' or 'low'.
Replies from: Kevin, AdeleneDawner, RobinZ↑ comment by AdeleneDawner · 2010-03-13T10:37:14.495Z · LW(p) · GW(p)
As I see it, it's less about how much harm those specific things do, and more about how viable the alternatives are. I expect that all governments make tax avoidance/evasion difficult, and I suspect that paying taxes to any government will support a military. The lifestyle changes involved in actually living sustainably (as opposed to being 'slightly better than the US average' or applying greenwash) seem pretty significant and possibly unattainable for most of us, as well. (I could be wrong on the latter in a general sense; I haven't looked into it, since I'm already relatively sure that it's beyond what I, personally, could manage.) Given that Warrigal was asking about the career move, though, I expect that he does have other viable options that could be pursued without completely turning his life upside down, and that's a significant difference between this decision and the other two.
Replies from: wnoise, Rain↑ comment by Rain · 2010-03-13T13:46:35.277Z · LW(p) · GW(p)
As I see it, it's less about how much harm those specific things do, and more about how viable the alternatives are.
How viable, given that you want to live in relative comfort and ease. But if a true valuation is made, then perhaps that should not be taken as given, considering the costs.
↑ comment by thomblake · 2010-03-12T19:30:33.692Z · LW(p) · GW(p)
There are various arguments that building military robots is bad, but I don't think you've touched on any good ones. When you look at how unreliable human soldiers are on the field, creating military robots just seems like an obvious way to make things better for everyone involved. Fewer American casualties because we're using robots, and fewer civilian casualties because the robots are better at not shooting at civilians.
Also, FWIW, most military robots currently aren't the sort that shoot people - they do things like look around corners, draw fire, perform aerial surveillance, and detect/defuse bombs.
Replies from: cousin_it↑ comment by cousin_it · 2010-03-12T19:42:10.527Z · LW(p) · GW(p)
This is ironic. I wrote:
It's some kind of crazy ethical blindness that most Americans seem to have for some reason, where "our guys" are human beings, but arbitrarily chosen foreigners deserve whatever they get...
Then you wrote:
...an obvious way to make things better for everyone involved. Fewer American casualties because we're using robots, and fewer civilian casualties because the robots are better at not shooting at civilians.
This happens to pixel-perfectly demonstrate my point about ethical blindness. Reread my quote again, then your quote, then mine, then yours again. Notice anything wrong? Anything missing?
You see, you omitted one pretty important group: everyone America calls "enemy combatants". If you think all of them are bad people and deserve to die, then you obviously don't get it. Repeat after me: America Starts Aggressive Wars. Then say it again because it's true and truth won't suffer from repetition. Say it as many times as you need to make it sink in, then come back and we will resume this discussion.
Replies from: thomblake, thomblake↑ comment by thomblake · 2010-03-12T19:50:30.364Z · LW(p) · GW(p)
everyone America calls "enemy combatants"
America will be killing those people with or without robots. We already have ways of wiping all of the enemy combatants off the map if we want to (for example nukes). Military technology is primarily about finding ways to 1) kill fewer of our own soldiers and 2) kill fewer people who aren't enemy combatants.
Replies from: jimrandomh, FAWS↑ comment by jimrandomh · 2010-03-12T20:05:44.946Z · LW(p) · GW(p)
America will be killing those people with or without robots
Not necessarily. All else equal, the less it costs to wage a war (in money, American lives, and good will), the more likely leaders are to actually start one.
↑ comment by FAWS · 2010-03-12T19:59:42.527Z · LW(p) · GW(p)
Ignoring the question of whether that's desirable (politics is the mindkiller), reducing the cost of killing those people will lead to more of them being killed in marginal situations where such considerations matter.
Replies from: thomblake↑ comment by thomblake · 2010-03-12T20:03:41.023Z · LW(p) · GW(p)
Yes, that's one of the good arguments against robot soldiers I mentioned above. We're more likely not to care about the fate of our robot soldiers, and so would be less hesitant to send them into battle. Though it's still an open question whether that effect would trump any increased monetary cost per soldier (if any) and whether the other benefits outweigh such concerns.
Human soldiers perform horribly at following the rules of war, and beyond that sometimes do absolutely horrible things.
↑ comment by thomblake · 2010-03-12T19:51:35.872Z · LW(p) · GW(p)
America Starts Aggressive Wars
Also, this is definitely not the place to debate this, and you have to know a lot of people won't agree with you, so stop with the flamebait.
Replies from: wnoise, cousin_it↑ comment by wnoise · 2010-03-12T21:26:01.290Z · LW(p) · GW(p)
You don't even have to go as far as "America Starts Aggressive Wars" -- "Under the right conditions, America is capable of starting aggressive wars, and is more likely to do so if the cost of doing so is lowered."
Look, I get the "Politics is the Mind Killer" mantra, and I agree that it would be fruitless to start a debate about something like abortion here -- it comes down to definitions and conventions about what is moral.
But when something is actually, demonstrably, true, refusing to look at and examine the truth because it is painful to do so is not compelling. It doesn't even trigger most of the reasons in "politics is the mindkiller" -- both major U.S. political parties are just fine with most of the examples. The only two teams that can credibly be put in opposition here are "U.S.A." and "Everyone else".
Replies from: Jack↑ comment by Jack · 2010-03-12T21:35:23.022Z · LW(p) · GW(p)
You don't even have to go as far as "America Starts Aggressive Wars" -- "Under the right conditions, America is capable of starting aggressive wars, and is more likely to do so if the cost of doing so is lowered."
It is worth noting that to complete the argument someone needs to show that America starting aggressive wars is bad. The people starting such wars, it turns out, have their reasons.
Replies from: CronoDAS↑ comment by cousin_it · 2010-03-12T20:13:41.345Z · LW(p) · GW(p)
Why flamebait? I stated a very well-known fact.
http://en.wikipedia.org/wiki/Bay_of_Pigs_Invasion
http://en.wikipedia.org/wiki/Operation_Power_Pack
http://en.wikipedia.org/wiki/Operation_Urgent_Fury
http://en.wikipedia.org/wiki/Operation_Just_Cause
More here: http://en.wikipedia.org/wiki/CIA_sponsored_regime_change
ETA: to tell the truth, until I dug up that last Wikipedia page just now for purposes of argument, I still had no clear idea how much this happened. And give these people autonomous killer robots? In the name of developing Friendly Intelligence?
Replies from: SilasBarta, Jack, thomblake↑ comment by SilasBarta · 2010-03-12T20:21:23.971Z · LW(p) · GW(p)
1) Politics is the mind killer, 2) Agree denotationally but not connotationally
↑ comment by thomblake · 2010-03-12T20:24:56.523Z · LW(p) · GW(p)
I stated a very well-known fact.
That's why. Folks will disagree that's something that the US does, and pointing to things the US might have done decades ago won't convince them. There's no way to even debate this point without going down a potentially mind-killing rabbit hole, and I find it hard to believe you weren't aware of this when you posted it.
In case you weren't aware of it: I live in the US, and I've talked to a number of ordinary folks and a number of scholarly folks about it, and I don't tend to encounter people who would grant that the US starts aggressive wars. You should be able to see why someone who thinks that would be angry and vocal about the accusation.
Replies from: cousin_it↑ comment by JGWeissman · 2010-03-11T21:30:14.515Z · LW(p) · GW(p)
Creating military robots can be friendly, if:
Lbh fryy gur ebobgf gb nyy fvqrf, ercynpvat uhzna nezvrf, naq unir gurz evttrq gb abg npghnyyl svtug rnpu bgure, ohg vafgrnq gnxr njnl gur rssrpgvir cbjre bs gur tbireazragf gung jnagrq nyy gur jnef.
(Rot13)
Replies from: cousin_it↑ comment by cousin_it · 2010-03-11T21:37:20.479Z · LW(p) · GW(p)
Unfortunately, this isn't a realistic option if you're an employee at a big military contractor, which is the most likely scenario...
Replies from: JGWeissman↑ comment by JGWeissman · 2010-03-11T21:54:41.671Z · LW(p) · GW(p)
Well, yeah, there is no way someone at standard human level would pull off what happened in that story.
↑ comment by Peter_de_Blanc · 2010-03-12T17:35:56.204Z · LW(p) · GW(p)
The difference between specialized FAI and general FAI is like the difference between adaptation executors and fitness maximizers. It's a big difference.
Replies from: FAWS↑ comment by FAWS · 2010-03-12T17:42:19.094Z · LW(p) · GW(p)
Is specialized FAI even a meaningful term? ISTM that to implement actual friendliness even in a specialized application an AI needs capabilities that imply AGI.
Replies from: Peter_de_Blanc↑ comment by Peter_de_Blanc · 2010-03-12T17:46:10.827Z · LW(p) · GW(p)
It's a nonstandard term that seemed appropriate to the discussion. By specialized FAI, I mean an AI that reliably does the thing it was made to do in a specific context.
Replies from: FAWS↑ comment by SilasBarta · 2010-03-11T19:33:13.504Z · LW(p) · GW(p)
Sounds like a good idea, but here are my reservations/warnings:
1) For the kind of work you describe, you would probably need a high-level security clearance and continued scrutiny on your life (to make sure you don't share it with the wrong people), and you probably wouldn't be able to publicly discuss your work. (i.e., where SIAI can hear it.)
2) What are your chances you'll actually get to work on the aspect of the problem that relates to Friendliness?
Replies from: Rain↑ comment by Rain · 2010-03-11T21:16:00.854Z · LW(p) · GW(p)
The scrutiny isn't so bad. They're mainly looking for illegality or potential for corruption. And even if you've committed illegal acts, so long as you own up to it, and it wasn't in the recent past (5 to 7 years), it's generally OK. Felonies are a different matter, of course.
A secret clearance involves an interview, fingerprinting, interviews of family, friends, and neighbors, a credit check, and will likely require drug testing. Top secret clearances and above lead to polygraphs and heavy grilling, with monitoring for new developments. They're renewed every few years, going through the process again.
Most of the military drone programs would be given to one large contractor like Lockheed Martin or NGIT, with lots of smaller subcontractors. A security clearance at secret level or above takes up to 9 months, costs the company over $10,000, and adds that much or more to that person's annual salary potential, so it's not something they hand out lightly.
Most contracting agencies put a small, already-cleared team on the activities that require it, and farm out most of the work (documentation, mundane code, etc.) to people without clearances. If they need more people with clearances, they tend to get temporary waivers for the duration of the work (90 days or less, for example). Most only see a small part of the whole, and you don't choose your projects; your company does.
These are not good environments to learn complex, high-level things like Friendliness.
Replies from: SilasBarta↑ comment by SilasBarta · 2010-03-11T21:49:41.918Z · LW(p) · GW(p)
It wasn't so much the background scrutiny I'm worried about so much as,
"Alright, it's been fun doing this research on human-level intelligent robots. Oh, hey, I'm going to go to an AI conference in Shanghai..."
"Hahahahahaha! Good one! Um ... were you being serious?"
↑ comment by Rain · 2010-03-11T21:56:33.561Z · LW(p) · GW(p)
Yeah, that could get you in big trouble.
Replies from: SilasBarta↑ comment by SilasBarta · 2010-03-11T22:05:24.265Z · LW(p) · GW(p)
Yep. And so could the appearance on the internet of an e-book about "How to build a human-level armed android, by Warrigal", when Warrigal has worked at such a job.
And if you go to a potentially hostile country without telling them ... well, I guess you'll get the option of a PMITA federal prison, or solitary.
↑ comment by Vladimir_Nesov · 2010-03-11T18:30:08.690Z · LW(p) · GW(p)
No. FAI is about figuring out how to implement precise preference, not an approximation of it appropriate for non-magical environments. Requires completely different tools.
It seems that to work on FAI, one has to become mathematician and theoretical computer scientist (whatever the actual career).
Replies from: None↑ comment by [deleted] · 2010-03-11T18:52:45.651Z · LW(p) · GW(p)
What do you mean by "non-magical environments"?
Replies from: Vladimir_Nesov↑ comment by Vladimir_Nesov · 2010-03-11T20:52:33.749Z · LW(p) · GW(p)
I gave a link! A non-magical environment gives limited expressive power, so there are few surprising situations that given heuristics don't capture. With enough testing and debugging, you may get your weakly intelligent robot to behave. Where more possibilities are open, you have to get preference exactly, or the decisions will be obviously wrong (see The Hidden Complexity of Wishes).
Replies from: RobinZ↑ comment by Kevin · 2010-03-11T21:11:03.442Z · LW(p) · GW(p)
I have very little in the way of morality, but I personally draw the line at supporting the military industrial complex. I don't think helping the military make robots that make kill decisions themselves has much to do with provable mathematical Friendliness.
Replies from: wedrifid↑ comment by wedrifid · 2010-03-12T00:40:53.432Z · LW(p) · GW(p)
It seems you are morally obliged to at least investigate possible mechanisms for tax evasion. But then, morality doesn't have all that much to do with consequences.
Replies from: Kevin↑ comment by Kevin · 2010-03-12T01:03:32.867Z · LW(p) · GW(p)
One practical way for me to evade taxes is to start a startup and sell it, which means my income will be taxed at the much lower capital gains rate.
Also, I draw a distinction between something I am comfortable doing, and the likely future progress of society as a whole. Killer robots aren't going away anytime soon, and except for the extra wars they will allow us to have, killer robots result in fewer US deaths and more effective military tactics than troops on the ground. I expect that US killer robots will be making kill decisions, or at least very strong kill suggestions that are followed 99% of the time, within 10 years. There's just too much data coming in too fast for a single human operator to be able to process.
If the African totalitarians are still around in 25 years, the possibility of being conquered by an army of killer robots may make them more amenable to internationally monitored elections.
So good and bad things will come about as a result of the killer robot armies of the future. It's really the military industrial complex as a whole I object to; robots making kill decisions is one of the less objectionable things within the military industrial complex.
Replies from: mattnewport↑ comment by mattnewport · 2010-03-12T01:09:00.014Z · LW(p) · GW(p)
One practical way for me to evade taxes is to start a startup and sell it, which means my income will be taxed at the much lower capital gains rate.
Uh, that's a pretty dumb thing to say. For one, starting a startup and selling it has rather broader consequences than a typical tax avoidance strategy. That's like suggesting moving to a third world country to cut down on your daily living expenses - your food and accommodation costs may indeed decrease but it significantly changes your life in all kinds of other ways as well. For another this would not be tax evasion but tax avoidance which has the rather significant difference of being entirely legal.
Replies from: Kevin↑ comment by Kevin · 2010-03-12T01:12:43.106Z · LW(p) · GW(p)
I'm fully aware of the distinction; I was playing with the ambiguous distinction between evasion and avoidance (as you say, the distinction being that avoidance is legal) by using the language of the person I replied to. I was trying to imply that there is no profound difference between avoidance and evasion, just the definitions given by the rule of law.
Replies from: mattnewport↑ comment by mattnewport · 2010-03-12T01:31:22.322Z · LW(p) · GW(p)
I assumed wedrifid knew the difference and was suggesting you were morally bound to evade rather than merely avoid taxes if you draw the line at supporting the military industrial complex. I don't necessarily agree with that but I took that to be his point.
I would have thought that maximizing tax avoidance is something that any aspiring rationalist ought to be doing as a matter of course.
I was trying to imply that there is no profound difference between avoidance and evasion, just the definitions given by the rule of law.
The fact that you can go to jail for tax evasion seems like a pretty profound difference from tax avoidance to me. The whole tax structure is 'just' the definitions given by the rule of law.
Replies from: RobinZ, Kevin↑ comment by RobinZ · 2010-03-12T01:48:04.315Z · LW(p) · GW(p)
I don't particularly want to avoid taxes, either - I like living in a country with a government.
Replies from: Kevin, mattnewport↑ comment by Kevin · 2010-03-12T01:54:36.322Z · LW(p) · GW(p)
I like living in a country with a government compared to Somalian anarchism, but not compared to libertarian utopia. This is getting close to politics.
Replies from: RobinZ↑ comment by RobinZ · 2010-03-12T02:19:13.542Z · LW(p) · GW(p)
As good a reason as any to drop the subject of tax avoidance.
Replies from: Kevin↑ comment by Kevin · 2010-03-12T02:24:07.731Z · LW(p) · GW(p)
Yes, Less Wrong could use some sort of Godwin's law analog, where a thread is declared dead or at least discouraged once it hits politics.
Replies from: mattnewport↑ comment by mattnewport · 2010-03-12T02:33:28.495Z · LW(p) · GW(p)
I think the general consensus is that we tread carefully when straying into political territory and tend to avoid explicitly political (certainly party political) discussion but that we don't entirely avoid discussion that has a political dimension. Taken to an extreme that would seem to preclude most topics of any interest or significance. Generally the standard of discourse is fairly high here and political slanging matches are avoided.
And I still don't consider it a political point that you basically fail at instrumental rationality if you overpay on your taxes.
↑ comment by mattnewport · 2010-03-12T01:56:04.571Z · LW(p) · GW(p)
I don't see the contradiction. The government creates the tax code with at least the stated intention of encouraging or subsidizing certain behaviours over others. That only works if people respond rationally to the incentives.
From the individual rationalist's point of view one should aim to optimize one's resources. In the context of taxes that generally means arranging your financial affairs to minimize the taxes paid without breaking the law. You can then choose how to best meet your own goals by allocating the money you save as you see fit.
It is only rational to not avoid taxes if you either believe the effort required to avoid them is not worth the money saved or if you believe that the optimal use of the money is to give it to the government. It seems unlikely in the latter case that the optimal amount to give to the government just happens to be the very amount they take from you so you should probably be voluntarily donating a larger portion of your income to the government. If you live in the US you should go here.
Replies from: orthonormal↑ comment by orthonormal · 2010-03-12T02:05:01.090Z · LW(p) · GW(p)
In the context of taxes that generally means arranging your financial affairs to minimize the taxes paid without breaking the law.
Since we were talking about choice of career among other things, it's worth stating that your actual incentive here more closely resembles "maximizing your after-tax income" than "minimizing your taxes paid".
Replies from: mattnewport↑ comment by mattnewport · 2010-03-12T02:11:01.018Z · LW(p) · GW(p)
True, I was focusing slightly more narrowly on the idea of minimizing your tax burden at your current income level without making major changes in your career, country of residence, etc. but on a longer timescale or in the context of broader life goals you are aiming to maximize your after-tax income rather than minimize the taxes you pay.
↑ comment by Kevin · 2010-03-12T01:43:45.949Z · LW(p) · GW(p)
I don't think I'm morally bound to evade taxes for the same reason I'm not morally bound to stop the world's massive amounts of animal suffering. My utility function breaks if I take my morality too seriously. As you say, I am somewhat bound morally to try and evade taxes or even actively stage insurrection against my government. Both of those seem like very bad ideas, as the state will just crush me.
Not working for the government in lieu of trying to bring down the government is similar to my decision to eat less meat rather than trying to make the whole world eat less meat. Yes, I am aware that these are not anywhere close to perfectly analogous decisions.
↑ comment by Mitchell_Porter · 2010-03-17T07:48:25.117Z · LW(p) · GW(p)
Cognitive neuroscience and cognitive psychology are far more relevant. A Friendly AI is a moral agent; it's more like a judge than a cruise missile. A killer robot must inflict harm appropriately but it does not need to know what "harm" is; that's for politicians, generals, and other strategists.
We have to extract the part of the human cognitive algorithm which, on reflection, encodes the essence of rational and moral judgment and action. That's the sort of achievement which FAI will require.
↑ comment by thomblake · 2010-03-12T19:26:38.670Z · LW(p) · GW(p)
The problems involved in creating ethical military robots are vastly different from those involved in general AI. Ron Arkin's Governing Lethal Behavior in Autonomous Robots does a good job of describing how one should think when building such a thing. Basically, there are rules for war, and the trick is to just implement those in the robot, and there's very little judgement left over. To hear him explain it, it doesn't even sound like a very hard problem.
Replies from: SilasBarta↑ comment by SilasBarta · 2010-03-12T19:41:20.816Z · LW(p) · GW(p)
To hear him explain it, it doesn't even sound like a very hard problem.
Then I'm not sure he understands the problem. How does the robot tell the difference between an enemy soldier and a noncombatant? When they're surrendering? When they're dead/severely wounded?
The rules of war themselves are fairly algorithmic, but applying them is a different story.
Replies from: thomblake↑ comment by thomblake · 2010-03-12T19:48:06.252Z · LW(p) · GW(p)
Well there's a bit of bracketing at work here. Distinguishing between an enemy soldier and a noncombatant isn't an ethical problem. He does note that determining when a soldier is surrendering is difficult, and points out the places where there really is an ethical difficulty (for example, someone who surrenders and then seems to be aggressive).
↑ comment by Daniel_Burfoot · 2010-03-12T04:23:25.621Z · LW(p) · GW(p)
This is a good question, I would appreciate more discussion of it on LW. I am wondering about similar issues: my research involves computer vision, the most obvious applications of which are for surveillance and security. One does not need to be a science fiction author or devotee to imagine powerful computer vision tools or military robots being used for evil.
Replies from: Hook, RobinZ↑ comment by Hook · 2010-03-12T14:13:47.245Z · LW(p) · GW(p)
Whether something can be used for evil or not is the wrong question. It's better to ask "How much does computer vision decrease the cost of evil?" Many of the bad things that could be done with CV can be done with a camera, a fast network connection, and an airman in Nevada, just as many of the good medical applications can be done by a patient postdoc or technician.
Replies from: RobinZ↑ comment by RobinZ · 2010-03-12T14:32:10.512Z · LW(p) · GW(p)
Better still is to ask, "What are the benefits and harms of doing this rather than something else, including cascading consequences on to the indefinite future?" Which, of course, is murderously hard to answer in cases this far removed from direct consequences.
Which is what I meant when I said computer vision research was not distinguished. Although upon consideration I would weaken the claim to "not strongly distinguished", which might still be enough to justify doing something else.
comment by PhilGoetz · 2010-03-17T01:15:45.071Z · LW(p) · GW(p)
This is from the friendly AI document:
Unity of will occurs when deixis is eliminated; that is, when speaker-dependent variables are eliminated from cognition. If a human simultaneously suppresses her adversarial attitude, and also suppresses her expectations that the AI will make observer-biased decisions, the result is unity of will. Thinking in the third person is natural to AIs and very hard for humans; thus, the task for a Friendship programmer is to suppress her belief that the AI will think about verself in the first person (and, to a lesser extent, think about herself in the third person).
Actually, thinking in the third person is unnatural to humans and computers. It's just that writing logic programs in the third person is natural to programmers. Many difficult representational problems, however, become much simpler when you use deictic representations. There's an overview of this literature in the book Deixis in Narrative: A cognitive science perspective (Duchan et al. 1995). For a shorter introduction, see A logic of arbitrary and indefinite objects.
Replies from: PhilGoetz↑ comment by PhilGoetz · 2010-03-17T15:24:00.052Z · LW(p) · GW(p)
Actually this may be a better link.
Part of the problem is that 3rd person representations have extensional semantics. If Mary Doe represents her knowledge about herself internally as a set of propositions about Mary Doe, and then meets someone else named Mary Doe, or marries John Deer and changes her name, confusion results.
A more severe problem becomes apparent when you represent beliefs about beliefs. If you ask, "What would agent X do in this situation?", and you represent agent X's beliefs using a 3rd-person representation, you have a lot of trouble keeping straight what you know about who is who, and what agent X knows about who is who. If you just put a tag on something and call it Herbert, you don't know whether that means you think the entity represented is the same entity named Herbert somewhere else, or that agent X thinks that (or thought that).
An even more severe problem becomes apparent when you try to build robots. Agre & Chapman's paper on Pengi is a key paper in the behavior-based robotics movement of the 1990s. If you want to use 3rd-person representations, you need to give your robot a whole lot of knowledge and do a whole lot of calculation just to get it to, say, dodge a rock aimed at its head. Using deictic representations makes it much simpler.
We could perhaps summarize the problem by saying that, when you use a 3rd-person representation, every time you use that representation you need to invoke or at least trust a vast and complicated system for establishing a link between the functional thing being represented, and an identity in some giant lookup table of names. Whereas often you don't care about all that, and it's nothing but an opportunity to introduce error into the system.
comment by Strange7 · 2010-03-13T11:52:54.035Z · LW(p) · GW(p)
The prince of one hundred thousand leaves is, among other things, a sort of fictionalized open-source project for horrifying eutopias. It might provide useful insights about that which we are least willing to consider.
comment by Kevin · 2010-03-16T09:54:56.291Z · LW(p) · GW(p)
Einstein's Gravity Confirmed on a Cosmic Scale
or
Confirmation of general relativity on large scales from weak lensing and galaxy velocities
http://www.nature.com/nature/journal/v464/n7286/full/nature08857.html
comment by Unnamed · 2010-03-14T19:10:39.052Z · LW(p) · GW(p)
Has there been any activity on the Craigslist charity idea? If people are pursuing it, is there someplace to post updates, or an email list to join?
comment by Richard_Kennaway · 2010-03-14T12:57:16.225Z · LW(p) · GW(p)
Spirit on the Brain is a blog and a book in progress by Geoffrey Falk about the neurophysiological sources of religion, which will make interesting reading for anyone wanting to know about the aetiology of religious pathology.
comment by PhilGoetz · 2010-03-13T22:36:13.682Z · LW(p) · GW(p)
I have a program that estimates the chances that one gene has the same function as another gene, based on their similarity. This is estimated from the % identity of amino acids between the proteins, and on the % of the larger protein that is covered by an alignment with the shorter protein.
For various reasons, this is done by breaking %id and %len into bins, e.g. 20-30%id, 30-40%id, 40-50%id, ..., 30-40%len, 40-50%len, ..., and estimating a probability for each bin that two proteins matched in that way have the same function.
What I want to do is to reduce the number of bins, so there are only 3 bins for %ID and 3 bins for %len, and 9 bins for their cross-product.
I can gather a bunch of statistics on matches made where we think we know the answer. The frequentist statistician can take, say for %ID, every side-by-side pair of the original 10 bins, do an ANOVA, and look at the F-statistic; then retain the 2 boundaries with the largest F-statistics.
What would the Bayesian do?
Replies from: wnoise↑ comment by wnoise · 2010-03-14T08:46:27.960Z · LW(p) · GW(p)
To make sure I'm interpreting this correctly: the calibration data is a list of pairs of genes, along with their %id, and %len, and tagged by either "same function" or "different function"? And currently, these are binned, and the probabilities estimated from the statistics known in that bin?
You want to change this, in particular, reduce the number of bins. Before we get to "how", may I ask why you want to do this? It doesn't seem as if it would reduce the computational cost. It would increase the number of samples per bin and possibly give better discrimination, but at the same time it spreads the gene pairs being compared against over larger regions of parameter space, meaning your inference is now based more on genes that should have less relevance to your case...
Replies from: PhilGoetz↑ comment by PhilGoetz · 2010-03-15T14:42:34.036Z · LW(p) · GW(p)
To make sure I'm interpreting this correctly: the calibration data is a list of pairs of genes, along with their %id, and %len, and tagged by either "same function" or "different function"? And currently, these are binned, and the probabilities estimated from the statistics known in that bin?
Yes.
Before we get to "how", may I ask why you want to do this?
Not enough samples for a large number of bins.
Ideally I'd use a different method that would use the numbers directly, in regression or some machine-learning technique. I may do that someday. But there are institutional barriers to doing that.
Replies from: wnoise↑ comment by wnoise · 2010-03-16T08:57:25.064Z · LW(p) · GW(p)
Ideally I'd use a different method that would use the numbers directly,
So would I, but that would be a research project.
There is no direct Bayesian prescription for the best way of binning, though the motto of "use every scrap of information: throw nothing away" implies to me that the proper thing to do is minimize the information left out once we know the bin. A bin is most informative if the statistics of the bin have the least entropy. So select a binning that does this and obeys whatever other reasonable constraints you want, such as being contiguous, or dividing directly into 9 by the cross product of 3 on each axis.
A natural measure of the entropy is just -p log p - (1-p) log (1- p), where p is the revealed frequency, but it's not the right one. I'm going to argue that instead we want to use a different measure of entropy: that of an underlying posterior distribution. This is essentially what information we're still lacking once we have the bin.
For no prior information, and data of the counts, this is a Beta distribution, with parameters of the number in the bin judged to be the "same" + 1, and the number judged to be "different" + 1. There is an entropy formula in the Wikipedia article. EDIT: be careful about signs, though; it currently appears to be the negative of the entropy.
Because we're concerned about the gain per gene pair, naturally each bin's entropy should be weighted by how often it comes up -- that is, the number of samples in the bin (perhaps +1).
Does this seem like a reasonable procedure? Note that it doesn't directly maximize differing bins getting differing predictions. Instead it minimizes uncertainty in each bin. In practice, I believe it will have the same effect. A slightly more ad-hoc thing to try would be minimizing the variance in each, rather than the entropy.
Replies from: PhilGoetz, wnoise, PhilGoetz↑ comment by PhilGoetz · 2010-03-21T20:25:40.600Z · LW(p) · GW(p)
So would I, but that would be a research project.
You know what's funny: my bosses have a "research project bad" reaction. If I say that fixing a problem requires finding a new solution, they usually say, "That would be a research project", and nix it.
But if I say, "Fixing this would require changing the underlying database from Sybase to SQLite", or, "Fixing this would require using NCBI's NRAA database instead of the PANDA database", that's easier for people to accept, even if it requires ten times as much work.
↑ comment by wnoise · 2010-03-16T21:16:55.548Z · LW(p) · GW(p)
Doing some simulations on a similar problem (1-d, with p(x) = x), I'm getting results indicating that this isn't working well at all. Reducing the entropy by means of having large numbers in one bin seems to override the reduction in entropy by having the probabilities be more skewed, at least for this case. I am surprised, and a bit perplexed.
EDIT: I was hoping that a better measure, such as the mutual information I(x; y) between bin (x) and probability of being the same function (y) would work. But this boils down to the measure I suggested last time: I(x; y) = H(y) - H(y|x). H(y) is fixed just by the distribution of same vs. not. H(Y|X) = - sum x p(x) int p(y|x) log p(y|x) dy, and so maximizing the mutual information is the same as minimizing the measure I suggested last time.
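A quick numeric check of that identity, I(x; y) = H(y) − H(y|x), on a made-up joint distribution over 3 bins and the same/different label:

```python
import math

# Hypothetical joint distribution p(bin, label), label 0 = same, 1 = different.
joint = {(0, 0): 0.25, (0, 1): 0.05,
         (1, 0): 0.10, (1, 1): 0.10,
         (2, 0): 0.05, (2, 1): 0.45}

px = {x: sum(p for (a, _), p in joint.items() if a == x) for x in range(3)}
py = {y: sum(p for (_, b), p in joint.items() if b == y) for y in (0, 1)}

# Mutual information computed directly from its definition...
mi = sum(p * math.log(p / (px[x] * py[y])) for (x, y), p in joint.items())
# ...and via H(y) - H(y|x).
h_y = -sum(p * math.log(p) for p in py.values())
h_y_given_x = -sum(p * math.log(p / px[x]) for (x, y), p in joint.items())

print(abs(mi - (h_y - h_y_given_x)) < 1e-9)  # True
```

Since H(y) is fixed by the overall same/different frequencies, maximizing I(x; y) over binnings is indeed the same as minimizing the weighted conditional entropy H(y|x).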
Replies from: PhilGoetz↑ comment by PhilGoetz · 2010-03-21T20:43:19.096Z · LW(p) · GW(p)
What's the similar problem? "1-d, with p(x) = x" doesn't mean much to me. It sounds like you're looking for bins on the region 1 to d, with p(x) = x. I think that if you used the p log p - q log q entropy, it would work fine.
Replies from: wnoise↑ comment by PhilGoetz · 2010-03-21T20:29:11.952Z · LW(p) · GW(p)
A bin is most informative if the statistics of the bin have the least entropy.
That's a good idea.
A natural measure of the entropy is just -p log p - (1-p) log (1- p), where p is the revealed frequency, but it's not the right one.
I'm glad you said that, since that was what I immediately thought of doing. I'll read up on the beta distribution, thanks!
Replies from: wnoise↑ comment by wnoise · 2010-03-22T15:41:50.224Z · LW(p) · GW(p)
I still think it's not a great choice, though clearly my other choices haven't worked well. But please do try it.
Given that the probability is a continuous distribution, the Fisher information might instead be a reasonable thing to look at. For a single distribution, maximizing it corresponds to minimizing the variance, so my suggestion for that wasn't as ad-hoc as I thought. I'm not sure the equivalence holds for multiple distributions.
comment by DanielVarga · 2010-03-13T16:22:45.247Z · LW(p) · GW(p)
I shared this argument against cryonics here, but ciphergoth, the original poster of that thread, noted that he prefers that discussion to focus on the technical feasibility of cryonics. This is not my actual opinion, just my solution to an intellectual puzzle: how can a rational person skip cryonics, even if he believes in its technical feasibility?
Let us first assume that I don't care too much about my future self, in the simple sense that I don't exercise, I eat unhealthy food, etc. Most of us are like that, and this is not irrational behavior: we simply heavily discount the well-being of our future selves, even using a time-based cutoff. (A cutoff is definitely necessary: if a formalized decision theory infinitely penalizes eating foie gras, then I'll skip the decision theory rather than the foie gras. :) )
Now comes the argument: If I sign up for cryonics, I'll have serious incentives to get frozen sooner rather than later. I fear that these incentives consciously or unconsciously influence my future decisions in a way I currently do not prefer. Ergo cryonics is not for me.
What are the incentives? Basically they all boil down to this: I would want my post-cryo personality to be more rather than less similar to my current personality. If they revive my 100 years old self, there will be a practical problem (many of his brain cells are already dead, he is half the man he used to be) and a conceptual problem (his ideas about the world will quite possibly heavily diverge from my ideas, and this divergence will be a result of decay rather than progress).
comment by JamesAndrix · 2010-03-13T06:19:51.223Z · LW(p) · GW(p)
From the guy who brought us the Creative Commons license:
comment by djcb · 2010-03-12T13:33:27.450Z · LW(p) · GW(p)
If you want to write UIs, Lisp and friends would probably not be first choice, but since you mentioned it...
For Lisp, you can of course install Emacs, which (apart from being an editor) is a pretty convenient way to play around with Lisp. Emacs Lisp may not be a state-of-the-art Lisp implementation, but it is certainly good enough to get started. And because of the full integration with the editor, there is instant gratification when you use some Lisp to glue existing things together into something useful. Emacs is available for just about any self-respecting computer system.
You can also try Scheme (a Lisp dialect); there is the excellent, freely available Structure and Interpretation of Computer Programs, which uses Scheme as the vehicle to explain many programming concepts. Guile is a nice free-software implementation.
When you're really into a more mathematical approach, Haskell is pretty nice. For UI stuff, I find it rather painful, though (the same is true for Lisp and, to some extent, Scheme).