Open Thread, May 19 - 25, 2014

post by somnicule · 2014-05-19T04:49:59.430Z · LW · GW · Legacy · 291 comments

Previous Open Thread

 

You know the drill - If it's worth saying, but not worth its own post (even in Discussion), then it goes here.

 

Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one.

3. Open Threads should start on Monday, and end on Sunday.

4. Open Threads should be posted in Discussion, and not Main.

 

291 comments

Comments sorted by top scores.

comment by Viliam_Bur · 2014-05-19T11:59:33.010Z · LW(p) · GW(p)

I'm reading the "You're Calling Who A Cult Leader?" again, and now the answer seems obvious.

"I publicly express strong admiration towards the work of Person X." -- What could possibly be wrong about this? Why are our instincts screaming at us not to do this?

Well, assigning a very high status to someone else is dangerous for pretty much the same reason as assigning a very high status to yourself. (With a possible exception if the person you admire happens to be the leader of the whole tribe. Even so, who are you to speak about such topics? As if your opinion had any meaning.) You are challenging the power balance in the tribe. Only instead of saying "Down with the current tribe leader; I should be the new leader!" you say "Down with the current tribe leader; my friend here should be the new leader!"

Either way, the current tribe leader is not going to like it. Neither will his allies. Nor will the neutral people, who merely want to prevent another internal fight in which they have nothing to gain. All of them will tell you to shut up.

There is nothing bad per se about suggesting that e.g. Douglas R. Hofstadter should be the king of the nonconformist tribe. Maybe we can't unite behind this king, but neither can we unite behind any competitor, so... why not. At worst, some of us will ignore him.

The problem is, we live in the context of a larger society that merely tolerates us, and we know it. Praise Hofstadter too highly and someone outside our circle may notice. And suddenly the rest of the tribe might decide to get rid of our ill-mannered faction once and for all. (Not really, but this is what would have happened in the ancient jungle.) So we had better police ourselves... unless we are ready to take up the fight with the current leadership.

Being a strong fan of Douglas R. Hofstadter means challenging those who are strong fans of e.g. Brad Pitt. There is only so much room at the top of the status ladder, and our group is not strong enough to nominate even the highest-status one among us. So we would rather not act as if we are ready for open confrontation.

The irony is that if Douglas Hofstadter or Paul Graham or Eliezer Yudkowsky actually had their small cults, if they acted like dictators within the cult and ignored the rest of the world, the rest of the world would not care about them. Maybe people would even invent rationalizations about why everything is okay, and why anyone is free to follow anyone or anything. -- The problem starts with suggesting that they could somehow be important in the outside world; that the outside world has a reason to listen to them. That upsets people; it is the prospect of a power change that concerns them. Cultish behavior well contained within the cult doesn't. Saying that all nerds should read Hofstadter, that's okay. -- Saying that even non-nerds lose something valuable when they don't read something written by a member of our faction... now that's a battle call. (Are you suggesting that Hofstadter deserves a similar status to e.g. Dostoyevsky? Are you insane or what? Look at the size of your faction, our faction, and think again.)

Replies from: David_Gerard, Punoxysm, John_Maxwell_IV, Emile
comment by David_Gerard · 2014-05-20T16:40:16.663Z · LW(p) · GW(p)

I was talking to the loved one about this last night. She is going for ministry in the Church of England. (Yes, I remain a skeptical atheist.)

She is very charismatic (despite her introversion) and has the superpower of convincing people. I can just picture her standing up in front of a crowd and explaining to them how black is white, and the crowd each nodding their heads and saying "you know, when you think about it, black really is white ..." She often leads her Bible study group (the sort with several translations to hand and at least one person who can quote the original Greek) and all sorts of people - of all sorts of intelligence levels and all sorts of actual depths of thinking - get really convinced of her viewpoint on whatever the matter is.

The thing is, you can form a cult by accident. Something that looks very like one from the outside, anyway. If you have a string of odd ideas, and you're charismatic and convincing, you can explain your odd ideas to people and they'll take on your chain of logic, approximately cut'n'pasting them into their minds and then thinking of them as their own thoughts. This can result in a pile of people who have a shared set of odd beliefs, which looks pretty damn cultish from the outside. Note this requires no intention.

As I said to her, "The only thing stopping you from being L. Ron Hubbard is that you don't want to. You better hope that's enough."

(Phygs look like regular pigs, but with yellow wings.)

comment by Punoxysm · 2014-05-20T03:27:52.918Z · LW(p) · GW(p)

I think you're overcomplicating it. People like Eliezer Yudkowsky and Paul Graham are certainly not cult leaders, but they have many strong opinions that are well outside the mainstream; they don't believe in, and in fact actively scorn, hedging/softening their expression of these opinions; and they have many readers, a visible subset of whom uncritically pattern all their opinions, mainstream or not, after them.

And pushback against excitement over Hofstadter can stem from legitimate disagreement about the importance/interestingness of his work. The pushback is proportional to the excitement that incites it.

comment by John_Maxwell (John_Maxwell_IV) · 2014-05-21T00:08:43.358Z · LW(p) · GW(p)

There is nothing bad per se about suggesting that e.g. Douglas R. Hofstadter should be the king of the nonconformist tribe.

Disagreed. IMO, there should only be kings if there's a good reason... among other things, I suspect that status differences are epistemologically harmful. See Stanley Milgram's research and the Asch conformity experiment.

I also disagree with the rest of your analysis. I anticipate a different sense of internal revulsion when someone starts talking to me about why Sun Myung Moon is super great vs why Mike Huckabee is so great or why LeBron James is so great. In the case of LW, I think people whose intuitions say "cult" are correct to a small degree... LW does seem a tad insular, groupthink-ish, and cultish to me, though it's still one of my favorite websites. And FWIW, I would prefer that people who think LW seems cultish help us improve (by contributing intelligent dissent and exposing us to novel outside thinking) instead of writing us off.

(The most charitable interpretation of the flaws I see in LW is that they are characteristics that trade off against some other things we value. E.g. if we downvoted sloppy criticism of LW canon less, that would mean we'd get more criticism of LW canon, both sloppy and high-quality... not clear whether this would be good or not, though I'm leaning towards it being good. A less charitable interpretation is that the length of the sequences produces some kind of hazing effect. Personally, I haven't finished the sequences, don't intend to, think they're needlessly verbose, and would like to see them compressed.)

Replies from: MathiasZaman
comment by MathiasZaman · 2014-05-21T12:36:34.074Z · LW(p) · GW(p)

E.g. if we downvoted sloppy criticism of LW canon less, that would mean we'd get more criticism of LW canon, both sloppy and high-quality... not clear whether this would be good or not, though I'm leaning towards it being good.

I've recently been subject to sloppy criticism of "weird ideas" (e.g. transhumanism) and the sloppy criticism is always the same. At this point I'd look forward to high-quality criticism, but I'm not willing to suffer again and again through the sloppy parts for it.

If people want to provide high-quality criticism, they should be rewarded for it (in this case, with upvotes and polite conversation). Sloppy criticism remains low-quality content and should not be rewarded.

Replies from: John_Maxwell_IV
comment by John_Maxwell (John_Maxwell_IV) · 2014-05-21T23:12:17.698Z · LW(p) · GW(p)

Makes sense. I still think the bar should be a bit lower for criticism, for a couple reasons.

Motivated reasoning means that we'll look harder for flaws in a critical piece, all else equal. So our estimation of post quality is biased.

Good disagreement is more valuable than good agreement, because it's more likely to cause valuable updates. But the person writing a post can only give a rough estimate of its quality before posting it. (Dunning-Kruger effect, unknown unknowns, etc.) Intuitively their subconscious will make some kind of "expected social reward" calculation that looks like

p_quality_criticism * social_reward_for_quality_criticism -
p_sloppy_criticism * social_punishment_for_sloppy_criticism

Because of human tendencies, social_punishment_for_sloppy_criticism is going to be higher than the corresponding social_punishment_for_sloppy_agreement parameter in the corresponding equation for agreement.

If social_punishment_for_sloppy_criticism is decreased on the margin, that will increase the expected value of this calculation, which means that more quality criticism will get through and be posted. LW users will infer these penalties by observing voting behavior on the posts they see, so it makes sense to go a bit easy on sloppy critical posts from a counterfactual perspective. Different users will interpret social reward/punishment differently, with some much more risk-averse than others. My guess is that the most common mechanism by which low expected social reward will manifest itself is procrastination on writing the post... I wouldn't be surprised if there are a number of high-quality critical pieces about LW that haven't been written yet because their writers are procrastinating due to an ugh field around possible rejection.
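
A minimal sketch of this model in Python (the probabilities and magnitudes are made-up illustrative numbers, not measurements of actual voting behavior):

```python
def expected_social_reward(p_quality, reward_quality, p_sloppy, punishment_sloppy):
    """The expected-social-reward calculation for posting criticism, as above."""
    return p_quality * reward_quality - p_sloppy * punishment_sloppy

# A would-be critic who is unsure of their post's quality:
p_quality = p_sloppy = 0.5

# Criticism punished harder than it is rewarded: expected value is negative,
# so the post stays unwritten.
print(expected_social_reward(p_quality, 10, p_sloppy, 15))  # -2.5

# Lower the punishment for sloppy criticism on the margin and the sign flips:
print(expected_social_reward(p_quality, 10, p_sloppy, 5))   # 2.5
```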

(I know intelligent people will disagree with me on this, so I thought I'd make my reasoning a bit more formal/explicit to give them something to attack.)

Replies from: MathiasZaman
comment by MathiasZaman · 2014-05-23T09:28:33.828Z · LW(p) · GW(p)

A good solution could be to just not downvote sloppy criticism. No reward, but also no punishment.

comment by Emile · 2014-05-19T12:45:58.570Z · LW(p) · GW(p)

(Are you suggesting that Hofstadter deserves a similar status to e.g. Dostoyevsky? Are you insane or what? Look at the size of your faction, our faction, and think again.)

I'm not sure about this - the "Yay Hofstadter" team looks about as big as the "Yay Dostoyevsky" team, at least on the anglophone internet.

Replies from: Viliam_Bur
comment by Viliam_Bur · 2014-05-19T13:49:49.491Z · LW(p) · GW(p)

Bad example, perhaps. Try some big names from anglophone literature.

Shakespeare? Okay, maybe too old. Gone with the Wind? Something that is officially blessed and taught at schools as the literature. Something that perhaps not many people enjoy, but that almost everyone perceives as having an officially high status. The thing you suggest should be replaced by Hofstadter.

comment by Shmi (shminux) · 2014-05-23T20:55:20.222Z · LW(p) · GW(p)

A few questions about cryonics I could not find answers to online.

What is the fraction of deceased cryo subscribers who got preserved at all? Of those who were, how long after clinical death? Say, within 15 min? 1 hour? 6 hours? 24 hours? Later than that? With/without other remains-preservation measures in the interim?

Alcor appears to list all its cases at http://www.alcor.org/cases.html , and CI at http://198.170.115.106/refs.html#cases , though the last few case links are dead. So, at least some of the statistics can be extracted. However, it is not clear whether failures to preserve are listed anywhere.

Some other relevant questions which I could not find answers to:

  • How often do cryo memberships lapse and for what reasons?

  • How successful are last-minute cryo requests from non-subscribers?

comment by gwern · 2014-05-22T02:54:32.244Z · LW(p) · GW(p)

Bad news, guys - we're probably all charismatic psychotics; from "The Breivik case and what psychiatrists can learn from it", Melle 2013:

The court reports clearly illustrate the odd effect Breivik seems to have had on all his evaluators, including the first, in generating reluctance to explore what might lie behind some of his strange utterances. As an illustration, when asked if he ever was in doubt about Breivik's sanity, one of the witnesses stated that he was that once, when Breivik in a discussion suggested that in the future people's brains could be directly linked to a computer, thus circumventing the need for expensive schooling. Instead of asking Breivik to extrapolate, the witness stated that he “rapidly said to himself that this was not a psychotic notion but rather a vision of the future”.

It's a good thing Breivik didn't bring up cryonics.

comment by philh · 2014-05-19T13:33:56.836Z · LW(p) · GW(p)

The sanitised LW feedback survey results are here: https://docs.google.com/spreadsheet/ccc?key=0Aq1YuBYXaqWNdDhQQmQ3emNEOEc0MUFtRmd0bV9ZYUE&usp=sharing

I'll be writing up an analysis of results, but that takes time.

Locations that received feedback:

  • (1) Amsterdam, Netherlands
  • (3) Austin, TX
  • (5) Berkeley, CA
  • (1*) Berlin, Germany
  • (1*) Boston, MA
  • (4) Brussels, Belgium
  • (2) Cambridge, MA
  • (2) Cambridge, UK
  • (3) Chicago, IL
  • (1) Hamburg, Germany
  • (3) Helsinki, Finland
  • (14) London [None of them specified which London, but my current guess is that they all meant London, UK.]
  • (3) Los Angeles, CA
  • (4) Melbourne, Australia
  • (2) Montreal, QC
  • (1) Moscow, Russia
  • (1) Mountain View, CA
  • (5) New York City
  • (1**) Ottawa, ON
  • (1) Philadelphia, PA
  • (1*) Phoenix, AZ
  • (1) Portland, OR
  • (1**) Princeton, NJ
  • (1*) San Diego, CA
  • (2) Seattle, WA
  • (2) Sydney, Australia
  • (1) Toronto, ON
  • (2) Utrecht, Netherlands
  • (3) Washington, DC

  • (1) No local meetup

  • (9) Not given

(*) means the feedback is from someone who hasn't attended because it's too far away, so seeing the specific response is probably not very helpful. (**) means the group name is written in the public results, so you can just search for it to find your feedback.

There were 78 responses, and four of them listed two cities, so these sum to 82.

If you organize one of these groups, and haven't already done so, please get in touch so I can send your feedback to you! (Or if you'd rather not receive it, it would be helpful if you could let me know that as well, so that I don't spend time trying to track you down.) I haven't yet sent anyone their feedback, and don't promise that I'll do it super quickly, but it will happen.

comment by Risto_Saarelma · 2014-05-22T08:02:20.959Z · LW(p) · GW(p)

Scott Aaronson isn't convinced by Giulio Tononi's integrated information theory for consciousness.

But let me end on a positive note. In my opinion, the fact that Integrated Information Theory is wrong—demonstrably wrong, for reasons that go to its core—puts it in something like the top 2% of all mathematical theories of consciousness ever proposed. Almost all competing theories of consciousness, it seems to me, have been so vague, fluffy, and malleable that they can only aspire to wrongness.

comment by Kaj_Sotala · 2014-05-19T11:05:28.331Z · LW(p) · GW(p)

DragonBox has been mentioned on this site a few times, so I figured that people might be interested in knowing that its makers have come up with a new geometry game, Elements. It's currently available for Android and iOS platforms.

DragonBox Elements takes its inspiration from "Elements", one of the most influential works in the history of mathematics. Written by the Greek mathematician Euclid, "Elements" describes the foundations of geometry using a singular and coherent framework. Its 13 volumes have served as a reference textbook for over 23 centuries. The book also introduced the axiomatic method, which is the system of argumentation that forms the basis for the scientific method we still use today. DragonBox Elements makes it possible for players to master its essential axioms and theorems after just a couple of hours playing!

Geometry used to be my least favorite part of math and as a result, I hardly remember any of it. Playing this game with that background is weird: I don't really have a clue of what I'm doing or what the different powers represent, but they do have a clear logic to them, and now that I'm not playing, I find myself automatically looking for triangles and quadrilaterals (had to look up that word!) in everything that I see. Plus figuring out what the powers do represent makes for an interesting exercise.

I'd be curious to hear comments from anyone who was already familiar with Euclid before this.

Replies from: philh
comment by philh · 2014-05-19T13:17:29.774Z · LW(p) · GW(p)

Not an expert, but Euclid made some mistakes, like using superposition to prove some theorems. I'm curious how they handle those. (e.g. I think Euclid attempted to prove side-angle-side congruence, but Hilbert had to include it as an axiom.)

comment by zedzed · 2014-05-19T05:55:19.010Z · LW(p) · GW(p)

I have the privilege of working with a small group of young (12-14) highly gifted math students for 45 minutes a week for the next 5 weeks. I have extraordinary freedom with what we cover. Mathematically, we've covered some game theory and Bayes' theorem. I've also had a chance to discuss some non-mathy things, like Anki.

I only found out about Anki after I'd taken a bunch of courses, and I've had to spend a bunch of time restudying everything I'd previously learned and forgotten. It would have been really nice if someone had told me about Anki when I was 12.

So, what I want to ask Less Wrong, since I suspect most of you are like the kids I'm working with, except older, is: what blind spots did 12-14-year-old you have that I could point out to the kids I'm working with?

Replies from: Viliam_Bur, None, CAE_Jones, Benito, AspiringRationalist, therufs, Gunnar_Zarncke, MathiasZaman, sixes_and_sevens, gwillen
comment by Viliam_Bur · 2014-05-19T07:06:52.946Z · LW(p) · GW(p)

what blind spots did 12-14-year-old you have

Heh, if I were 12-14 these days, the main message I would send to myself would be: Start making and publishing mobile games while you have a lot of free time, so that when you finish university, you have enough passive income that you don't have to take a job, because having a job destroys your most precious resources: time and energy.

(And a hyperlink or two to some PUA blogs. Yeah, I know some people object against this, but this is what I would definitely send to myself. Sending it to other kids would be more problematic.)

I would recommend Anki only for learning languages. For other things I would recommend writing notes (text documents); although this advice may be too me-optimized. One computer directory called "knowledge", subdirectories per subject, files per topic -- that's a good starting structure; you can change it later, if you need. But making notes becomes really important at the university level.

I would stress the importance of things other than math. Gifted kids sometimes focus on their strong skills and ignore their weak ones -- they put all their attention where they receive praise. This is a big mistake. However, saying this without providing actionable advice does not help. For example, my weak spots were exercise and social skills. For social skills, a list of recommended books could help, with emphasis that I should not only read the books but also practice what I learned. For exercise, a simple routine plus HabitRPG could do the job. Maybe emphasize that I should not focus on how I compare with others, but on how I compare with yesterday's me.

Something about the importance of keeping in contact with smart people, and the insanity of the world in general. As a smart person, talking with other smart people increases your powers: both because you develop the ideas you understand together with them, and because you can ask them about things you don't understand. (A stupid person will not understand what you are saying, and will give you harmful advice about the things you ask.) In school you are supposed to work alone, but in real life a lot of success is achieved by teams; and the best teams are composed of good people, not random people.

One more piece of advice that is risky to give to other kids: Religion is bullshit and a waste of time. People will try to manipulate you using lies and emotional pressure. Whatever other positive traits they have, try to find other people who have the same positive traits but without the mental poison; even if it takes more time, it's worth it.

comment by [deleted] · 2014-05-20T21:19:45.090Z · LW(p) · GW(p)

what blind spots did 12-14-year-old you have

  • Social capital is important. Build it.
  • Peer pressure is far more common and far more powerful than you think. Find an ingroup that puts it to constructive ends.
  • Don't major in a non-STEM field. College is job training and a networking opportunity. Act accordingly.
  • Something about time management, pattern-setting, and motivation management -- none of which I've managed to learn yet.
Replies from: Viliam_Bur, polymathwannabe
comment by Viliam_Bur · 2014-05-21T08:22:17.465Z · LW(p) · GW(p)

Social capital is important. Build it.

Some actionable advice: Keep written notes about people (don't let them know about it). For every person, create a file that contains their name, e-mail, web page, Facebook link, etc., and information about their hobbies, what you did together, whom they know, etc. Plus a photo.

This will come in very useful if you haven't been in contact with a person for years and want to reconnect. (Read the whole file before you call them, and read it again before you meet them.) Bonus points if you can make the information searchable, so you can ask queries like "Who can speak Japanese?" or "Who can program in Ruby?".
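
A minimal sketch of such a searchable database in Python (all names, fields, and tags here are invented for illustration):

```python
contacts = {
    "Alice Example": {
        "email": "alice@example.com",
        "met": "high school",
        "tags": {"speaks Japanese", "plays chess"},
    },
    "Bob Example": {
        "email": "bob@example.com",
        "met": "company X",
        "tags": {"programs in Ruby", "speaks Japanese"},
    },
}

def who(tag):
    """Answer queries like 'Who can program in Ruby?'"""
    return [name for name, info in contacts.items() if tag in info["tags"]]

print(who("speaks Japanese"))   # ['Alice Example', 'Bob Example']
print(who("programs in Ruby"))  # ['Bob Example']
```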

This may feel a bit creepy, but many companies and entrepreneurs do something similar, and it brings them profit. And the people on the other side like it (at least if they don't suspect you of using a system for this). Simply think of your hard disk as your extended memory. There would be nothing wrong or creepy about simply remembering all this stuff; and there are people with better memory who would.

Maybe make some schedule to reconnect with each person once every few years, so they don't forget you completely. This also gives you an opportunity to update the info.

If you start doing it while young, your high-school and university classmates will already make a decent database. Then add your colleagues. You will appreciate it ten years later, when you would naturally have forgotten most of them.

When you have a decent database, you can provide a useful social service by connecting people. -- Your friend X asks you: "Do you know someone who can program in Ruby?" "Uhm, not sure, but let me make a note and I'll think about it." Go home, look at the database. Find Y. Ask Y whether it is okay to give their contact to someone interested in Ruby. Give X contact info for Y. At this moment, your friend X owes you a favor, and if X and Y do some successful business together, Y owes you a favor too. The cost to you is virtually zero, apart from the cost of maintaining the database, which you would do anyway.

An important note: of course there is a huge difference between close friends and random acquaintances, but both can be useful in some situations, so you want to keep a database of both. Don't be selective. If your database has too many people, think about better navigation, but don't remove items.

Replies from: Metus
comment by Metus · 2014-05-21T14:06:44.083Z · LW(p) · GW(p)

I'm inclined to ask: Are there ready-made software solutions for this, or should I roll my own in Python or some office program? If it weren't for the secrecy factor, I'd write a simple program to put on my GitHub and show off my programming skills.

Replies from: Viliam_Bur, therufs
comment by Viliam_Bur · 2014-05-21T14:31:44.685Z · LW(p) · GW(p)

I don't know. But if I really did this (instead of just saying it's the wise thing to do), I would probably use some offline wiki software. Preferably open source. Or at least something I can easily extract data from if I change my mind later.

I would use something like a wiki -- nodes connected by hyperlinks -- because I tried this in the past with a hierarchical structure, and it didn't work well. Sometimes a person is a member of multiple groups, which makes classification difficult. And once you have a few dozen people in the database, it becomes difficult to navigate (which in turn becomes a trivial inconvenience for adding more people, which defeats the whole purpose).

But if every person (important or unimportant) has their own node, and you also create nodes for groups (e.g. former high school classmates, former colleagues from company X, rationalists...), you can find anyone with two clicks: click on the category, click on the name. The hyperlinks are also useful for describing how people are connected with each other. It would also be nice to have automatic collections of nodes that share some attribute (e.g. can program in Ruby); but you can manually add the links in both directions.
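
A rough sketch of that structure in Python, with invented names; each person node lists its groups, and the group index is derived automatically, so the links stay consistent in both directions:

```python
people = {
    "Alice": {"groups": ["high school classmates", "rationalists"]},
    "Bob":   {"groups": ["colleagues from company X", "rationalists"]},
}

# Build group -> members automatically (the bidirectional links).
groups = {}
for person, node in people.items():
    for group in node["groups"]:
        groups.setdefault(group, []).append(person)

# "Two clicks": pick the category node, then pick the name.
print(groups["rationalists"])  # ['Alice', 'Bob']
```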

A few years ago I looked at some existing software; a lot of it was nice, but each missed a feature or two I considered important. (For example, it didn't support Unicode, or required a web server, or just contained too many bugs.) In hindsight, if I had just used one of them, for example the one that didn't support Unicode, it would still have been better than not having any.

Writing your own program... uhm, consider the planning fallacy. Is this the best way to use your time? And by the way, if you do something like that, make it a general-purpose offline Unicode wiki-like editor, so that people can also use it for many other things.

comment by therufs · 2014-05-23T13:17:08.438Z · LW(p) · GW(p)

ISTR there's something in the Evernote family that does this.

comment by polymathwannabe · 2014-05-20T22:18:07.578Z · LW(p) · GW(p)

Downvoted for dismissing the humanities.

Replies from: Barry_Cotter, None, None
comment by Barry_Cotter · 2014-05-21T00:42:21.929Z · LW(p) · GW(p)

One can read in one's spare time, or learn languages, or act. If one does not come from wealth, not majoring in something remunerative in college is a mistake if you will actually want money later.

He didn't dismiss the humanities; he said studying them at university was a poor decision.

Replies from: MathiasZaman, polymathwannabe, therufs
comment by MathiasZaman · 2014-05-21T12:41:47.107Z · LW(p) · GW(p)

He didn't dismiss the humanities; he said studying them at university was a poor decision.

Moreover, it wasn't really presented as general advice, but as advice for their own younger self. It's not generally applicable advice (not everyone will be happy or successful in STEM fields), but I think it's safe to assume it is sound advice for Young!nydwracu.

Or even if it was intended as generally applicable advice, it's still directed at kids gifted at mathematics, who will have a high likelihood of enjoying STEM fields.

comment by polymathwannabe · 2014-05-21T02:56:46.183Z · LW(p) · GW(p)

My parents made me study business management instead of literature. My life has been much more boring and unfulfilling as a result, because the jobs I can apply for don't interest me, and the jobs I want demand qualifications I lack. In my personal experience, working in your passion beats working for the money.

Replies from: army1987, Barry_Cotter
comment by A1987dM (army1987) · 2014-05-21T09:24:00.755Z · LW(p) · GW(p)

How sure are you what your life would have been like if you had studied literature instead?

comment by Barry_Cotter · 2014-05-22T14:07:47.298Z · LW(p) · GW(p)

Why haven't you gone back to college for a Masters in English Literature or something along those lines? Robin Hanson was 35 before he got his Ph.D. in Economics and he's doing ok. The market for humanities scholars is not as forgiving as that for Economics but that's what you want, right?

Replies from: polymathwannabe
comment by polymathwannabe · 2014-05-22T15:09:53.666Z · LW(p) · GW(p)

After some years of self-analysis and odd jobs, I'm close to finishing a second degree in journalism.

comment by therufs · 2014-05-23T13:28:02.175Z · LW(p) · GW(p)

not majoring in something remunerative in college

The implicit claim that humanities jobs are uniformly non-remunerative seems difficult to support.

if you will actually want money later

How about doing a humanities major to make connections to people who are any combination of rich, creative, or interesting and teaching yourself to program in the meantime?

comment by [deleted] · 2014-05-20T22:59:49.240Z · LW(p) · GW(p)

There's a difference between choosing a subject as your college major (which amounts to future employment signalling) and engaging in the study of a subject.

comment by [deleted] · 2014-05-21T22:27:17.337Z · LW(p) · GW(p)

It was a blind spot that I had until my senior year of college, when I realized that I wanted to make a lot of money, and that it was very unlikely that majoring in philosophy would let me do so. Had I realized this at 12-14, I would've saved myself a lot of time; but I didn't, so I'm probably going to have to go back for another degree.

If you don't care about money or you have the connections to succeed with a non-STEM degree, that's another thing. But that's not the question that was asked.

comment by CAE_Jones · 2014-05-23T20:47:37.450Z · LW(p) · GW(p)

I never learned how to put forth effort, because I didn't need to do so until after I graduated high school.

I got into recurring suboptimal ruts, sometimes due to outside forces, sometimes due to me not being agenty enough, that eroded my conscientiousness to the point that I'm quite terrified about my power (or lack thereof) to get back to the level of ability I had at 12-14.

I suppose, if I had to give my younger self advice in the form of a sound-byte, it'd be something like: "If you aren't--maybe at least monthly--frustrated, or making less progress than you'd like, you aren't really trying; you're winning at easy mode, and Hard Mode is likely to capture you unprepared. Of course, zero progress is bad, too, so pick your battles accordingly."

Also, even if you're on a reasonable difficulty setting, it pays to look ahead and make sure you aren't missing anything important. My high school calculus teacher missed some notational standards in spite of grasping the math, and my first college-level course started off painfully because of it; I completely missed the cross and dot products in spite of an impressive math and physics high school transcript, and it turns out those matter a good deal (and at the time, the internet was very unsympathetic when I tried researching them).

comment by Ben Pace (Benito) · 2014-05-19T14:33:24.758Z · LW(p) · GW(p)

Speaking as a somewhat gifted seventeen year old, I'd really like to have known about AoPS, HPMOR and the Sequences.

Also, I'd like to have had in my mind the notion that my formal education is not optimised for me, and that I really need to optimise it myself. Speaking more concretely, I think that most teenagers in Britain pick their A Levels (if they do them at all) based on what classes the other people around them are doing, which isn't very useful. Speaking to a friend, though, I realised that when he was picking his third A Level, there was no other A Level he needed in order to get into his main area of specialisation (jazz musician), and his time would be better spent not doing a third A Level at all; he needed to think more meta. He was just doing an A Level because that's what everyone seems to think you should do. I'm about to give up a class because it's not going to help me get anywhere; I can use the time better, and learn what I want to learn better on my own anyway. So, really optimise.

Don't know if that helps. And AoPS is ridiculously useful.

comment by NoSignalNoNoise (AspiringRationalist) · 2014-05-25T01:14:55.920Z · LW(p) · GW(p)

Instill the importance of a mastery orientation (basically, optimizing for developing skills rather than proving one's innate ability). My 12-14 year old self had such a strong performance orientation as to consider things like mnemonics and study skills to be only for the lazy and stupid. Anyone stuck in the performance orientation won't even be receptive to things like Anki.

Replies from: Creutzer
comment by Creutzer · 2014-05-26T10:49:21.178Z · LW(p) · GW(p)

This. My upbringing screwed me up horribly in this respect.

comment by therufs · 2014-05-23T19:54:41.207Z · LW(p) · GW(p)

I had these blind spots as a 20some year old, so I assume I had them when I was 12-14 too:

  • I assumed that if I was good at something, I would be good at it forever. Turns out skills atrophy over time. Surprise! (This seems similar to your Anki revelation.)

  • I am agenty. I had no concept of the possibility that I might be able to cause* some meaningful effect outside my immediate circle of interaction.

* I did, of course, daydream about becoming rich and famous through no fault of my own; I wouldn't say I actually expected this to happen, but I thought it was more likely than becoming rich and famous under my own steam.

comment by Gunnar_Zarncke · 2014-05-19T22:16:23.014Z · LW(p) · GW(p)

I was in such a program when I was 12-14 (run by the William Stern foundation in Hamburg, Germany), and the curriculum consisted mostly of very diverse 'math' problems, prepared so as to be accessible to us in a fun way without introducing too much up-front terminology or notation. Examples I remember off the top of my head:

  • Turing machines (dressed up as short-sighted busy beavers)

  • generalized Nim (really, with lots of matches)

  • tilings of the plane

  • conveys game of life (easy on paper)

More that I just looked up in an old folder:

  • distance metrics on a graph

  • multi-way balances

  • continued fractions (cool for approximations; I still use this)

  • logical derivations about the beliefs of people whose dreams are indistinguishable from reality

  • generalized magic squares

  • Fibonacci sequences and http://en.wikipedia.org/wiki/Missing_square_puzzle

  • Drawing fractals (the iterated function ones; with printouts of some)

In general, only an exposition was given and no task to solve, or just some introductory initial questions. The patterns to be detected were the primary reward.

We were not introduced to really practical applications, but I'm unsure whether that would have been helpful, or rather, whether it would have been interesting. My interest at that time stemmed from the material being systematic patterns that I could approach abstractly and symbolically and 'solve'. I'm not clear whether the Sequences would have been interesting in that way. Their patterns are clear only in hindsight.

What should work is Bayes' rule -- at least in the form that can be visualized (tiling of the 1×1 grid) or easily derived symbolically.
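
For instance, a Bayes'-rule calculation done purely by counting grid squares, with invented numbers (a 20% base rate and a 90%-accurate test over a population of 100 squares):

```python
population = 100
sick = 20                    # base rate: 20 of 100 squares
well = population - sick

sick_positive = 0.9 * sick   # true positives:  18 squares
well_positive = 0.1 * well   # false positives:  8 squares

# P(sick | positive) = sick-and-positive squares / all positive squares
print(sick_positive / (sick_positive + well_positive))  # ~0.692
```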

Also, guessing and calibration games should work. You can also take standard games and add some layer of complexity to them (but please a helpful layer, not an arbitrary one; a minimal example: play Uno, but cards don't have to match by color+number; instead they match by some number-theoretic identity, e.g. +(2,5) modulo (4,10)).
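
One possible reading of that Uno rule, sketched in Python (this interpretation is a guess: cards are (color, number) pairs, and a card is playable iff it equals the previous card plus (2, 5) componentwise, modulo (4, 10)):

```python
def playable(prev, card):
    prev_color, prev_number = prev
    color, number = card
    return (color, number) == ((prev_color + 2) % 4, (prev_number + 5) % 10)

print(playable((0, 7), (2, 2)))  # True:  (0+2) % 4 == 2 and (7+5) % 10 == 2
print(playable((0, 7), (2, 3)))  # False
```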

Replies from: Douglas_Knight, None
comment by Douglas_Knight · 2014-05-19T23:27:08.457Z · LW(p) · GW(p)

conveys game of life

I assume you mean Conway's game of life.

Replies from: Gunnar_Zarncke
comment by Gunnar_Zarncke · 2014-05-20T08:26:55.083Z · LW(p) · GW(p)

Yes, of course. That, and we tried variations of the rule-set. We also discovered the glider.

It is interesting what can come out of this seed. When I later had an Atari, I wrote an optimized simulator in assembly which aggregated over multiple cells, and I even tried to use the blitter, reducing the number of clock cycles per cell as far as I could. This seed became a part of the mosaic of concepts that now sits behind my understanding of complex processes.

comment by [deleted] · 2014-05-21T16:20:30.138Z · LW(p) · GW(p)

logical derivations about the beliefs of people whose dreams are indistinguishable from reality

That sounds interesting. Would you care to elaborate?

Replies from: Gunnar_Zarncke
comment by Gunnar_Zarncke · 2014-05-21T21:04:35.878Z · LW(p) · GW(p)

The story goes as follows (translated from German):

"Once I dreamed that there was an island called "the island of dreams". The inhabitants of the island dreamed very vivid and lucid. Indeed the imaginations which occurred during sleep are as clear and present as perceived during waking. Even more their dreamlife follows from night to night the same continuity as their waking perception during the day. Consequently some inhabitants have difficulties to distinguish whether they are awake or asleep.

Now every inhabitant belongs to one of two groups: day-type and night-type. The day-type inhabitants are characterized by their thoughts during the day being true and their thoughts during the night being false. For the night-type it is the opposite: their thoughts during sleep are true and those during waking are false."

Questions:

  1. Once an inhabitant thought/believed that he belonged to the day-type. Can it be determined whether this is true? Was he awake or asleep at the time of the thought?

...
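
A brute-force check of question 1, under the natural reading of the story:

```python
# Day-type inhabitants think truly while awake and falsely while asleep;
# night-type inhabitants do the opposite. The thought is "I am day-type".
for typ in ("day", "night"):
    for awake in (True, False):
        thought_is_true = (typ == "day")
        thinks_truly = awake if typ == "day" else not awake
        if thinks_truly == thought_is_true:  # state consistent with the rules?
            print(typ, "awake" if awake else "asleep")

# Prints "day awake" and "night awake": the thinker must have been awake,
# but whether his belief is true cannot be determined.
```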

comment by MathiasZaman · 2014-05-19T12:51:44.471Z · LW(p) · GW(p)

I think most of my blind spots before roughly the age of 18 involved not understanding that I'm personally responsible for my success and the extent of my knowledge, and that "good enough" doesn't cut it. If I were to send a message back to 14-year-old!Me, I'd tell him that he has a lot of potential, but that he can't rely on others to fulfill it.

comment by sixes_and_sevens · 2014-05-19T10:19:43.120Z · LW(p) · GW(p)

what blind spots did 12-14-year-old you have

I don't know how much of this falls under your remit, but I had quite a few educational blind spots I inherited from my parents, who didn't come from a higher-educated background. If any of your students are in a similar position, it's worth checking they don't have any ludicrous expectations about the next several years of education which no-one close to them is in a position to correct.

Replies from: Metus
comment by Metus · 2014-05-19T10:38:14.447Z · LW(p) · GW(p)

Blind spots such as?

Replies from: sixes_and_sevens
comment by sixes_and_sevens · 2014-05-19T12:26:41.367Z · LW(p) · GW(p)

I'm not sure any specific examples from my own experience would generalise very well.

If I were to translate my comment into a specific piece of generally-applicable advice, it would be to give students a realistic overview of what their forthcoming formal education involves, what it expects from them, and what options they have available.

As mentioned, this may be outside of the OP's remit.

Replies from: somnicule
comment by somnicule · 2014-05-19T17:32:07.291Z · LW(p) · GW(p)

The specific examples may not be used, but they would clarify what sort of thing you're talking about.

Replies from: sixes_and_sevens
comment by sixes_and_sevens · 2014-05-19T17:47:55.185Z · LW(p) · GW(p)

One example: certain scholastic activities are simply less important than others. If your model is "everything given to me by an authority figure is equally important", you don't manage your workload so well.

comment by gwillen · 2014-05-20T19:50:34.292Z · LW(p) · GW(p)

Just curious -- are you teaching at a math camp? Which one? (I have a lot of friends from Canada/USA Mathcamp, although I didn't go myself.)

Replies from: zedzed
comment by zedzed · 2014-05-20T20:06:39.950Z · LW(p) · GW(p)

No. I know one of my former teachers outside of school, and we decided it would be a good thing if I ran an afterschool program for the Mathcounts kids after it had ended.

comment by ShardPhoenix · 2014-05-22T11:38:24.259Z · LW(p) · GW(p)

Where is somewhere to go for decent discussion on the internet? I'm tired of how intellectually mediocre reddit is, but this place is kind of dead.

Replies from: MathiasZaman, blacktrance, spqr0a1, None, Metus
comment by MathiasZaman · 2014-05-22T23:13:16.701Z · LW(p) · GW(p)

Alternative: Liven up Less Wrong. I'm not sure how to do that, but it's a possible solution to your problem.

Replies from: John_Maxwell_IV
comment by John_Maxwell (John_Maxwell_IV) · 2014-05-23T06:28:48.525Z · LW(p) · GW(p)

If you want to make LW livelier, you should downvote less on the margin... downvoting disincentivizes posting. It makes sense to downvote if there's lots of content and you want to help other people cut through the crap. But if there's too little it's arguably less useful.

Also develop your interesting thoughts and create posts out of them.

comment by blacktrance · 2014-05-22T16:59:45.868Z · LW(p) · GW(p)

Slate Star Codex comments have smart people and a significant overlap with LW, but the interface isn't great (comment threading stops after it gets to a certain level of depth, etc). Alternatively, it may help to be more selective on reddit - no default subreddits, for example.

comment by spqr0a1 · 2014-05-22T15:01:14.719Z · LW(p) · GW(p)

Check out metafilter.

Replies from: Lumifer
comment by Lumifer · 2014-05-22T16:44:20.644Z · LW(p) · GW(p)

Check out metafilter.

Its survival is in doubt. In particular, "The site is currently and has been for several months operating at a significant loss. If nothing were to change, MeFi would be defaulting on bills and hitting bankruptcy by mid-summer."

comment by [deleted] · 2014-05-22T12:58:33.708Z · LW(p) · GW(p)

Also looking for LW replacement, with no current success.

Replies from: shminux
comment by Shmi (shminux) · 2014-05-22T17:12:14.294Z · LW(p) · GW(p)

This question occasionally comes up on #lesswrong, too, especially given the perceived decline in the quality of LW discussions in the last year or so. There are various stackoverflow-based sites for quality discussions of very specific topics, but I am not aware of anything more general. Various subreddits unfortunately tend to be swarmed by inanity.

comment by Metus · 2014-05-22T13:48:29.659Z · LW(p) · GW(p)

So LW but bigger? I think you are out of luck there.

comment by tgb · 2014-05-19T15:33:11.149Z · LW(p) · GW(p)

This just struck me: people always credit WWII as the thing that got the US out of the Great Depression. We've all seen the graph (like the one at the top of this paper) where the standard of living drops precipitously during the Great Depression and then more than recovers during WWII.

How in the world did that work? Why is it that suddenly pouring huge resources out of the country into a massive utility-sink that didn't exist until the start of the war rapidly brought up the standard of living? This makes no sense to me.

The only plausible explanation I can think of is that they somehow borrowed from the future, using the necessities of war as justification. I feel like that would involve a dip in the growth rate after WWII -- and there is one, but it just dips back down to the trend-line, not below it, as I would expect if they had genuinely borrowed enough from the future to offset such a large downturn as the Great Depression. The only other thing seems to be externalities.

However this goes, this seems to be a huge argument in favor of big-government spending (if we get this much utility from the government building things that literally explode themselves without providing non-military utility, then in a time of peace, we should be able to get even more by having the government build things like high-tech infrastructure, places of beauty, peaceful scientific research, large-scale engineering projects, etc.). So should we be spending 20-40% of our GDP on peace-time government mega-projects? It's either that or this piece of common knowledge is wrong (and we all know how reliable common knowledge is!).

Or I'm wrong, of course. So what is it?

(Bonus question: why didn't WWI see a similar boost in living standards?)

Replies from: Vaniver, Unnamed, chaosmage, knb, solipsist, pcm, pianoforte611
comment by Vaniver · 2014-05-19T18:20:09.107Z · LW(p) · GW(p)

How in the world did that work?

It didn't. This is the argument in image form, and you can find similar ones for employment (basically, when you conscript people, unemployment goes down. Shocking!). There are lots of libertarian articles on the subject--this might be an alright introduction--but the basic argument is that standards of living dropped (that's what happens when food is rationed and metal is used for tanks instead of cars or household appliances) but the government spending on bombs and soldiers made the GDP numbers go up, and then the post-war boost in standards of living was mostly due to deferred spending.

Replies from: solipsist
comment by solipsist · 2014-05-20T01:01:37.507Z · LW(p) · GW(p)

Note: as the article implies, the above viewpoint is not representative of mainstream economic consensus.

Replies from: knb
comment by knb · 2014-05-21T00:44:00.159Z · LW(p) · GW(p)

What tgb stated above was factually incorrect--WWII did not increase living standards. While most economists credit WWII with kickstarting GDP growth and cutting unemployment, I don't know anyone who would actually argue that living standards rose during WWII.

Replies from: tgb
comment by tgb · 2014-05-21T20:54:59.816Z · LW(p) · GW(p)

Krugman doesn't quiiiite come out and say it, but he sure seems to want the reader to infer that living standards rose: http://krugman.blogs.nytimes.com/2011/08/15/oh-what-a-lovely-war/ And in that article, he quotes a passage from Rick Perry's book saying that the recovery happened because of WW2 (due to forcing FDR to "unleash private enterprise", oddly).

So maybe no one actually makes that argument, but boy it's common for people (economists and politicians!) to imply it. (Look at the contortions Perry goes through to not have to refute it!) It's always nice to notice the confusion a cached thought should have made all along.

Replies from: knb
comment by knb · 2014-05-21T23:33:57.748Z · LW(p) · GW(p)

I think you're reading way too much into Krugman's argument. I don't read Krugman as trying to imply that living standards rose during WWII. He doesn't even mention living standards. When economists talk about ending a recession or ending a depression, they mean something technical. Krugman was just talking about increased production and lowered unemployment, etc.

Frankly it seems bizarre to me that anyone would believe that crashing consumer spending + mass shortages = better living standards. It is fair to say that people had a better attitude about their economic deprivation, since it had a patriotic purpose in serving the war effort.

Replies from: tgb
comment by tgb · 2014-05-23T15:31:22.915Z · LW(p) · GW(p)

I think it's clear that you know more about what economists mean than I do, but when the typical person hears that a depression is ending, they imagine people being happier than they were before. I'm not really claiming that anyone thinks that crashing consumer spending + mass shortages = better living standards, just that the average Joe in the US hears about the depression ending and not about those negative things.

Anyway, not sure what point I'm trying to make since I think you already know what I'm saying.

comment by Unnamed · 2014-05-19T19:45:21.409Z · LW(p) · GW(p)

One simple model which seems to fit the "WWII ending the depression" piece of data (and which might have some overlap with the truth) is that it's relatively difficult to put idle resources into use, and significantly easier to repurpose resources that have been in use for other uses.

During the depression, a bunch of people were unemployed, factories were not running, storefronts were empty, etc. According to this model, under those economic conditions there were significant barriers to taking those idle resources and putting them to productive use.

Then WWII came and forced the country to mobilize and put those resources to use (even if that use was just to make stuff which would be shipped off to Europe and the Pacific to be destroyed). Once the war was over, those resources which had been devoted to war could be repurposed (with relatively little friction) to uses with a much more positive effect on people's standard of living. So things became good according to meaningful metrics like living standards, not merely according to metrics like unemployment rate or total output which ignore the fact that building a tank to send to war isn't valuable in the same way as building a car for local consumers.

The glaring open question here is why there might be this asymmetry between putting idle resources to use and repurposing in-use resources. Which is closely related to the question of why recessions/depressions exist at all (as more than momentary blips): once a recession hits and bunch of people become unemployed (and other resources go idle), why doesn't the market immediately jump in to snap up those idle resources? This article gets into some of the attempts to answer those questions.

(Bonus answer: World War One did not happen during a depression, so mobilizing for war mostly involved repurposing resources which had served other uses in peacetime rather than bringing idle resources into use.)

Replies from: tgb
comment by tgb · 2014-05-21T20:01:17.474Z · LW(p) · GW(p)

I like that this explanation gives a good reason for why this kind of spending could only work to fix a depression or similar situation versus always inflating standards of living. Thanks.

comment by chaosmage · 2014-05-20T12:06:22.437Z · LW(p) · GW(p)

I'm not sure how much it influenced the overall picture, but there was quite a brain drain to the US before and during WWII (mostly Jewish refugees) as well as after (Wernher von Braun and the like). Migrating away from the Nazi and Stalinist spheres of influence demonstrates intelligence, and the ability to enter the US despite the complex “national origins quota system” that went into effect in 1929 demonstrates persistence, affluence and/or marketable skills, so I estimate these immigrants gave a significant boost to the US economy.

Replies from: None
comment by [deleted] · 2014-05-20T21:24:17.108Z · LW(p) · GW(p)

Also: salt iodization in 1924. Possibly also widespread flour enrichment in the early 1940s due to both Army incentivization and the need for alternate nutrient sources during rationing.

comment by knb · 2014-05-21T01:16:46.132Z · LW(p) · GW(p)

However this goes, this seems to be a huge argument in favor of big-government spending (if we get this much utility from the government building things that literally explode themselves without providing non-military utility, then in a time of peace, we should be able to get even more by having the government build things like high-tech infrastructure, places of beauty, peaceful scientific research, large-scale engineering projects, etc.). So should we be spending 20-40% of our GDP on peace-time government mega-projects? It's either that or this piece of common knowledge is wrong (and we all know how reliable common knowledge is!).

I'm surprised no one has explained this yet, but this is wrong according to standard economic theory as I understand it.

  1. The United States suffered from terrible monetary policy during the Great Depression.
  2. Due to "animal spirits" and "sticky wages" this caused large scale unemployment and output well below our production possibilities frontier.
  3. World War II caused the government to kickstart production for the war effort.
  4. Living standards actually didn't rise, although GDP did (GDP per capita is NOT the same as living standards). Consumption was dramatically deferred during the war. People had fewer babies, bought fewer consumer products (and fewer were produced) and shifted toward home production for some necessities.
  5. There was a short recession as the end of the war lowered demand, but pent-up consumer demand quickly re-stabilized the economy.

The point is WWII helped the economy because we were well under our production possibilities frontier during the depression. Peace-time mega projects would only be helpful under recessed/depressed conditions, and fortunately, we now can use monetary policy to produce similar effects.

Anyway, the argument you were making seems pretty common among people who don't follow economics debates, and in fact is one of the major policy recommendations of the oddball Lyndon LaRouche cult.

Replies from: tgb
comment by tgb · 2014-05-21T20:43:30.104Z · LW(p) · GW(p)

Do you know of a typical measure (or component) of living standards that would have been measured for the US across both the Great Depression and WW2? The standard story I have heard informally is that WWII efforts did actually increase standards of living. I'm not surprised to learn that that's false, but given the level of consensus in the group-think I've encountered, I'd be interested in seeing some hard numbers. Plus, I'm interested in seeing whether there was a drop in living standards.

comment by solipsist · 2014-05-21T00:07:13.999Z · LW(p) · GW(p)

The labor force of the 1930s was sapped by over-allocation in unproductive industries. Specifically, much of the labor share was occupied in the sitting around feeling depressed and wishing you had a job industry. Economic conditions improved as workers shifted out of that industry and into more productive ones, such as all of them.

Replies from: army1987
comment by A1987dM (army1987) · 2014-05-21T17:13:06.204Z · LW(p) · GW(p)

ADB I'm not sure what your intended connotations are, but I'd guess I'd OC.

comment by pcm · 2014-05-20T02:25:16.197Z · LW(p) · GW(p)

Part of it is that deflation in the early 1930s meant that workers were overpaid relative to the value of goods they produced (wages being harder to cut than prices). That caused wasteful amounts of unemployment. WWII triggered inflation, and combined with wage controls caused wages to become low relative to goods, shifting the labor supply and demand to the opposite extreme.

The people who were employed pre-war presumably had their standard of living lowered in the war (after having it increased a good deal during the deflation).

I won't try to explain here why deflation and inflation happened when they did, or why wages are hard to cut (look for "sticky wages" for info about the latter).

comment by pianoforte611 · 2014-05-19T17:52:32.413Z · LW(p) · GW(p)

I assumed it was because it motivated people into becoming much more productive.

Replies from: Protagoras
comment by Protagoras · 2014-05-20T20:16:03.207Z · LW(p) · GW(p)

It looks like this has been an unpopular suggestion, but I wouldn't discount motivation completely. A lot of early 20th century economists thought centrally planned economies were a great idea, based on the evidence of how productive various centrally planned war economies had been. Presumably there's some explanation for why central planning works better (or doesn't fail as badly) with war economies compared with peacetime economies, and I've always suspected that people's motivation to help the country in wartime was probably one of the factors.

comment by patrickmclaren · 2014-05-24T00:42:42.615Z · LW(p) · GW(p)

I've been searching LessWrong for prior discussions on Anxiety and I'm not getting very many hits. This surprised me. Obviously there have been well-developed discussions on akrasia and ugh fields, yet little regarding their evil siblings Anxiety, Panic, and Mania.

I'd be interested to hear what people have to say about these topics from a rationalist's perspective. I wonder if anyone has developed any tricks to calm the storm and search for a third alternative.

Of course, first and foremost, in such situations one should seek medical advice.

EDIT: Some very slightly related discussions: Don't Fear Failure, Hoping to start a discussion about overcoming insecurity.

Replies from: ChristianKl, TylerJay
comment by ChristianKl · 2014-05-30T17:50:45.228Z · LW(p) · GW(p)

I think you'll probably find some relevant hits if you search for depression. In particular you will find recommendations of Burns's The Feeling Good Handbook.

comment by TylerJay · 2014-05-27T01:10:28.996Z · LW(p) · GW(p)

A combination of controlled breathing, visualization, and mantra is pretty effective for me at battling acute anxiety and panic attacks. Personally, I use the Litany Against Fear from Dune. I'm happy to elaborate if there's any interest.

comment by jaime2000 · 2014-05-21T13:46:11.266Z · LW(p) · GW(p)

I just realized you can model low time preference as a high degree of cooperation between instances of yourself across time, so that earlier instances of you sacrifice themselves to give later instances a higher payoff. By contrast, a high time preference consists of instances of you each trying to do whatever benefits them most at the time, later instances be damned.
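
A minimal sketch of this framing in Python, with exponential discounting and an invented one-shot choice per time-instance (the payoff numbers are made up for illustration, not from the comment above):

    # Toy model: each time-instance of you either "defects" (takes 1 util now)
    # or "cooperates" (takes 0 now so that the next instance receives 3).
    # An instance weighs its successor's payoff by a discount factor delta;
    # low time preference corresponds to delta close to 1.

    def instance_value(action, delta):
        """Payoff to the current instance, including the discounted successor."""
        if action == "defect":
            return 1.0           # grab the immediate reward
        return delta * 3.0       # forgo it; the successor's 3 utils, discounted

    for delta in (0.2, 0.5, 0.9):
        best = max(("defect", "cooperate"),
                   key=lambda a: instance_value(a, delta))
        print(f"delta={delta}: best action is {best}")
    # delta=0.2 -> defect; delta=0.5 and 0.9 -> cooperate:
    # the more weight you put on your future selves, the more you cooperate.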

Replies from: Raythen, army1987
comment by Raythen · 2014-05-21T16:08:53.111Z · LW(p) · GW(p)

That makes sense. Even cooperating across short time frames might be problematic - "I'll stay in bed for 10 more minutes, even if it means that me-in-10-minutes will be stressed out and might be late for work"

I prefer to see long-term thinking as increased integration among different time-selves rather than as a sacrifice, though - it's not a sacrifice to take actions with a delayed payoff if your utility function puts a high weight on your future selves' wellbeing.

Replies from: AlexSchell
comment by AlexSchell · 2014-05-31T06:07:12.935Z · LW(p) · GW(p)

Your definition of sacrifice seems to exclude some instances of literal self-sacrifice.

comment by Richard_Kennaway · 2014-05-19T08:13:01.158Z · LW(p) · GW(p)

This is a test posting to determine the time zone of the timestamps, posted at 09:13 BST / 08:13 UTC.

ETA: it's UTC.

comment by charlemango · 2014-05-23T02:25:10.974Z · LW(p) · GW(p)

What would happen if citizens had direct control over where their tax dollars went? Imagine a system like this: the United States government raises the average person's tax by 3% (while preserving the current progressive tax rates). This will be a "vote-with-your-wallet" tax, where the citizen can choose where the money should go. For example, he may choose to allocate his tax funds towards the education budget, whereas someone else may choose to put the money towards healthcare instead. Such a system would have the benefit of being democratic in deciding the nation's priorities, while bypassing political gridlock. What would be the consequences of this system?

Replies from: Nornagest, NancyLebovitz, army1987, JoshuaFox
comment by Nornagest · 2014-05-23T18:18:49.995Z · LW(p) · GW(p)

The biggest problem I can see with this is inefficient resource allocation. Others have mentioned ways of giving money to yourself, but we could probably minimize that with conflict-of-interest controls or by scoping budgetary buckets correctly. But there's no reason, even in principle, to think that the public's willingness to donate to a government office corresponds usefully to its actual needs.

As a toy example, let's say the public really likes puppies and decides to put, say, 1% of GDP into puppy shelters and puppy-related veterinary programs. Diminishing returns kick in at 0.1% of GDP; puppies are still being saved, but at that point marginal dollars would be doing more good directed at kitten shelters (which were too busy herding cats to spend time on outreach in the run-up to tax season). The last puppy is saved at 0.5% of GDP, and the remaining 0.5% -- after a modest indirect subsidy to casinos and makers of exotic sports cars -- goes into the newly minted Bureau for Puppy Salvation's public education fund.

Next tax cycle, that investment pays off and puppies get 2% of GDP.

comment by NancyLebovitz · 2014-05-23T02:35:02.734Z · LW(p) · GW(p)

There would be a lot of advertising.

Replies from: charlemango
comment by charlemango · 2014-05-23T03:19:47.277Z · LW(p) · GW(p)

I think it would be a plus. Americans would be forced to actually consider which issues are important to them.

comment by A1987dM (army1987) · 2014-05-25T14:41:12.498Z · LW(p) · GW(p)

In Italy there's something similar: you can choose whether 0.8% of your income taxes goes to the government or to an organized religion of your choice (if you don't choose, it's shared in proportion to the number of people who choose each church), and 0.5% goes to a non-profit or research organization of your choice.

comment by JoshuaFox · 2014-05-23T06:43:27.540Z · LW(p) · GW(p)

What would happen if citizens had direct control over where their tax dollars went?

That's what the free market looks like -- and the dollars involved are no longer tax.

I suppose the government could still tax and then ask you if you'd rather use it to buy a flatscreen TV for your living room or else better air conditioning for Army tents in Afghanistan, or they could even restrict options to typical government spending.

Take a look at Hanson's proposals for allocating government resources with prediction markets.

Replies from: MathiasZaman
comment by MathiasZaman · 2014-05-23T09:41:11.399Z · LW(p) · GW(p)

The scenario described is different from a free market in that you still have to pay taxes. You just get more control over how the government can spend your tax money. You can't use the money to buy a flatscreen TV, but you can decide if it gets spent on healthcare, military spending, NASA...

Replies from: ygert
comment by ygert · 2014-05-23T14:42:25.388Z · LW(p) · GW(p)

or they could even restrict options to typical government spending.

JoshuaFox noted that the government might tack on such restrictions.

That said, it's not so clear where the borders of such restrictions would be. Obviously you could choose to allocate the money to the big budget items, like healthcare or the military. But there are many smaller things that the government also pays for.

For example, the government maintains parks. Under this scheme, could I use my tax money to pay for the improvement of the park next to my house? After all, it's one of the many things that tax money often works towards. But if you answer affirmatively, then what if I work for some institute that gets government funding? Could I increase the size of the government grants we get? After all, I've always wanted a bigger budget...

Or what if I'm a government employee? Could I give my money to the part of government spending that is assigned as my salary?

I suppose the whole question is one of specificity. Am I allowed to give my money to a specific park, or do I have to give it to parks in general? Can I give it to a specific government employee, or do I have to give it to the salary budget of the department that employs that employee? Or do I have to give it to that department "as is", with no restrictions on what it is spent on?

The more specificity you add, the more abusable it is, and the more you take away, the closer it becomes to the current system. In fact, the current system is merely this exact proposal with the specificity dial turned down to the minimum.

Think about the continuum between what we have now and the free market (where you can control exactly where your money goes), and it becomes fairly clear that the only points which have a good reason to be used are the two extreme ends. If you advocate a point in the middle, you'll have a hard time justifying the choice of that particular point, as opposed to one further up or down.

Replies from: asr, Lumifer
comment by asr · 2014-05-23T15:55:39.453Z · LW(p) · GW(p)

Think about the continuum between what we have now and the free market (where you can control exactly where your money goes), and it becomes fairly clear that the only points which have a good reason to be used are the two extreme ends. If you advocate a point in the middle, you'll have a hard time justifying the choice of that particular point, as opposed to one further up or down.

I don't follow your argument here. We have some function that maps from "levels of individual control" to happiness outcomes. We want to find the maximum of this function. It might be that the endpoints are the max, or it might be that the max is in the middle.

Yes, it might be that there is no good justification for any particular precise value. But that seems both unsurprising and irrelevant. If you think that our utility function here is smooth, then sufficiently near the max, small changes in the level of social control would result in negligible changes in outcome. Once we're near enough the maximum, it's hard to tune precisely. What follows from this?

Replies from: ygert
comment by ygert · 2014-05-25T10:38:14.549Z · LW(p) · GW(p)

Hmm. To me it seemed intuitively clear that the function would be monotonic.

In retrospect, this monotonicity assumption may have been unjustified. I'll have to think more about what sort of curve this function follows.

comment by Lumifer · 2014-05-23T15:26:22.004Z · LW(p) · GW(p)

If you advocate a point in the middle, you'll have a hard time justifying the choice of that particular point, as opposed to one further up or down.

Trouble with justifying does not necessarily mean that the choice is unjustified.

I like to wash my hands in warm water. I would have a hard time justifying a particular water temperature, as opposed to one slightly colder or slightly warmer. This does not mean that "the only points which have a good reason to be used" are ice-cold water and boiling water.

Replies from: Randy_M
comment by Randy_M · 2014-05-23T17:56:44.349Z · LW(p) · GW(p)

You can't justify a point, but you could justify a range by specifying temperatures where it becomes uncomfortable. Actually, specifying a range is just specifying the given point with less resolution.

comment by Prismattic · 2014-05-22T03:30:33.854Z · LW(p) · GW(p)

Second Livestock

I feel there are many possible Lesswrong punchlines in response to this.

comment by Stefan_Schubert · 2014-05-20T13:43:34.466Z · LW(p) · GW(p)

Lots of people argue that governments should provide all citizens with an unconditional basic income. One problem with this is that it would be very expensive. If the government gave each person, say, 30% of GDP per capita (not a very high standard of living), it would have to raise 30% of GDP in taxes to cover that.

On the other hand, means-tested benefits have disadvantages too. They are administratively costly. Receiving them is seen as shameful in many countries. Most importantly, it is hard to create a means-tested system that doesn't create perverse incentives for those on benefits, since when you start working, you both lose your benefits and start paying taxes under such a system. That may mean that net income is a very small proportion of gross income for certain groups, incentivizing them to stay unemployed.

One middle route I've been toying with is that the government could provide people with cheap goods and services. People who were satisfied with them could settle for them, whereas those who wanted something fancier would have to pay out of their own pockets. The government would thus provide people with no-frills food - Soylent, perhaps - no-frills housing, etc., for free or at highly subsidized prices (it is important that they produce enough and/or set the prices so that demand doesn't outstrip supply, since otherwise you get queues - a perennial problem of subsidized goods and services).

Of course some well-off people might choose to consume these subsidized goods and services, and some poor people might not. Still, it should in general be quite redistributive. The advantage over the basic income system is that it would be considerably cheaper, since these goods and services would only be used by a part of the population. The advantage over the means-tested system is that people will still be allowed to use these goods and services if their income goes up, so it doesn't create perverse incentives.

Another advantage of this system is that it could perhaps rein in rampant consumerism somewhat. Parts of the population will be habituated to smaller apartments and less fancy food. Those who want to distinguish themselves from the masses - who want to consume conspicuously - will also be affected, since they will have to spend less to stand out from the crowd.

I guess this system already exists to some extent - e.g. in many countries, the government does provide you with education and health care, but rich people opt for private health care and private education. So the idea isn't novel - my suggestion is just to take it a bit further.

Replies from: chaosmage, kevin_p, DanielLC, NancyLebovitz, Kaj_Sotala, ChristianKl, Lumifer, drethelin
comment by chaosmage · 2014-05-20T17:38:52.583Z · LW(p) · GW(p)

A sharp divide between basic, subsidized, no-frills goods and services and other ones didn't work in the socialist German Democratic Republic (long story, reply if you need it). What does seem to work for various countries is different rates of value-added tax depending on the good or service - the greater the difference in taxation, the closer you get to the system you've described, but it is more gradual and can be fine-tuned. Maybe that could work for sales tax, too?

Replies from: None, None
comment by [deleted] · 2014-05-20T21:26:13.142Z · LW(p) · GW(p)

A sharp divide between basic, subsidized, no-frills goods and services and other ones didn't work in the socialist German Democratic Republic (long story, reply if you need it).

I'd be interested in hearing about this.

Replies from: chaosmage, army1987
comment by chaosmage · 2014-05-21T13:17:46.643Z · LW(p) · GW(p)

I'm no economist, but as a former citizen of that former country, this is what I could see.

There was a divide of basic goods and services and luxury ones. Basic ones would get subsidies and be sold pretty much at cost, luxury ones would get taxed extra to finance those subsidies.

The (practically entirely state-owned) industries that provided the basic type of goods and services were making very little profit and had no real incentive to improve their products, except to produce them cheaper and in greater numbers. Nobody was doing comparison shopping on those, after all. (Products from imperialist countries were expected to be better in every way, but that would often be explained away by capitalist exploitation, not taken as evidence that homemade ones could be better.) So for example, the country's standard (and almost only) car did not see significant improvements for decades, although the manufacturer had many ideas for new models. The old model had been defined as sufficient, so improving it was considered wasteful and all such plans were rejected by the economic planners.

The basic goods were of course popular, and due to their low price, demand was frequently not met. People would chance upon a shop that happened to have gotten a shipment of something rare and stand in line for hours to buy as much of it as they were permitted, to trade later. In the case of the (Trabant) car, you could register to buy one at a seriously discounted price if you went via an ever-growing waiting list that, near the end, might have you wait more than 15 years. Of course many who got a car this way sold it afterwards, pocketing the premium the buyer paid for not waiting.

Arguably more importantly, money was a lot better at getting you basic goods than luxury ones. So people tended to use money mostly for basic goods and services, and would naturally compare a luxury buy's value with those. When you can buy a (luxury) color TV at ten times the price of a (basic) black-and-white TV, it feels like you'd be paying nine basic TVs for adding color to the one you use. Empirically, people often simply saved their money and thus kept it out of circulation.

Housing was a mess, too. Rents were decreed to be very small. So there was no profit in renting out apartments, which again created a shortage of supply. (Private land ownership was considered bourgeois and thus not subsidized.) It got so bad that many young couples decided to have a child as early as possible, because that would help their application to receive a flat of their own and move out from their parents. And of course most buildings fell into disrepair - after all, there was no incentive to invest in providing higher quality for renters. This demonstrates again that making a basic good or service meant you'd always have demand, but that demand wouldn't benefit you much.

The production of luxury goods went better, partly because these were often exported for hard currency. The GDR had some industries that were fairly skilled at stealing capitalist innovations and producing products that incorporated them, for sale at fairly competitive prices. Artificially low prices and subsidies for certain goods pretty much ensured that most domestic consumption never benefited from that skill.

comment by A1987dM (army1987) · 2014-05-21T09:13:10.513Z · LW(p) · GW(p)

Start by googling "hard currency shop".

comment by [deleted] · 2014-05-20T19:20:23.549Z · LW(p) · GW(p)

A sharp divide between basic, subsidized, no-frills goods and services and other ones didn't work in the socialist German Democratic Republic

Nor did it in other Soviet block countries, e.g. People's Republic of Poland.

comment by kevin_p · 2014-05-21T02:53:30.205Z · LW(p) · GW(p)

"Those who want to distinguish themselves from the masses - who want to consume conspiciously - will also be affected, since they will have to spend less to stand out from the crowd" - maybe I've misunderstood this, but surely it would have the opposite result? Let's say rents are ~$20/sqm (adjust for your own city; the principle stays the same). If I want my apartment to be 50 sqm rather than 40 sqm, that's an extra $200. But if 40 sqm apartments were free, the price difference would be the full $1000/month price of the bigger apartment. You've still got a cliff, just like in the means-tested welfare case; it's just that now it's on the consumption side.

In practice this would probably destroy the market for mid-priced goods - who wants to pay $1000/month just for an extra 10 square meters? Non-subsidized goods will only start being attractive when they get much better than the stuff the government provides, not just slightly better.

Also, if you give out goods rather than money, you're going to have to provide a huge range of different goods/services, because otherwise there will be whole categories of products that people who legitimately can't work (elderly, disabled etc) won't have access to. And if you do that, the efficiency of your economy is going to go way down - not just because the government is generally less efficient than the free market, but also because people can't use money to allocate resources according to their own preferences.

Replies from: Stefan_Schubert
comment by Stefan_Schubert · 2014-05-21T10:14:44.793Z · LW(p) · GW(p)

You've still got a cliff, just like in the means-tested welfare case; it's just that now it's on the consumption side.

Yes, that's what it's like (only the cliff is actually usually less steep under means-tested welfare). And you're also right about this:

In practice this would probably destroy the market for mid-priced goods - who wants to pay $1000/month just for an extra 10 square meters?

To clarify, I should say that my idea was that these subsidized or free goods and services would be so frugal that they would in effect not be an option to the majority of the population. Hence, it's not exactly the market for mid-priced goods, but the market for "low-priced but not extremely low-priced goods" that would get destroyed.

To your main point: since some people lower their standard, thanks to the fact that by doing so they can get significantly cheaper goods, the average standard will go down. Now say that to get the average standard before this reform you had to pay 1000 dollars a month, but after the reform you only have to pay 900 dollars a month (because the average standard is now lower). Then those who want a higher-than-average standard will only have to pay more than 900 dollars rather than more than 1000.

The actual story might be more complicated than this - e.g., what some people really might be interested in is having a higher standard than the mean, or than the first eight deciles, or what-not. But generally it seems intuitive to me that if parts of the population lower their standards, then those who want to consume conspicuously will also lower theirs.

Also, if you give out goods rather than money, you're going to have to provide a huge range of different goods/services, because otherwise there will be whole categories of products that people who legitimately can't work (elderly, disabled etc) won't have access to.

I don't see this as a comprehensive system: rather, you would just use it for some important goods and services: food, housing, education, health care, public transport (in fact, the system is already used for the latter three; possibly housing too, though most subsidized housing is means-tested, which it wouldn't be under this system). The system would be too complicated otherwise. Possibly it could be combined with a low UBI.

comment by DanielLC · 2014-05-20T18:17:37.195Z · LW(p) · GW(p)

If the government gave each person, say, 30% of GDP per capita (not a very high standard of living), it would have to raise 30% of GDP in taxes to cover that.

In 2002, total U.S. social welfare expenditure constituted over 35% of GDP.

I think that would be too high anyway. Since anyone who bothers to work can make more than that, and the reduction in labor supply would increase pay, and any money you save will last you longer, there's little reason to make it enough for people to be well off, as opposed to just enough to scrape by.

It's also worth noting that most people will get a significant portion of that money back. If you make below the mean income (which most people do, since it's positively skewed) you will end up getting all of it back.

It seems unfair to charge people the entire price to get slightly better goods. Thus, if you want to get slightly better goods, the government should still reimburse you for the price of the cheap goods. At this point, it's just unconditional basic income with the government selling cheap goods.

As a minor point, Soylent as it is now can't be considered no-frills food. If you buy it ready-made, it costs around $10 a day.

Replies from: Stefan_Schubert
comment by Stefan_Schubert · 2014-05-21T10:02:27.914Z · LW(p) · GW(p)

It seems unfair to charge people the entire price to get slightly better goods. Thus, if you want to get slightly better goods, the government should still reimburse you for the price of the cheap goods. At this point, it's just unconditional basic income with the government selling cheap goods.

What you do then is in effect (if I understand you correctly) give them a "food voucher" (and similarly a "housing voucher", etc.) worth a certain amount, which they would be able to spend as they saw fit (but only on food/housing, what-not). Such a system doesn't seem very clever (as you imply): in that case, it would be better to just give people money in the form of an unconditional basic income.

I'm not sure why it would be so unfair not to reimburse people who want more expensive goods, though. Of course, the government does to a certain extent discriminate in favour of those with more frugal preferences in this set-up. But one of my points is precisely that we want people to develop more frugal tastes - to spend less on, e.g., housing and food. There is a "conspicuous consumption" arms race going on concerning these and many other goods, which this system is intended to mitigate to some extent.

Replies from: Raythen, DanielLC
comment by Raythen · 2014-05-21T10:51:49.141Z · LW(p) · GW(p)

Different people have different needs. Some people would be happy in cheap housing and others wouldn't - maybe they're more sensitive to sounds, environmental conditions, or whatever else the difference is between cheap housing and more expensive housing.

The point is, there's no basic standard that would satisfy everyone (unless it's a reasonably high standard, which isn't what is proposed here). Some people would consider more expensive goods and services NEEDS rather than luxuries, and for good reason - consuming cheaper alternatives might not kill them, but it could make them depressed, less healthy, and less productive (for example).

So it is unfair to subsidize certain goods and services and not others - one might wonder "why is my neighbor getting her needs met for cheap, while I have to pay full price to meet my needs?"

comment by DanielLC · 2014-05-21T19:29:22.000Z · LW(p) · GW(p)

I'm not sure why it would be so unfair not to reimburse people who want more expensive goods, though.

If it costs $1.00 to make the basic food, and $1.10 to make slightly better food, and someone is willing to pay the difference, shouldn't they get the slightly better food?

Maybe it's not a big deal that nobody will eat anything that costs between $1.00 and $2.00. That's not a lot of deadweight cost. It's only around a dollar a person. But this will apply to everything you're paying for, which we have established is significant. If it costs $300 a month for cheap housing, and you virtually eliminate any housing that costs less than $600 a month, that is a lot of deadweight cost.

comment by NancyLebovitz · 2014-05-20T16:03:55.859Z · LW(p) · GW(p)

If a government produces goods, the results tend to be low quality (education may be an exception in some places).

The cost of a guaranteed minimum income may not be quite as high as you think-- it would replace a lot of more complicated government support. Also, it might be possible to build in some social rewards for not taking it if you don't need it.

Replies from: Stefan_Schubert
comment by Stefan_Schubert · 2014-05-21T10:34:34.869Z · LW(p) · GW(p)

The government wouldn't have to produce the low-standard/cheap goods and services. They could be produced by private companies. My point is just that the government would subsidize them (possibly to the point where they become free).

comment by Kaj_Sotala · 2014-05-21T06:57:44.967Z · LW(p) · GW(p)

The advantage over the basic income system is that it would be considerably cheaper, since these goods and services would only be used by a part of the population. The advantage over the means-tested system is that people will still be allowed to use these goods and services if their income goes up, so it doesn't create perverse incentives.

The universal basic income schemes that seem the most reasonable to me adjust the taxation so that, while the UBI itself is never taxed, if you make a lot of money then your non-UBI earnings get an extra tax so that the whole reform ends up having very little direct effect on you. In effect, that ends up covering the "only used by a part of the population" criteria. The perverse incentives can't be avoided entirely, but they can be mitigated somewhat if the tax system is set up so that you're always better off working than not working.

For a concrete example, there's e.g. this 2007 proposal by the Finnish Green party. Your working wage (in euros per month) is on the X-axis, your total income afterwards is on the Y-axis. Light green is the basic income, dark green is your after-tax wage, red is paid in tax. According to their calculations, this scheme would have been approximately cost-neutral (compared to what the Finnish state normally gets in tax income and pays out in welfare).
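
A toy calculation of that "always better off working" property (the basic income level and the flat clawback rate below are placeholder numbers, not the Greens' actual figures):

    # Toy UBI with a clawback tax: everyone keeps the untaxed basic income,
    # and wages are taxed at a flat rate, so net income rises with every
    # extra euro earned - there is no benefit cliff.

    UBI = 440      # euros/month, never taxed (placeholder figure)
    TAX = 0.40     # flat clawback tax on wage income (assumed, illustrative)

    def net_income(wage):
        return UBI + wage * (1 - TAX)

    for wage in (0, 500, 1000, 2000, 4000):
        print(f"gross {wage:>4} -> net {net_income(wage):6.0f}")
    # Net income is strictly increasing in the gross wage, so taking a job
    # never makes you worse off, unlike under a means-tested cliff.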

Replies from: Stefan_Schubert
comment by Stefan_Schubert · 2014-05-21T10:28:36.716Z · LW(p) · GW(p)

Thanks, that's interesting. 440 euros is not a lot, though - could you live in Helsinki on that (in 2007)? Is this supposed to replace, for instance, unemployment benefits (which I'm sure are much higher)? If so, this system would make some people who aren't that well off worse off.

One thing that is seldom noted is that the Scandinavian "general welfare states" are in effect halfway to the UBI. In Sweden, and I would guess the other Scandinavian countries as well, everyone gets a significant pension no matter what, child benefits are not means-tested, etc. Also virtually everyone uses public schools, public health care, public universities and public child care (all of which are either heavily subsidized or free). So it's not a question of having either an Anglo-Saxon system where benefits mostly go to the poor or a UBI system; there are other options.

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2014-05-21T12:10:30.824Z · LW(p) · GW(p)

440 euros is almost the same amount as direct student benefits were in 2007, though that's not taking into account the fact that most students also have access to subsidized housing which helps substantially. On the other hand, the proposed UBI model would have maintained as separate systems the current Finnish system of "housing benefits" (which pays a part of your rent if you're low-income, exact amount depending on the city so as to take into account varying price levels around the country) as well as "income support", which is supposed to be a last-resort aid that pays for your expenses if you can show that you have reasonable needs that you just can't meet in any other way. So we might be able to say that in total, the effective total support paid to someone on basic income would have been roughly comparable to that paid to a student in 2007.

Some students manage to live on that whereas some need to take part-time jobs to supplement it, which seems to be roughly the right level to aim for - doable if you're really frugal about your expenses, but low enough that it will still encourage you to find work regardless. Might need to increase child benefits a bit in order to ensure that it's doable even if you're having a family, though.

The Greens' proposed UBI would have replaced "all welfare's minimum benefits", so other benefits that currently pay out about the same amount. That would include student benefits and the very lowest level of unemployment benefit (which you AFAIK get if your former job paid you hardly anything, basically), but it wouldn't replace e.g. higher levels of unemployment benefits.

Replies from: Stefan_Schubert
comment by Stefan_Schubert · 2014-05-21T14:47:37.459Z · LW(p) · GW(p)

Thanks, that's interesting and comprehensive.

Housing benefits are an alternative to the idea discussed here, i.e. subsidizing particular low-cost, low-standard flats. However, the problem with housing benefits is that you tend to get more of them if you have a higher rent, and thus you in effect reward people with more expensive tastes, which leads to a general increase in housing consumption. My proposal is intended to have the exact opposite consequence.

I'm not that averse to the UBI, but there is something counter-intuitive about the idea that rich people first pay taxes and then get benefits back. This forces you to either lower the level of the basic income (or other government expenditure) or raise taxes. My suggestion is intended to take care of this without having to resort to means-testing.

comment by ChristianKl · 2014-05-30T18:39:54.176Z · LW(p) · GW(p)

Lots of people argue that governments should provide all citizens with an unconditional basic income. One problem with this is that it would be very expensive.

You are missing the point. It's cheaper to give the poor an unconditional basic income than to maintain a huge bureaucratic administration that makes sure they meet certain conditions to be eligible for welfare payments.

That might mean a low basic income, but it would still be an unconditional basic income. Don't confuse the debate about an unconditional income with the debate about how high it - or welfare payments to the poorest - should be.

I guess this system already exists to some extent - e.g. in many countries, the government does provide you with education and health care

Actually you are looking at the wrong countries. Countries like Iran would be an example, where essential goods like food get heavily subsidized.

There are many reasons why subsidies are a bad idea. They produce incentives for companies to lobby heavily to be included. They encourage people to waste products that get subsidized. They need bureaucracy to be organised. They prevent innovation, because new products usually don't fit the template under which old products are subsidized.

Replies from: DanielLC
comment by DanielLC · 2014-06-04T00:41:20.056Z · LW(p) · GW(p)

It's cheaper to give the poor an unconditional basic income than to maintain a huge bureaucratic administration that makes sure they meet certain conditions to be eligible for welfare payments.

I decided to see what I could find on how much the administrative costs are, and I found this: http://mediamatters.org/research/2005/09/21/limbaugh-dramatically-overstated-administrative/133859

The most useful part seems to be this line:

Finally, the report estimated that the federal administrative costs amounted to $12,452,000,000 for the 11 programs studied -- 6.4 percent of total federal expenditures on these programs.

That doesn't sound like much of an issue.

comment by Lumifer · 2014-05-21T15:28:18.842Z · LW(p) · GW(p)

One middle route I've been toying with is that the government could provide people with cheap goods and services.

This is a popular practice in the third world.

See e.g. this or this.

comment by drethelin · 2014-05-22T04:46:40.107Z · LW(p) · GW(p)

How is this better than Walmart and McDonald's?

comment by gmzamz · 2014-05-20T04:23:09.441Z · LW(p) · GW(p)

Regarding networks: is there a colloquially accepted term for when one has a ton of descriptive words (furry, bread-sized, purrs when you pet them, claws, domesticated, hunts mice, etc.) but you do not have the colloquially accepted term (cat) for the network? I have searched high and low, and the most I have found is reverse definition search, but no actual term.

Replies from: Emily, Richard_Kennaway, knb, satt
comment by Emily · 2014-05-20T09:34:19.664Z · LW(p) · GW(p)

Not quite what you're looking for I think, but if someone is having that problem they might have anomic aphasia.

comment by Richard_Kennaway · 2014-05-20T05:59:14.092Z · LW(p) · GW(p)

"Not having a word for it"? Or in the technical vocabulary of linguistics, the concept is not "lexicalised".

comment by knb · 2014-05-21T01:31:44.856Z · LW(p) · GW(p)

Sounds kind of like the Tip of the Tongue Effect

Replies from: army1987
comment by A1987dM (army1987) · 2014-05-21T17:17:58.219Z · LW(p) · GW(p)

That's a particular subcase of it, when you know that there's a word for that concept and you've heard it but you can't remember it. But other times it's more like “there should be a word for this”.

Replies from: satt
comment by satt · 2014-05-21T22:36:16.660Z · LW(p) · GW(p)

But other times it's more like “there should be a word for this”.

However, that's distinct from what gmzamz asked about: occasions when "you do not have the colloquially accepted term" for something.

comment by satt · 2014-05-21T00:59:16.864Z · LW(p) · GW(p)

I've heard "anomia" and "being able to talk all around the idea of an [X] but not the word [X] itself".

comment by wnoise · 2014-05-20T03:35:55.371Z · LW(p) · GW(p)

A video of Daniel Dennett giving an excellent talk on free will at the Santa Fe Institute: https://www.youtube.com/watch?v=wGPIzSe5cAU It largely follows the general Less Wrong consensus, but dives into how this construction is useful in the punishment and moral agent contexts more than I've seen developed here.

comment by ike · 2014-05-22T19:56:38.552Z · LW(p) · GW(p)

Hack the SAT essay:

First, some background: the SAT has an essay, graded on a scale from 1 to 6. The essay scoring guidelines are here. I'll quote the important ones for my purposes:

“Each essay is independently scored by two readers on a scale from 1 to 6. These readers' scores are combined to produce the 2-12 scale. The essay readers are experienced and trained high school and college teachers.” “Essays not written on the essay assignment will receive a score of zero”

Reports vary, but apparently most graders spend between 90 seconds and two and a half minutes on each essay.

My challenge, inspired by the AI-box experiment, is as follows. You are an AI taking the test. You need to write something off-topic that will convince both graders to give you a six. (Or, if the two graders disagree by more than one point, a third grader takes over, and you only need to convince them.) You have 25 minutes to actually write it, but unlimited time to plan in advance. You could probably draw anything, not just writing, but you run the risk of them seeing a picture and immediately giving a zero without having time to get hacked.

I've come up with two ideas so far:

  • Writing a sob story about how the essay prompt is misprinted on your page (although I don't think that would work)
  • Threatening to commit suicide if the grader doesn't give you a six (would probably result in them calling the police)

I didn't think either of them was very good, but I like the concept. Some rules: no paying them off or threatening them with physical harm.

Can anyone come up with better ideas?

I'm putting this on open thread because it's my first real post, and I'm not sure of the reaction.

Replies from: Alejandro1, DanielLC, palladias, shminux, chaosmage, Dorikka
comment by Alejandro1 · 2014-05-22T21:15:54.872Z · LW(p) · GW(p)

First observation: Surely any entity intelligent enough to hack the essay according to the rules you have set is also intelligent enough to get the maximum grade (much more easily) by the usual means of writing the assigned essay…

Second observation: Since the concept of "being on topic" is vague (essentially, anything that humans interpret as being on a certain topic is on that topic) maybe the easiest way to hack it following your rules would be to write an essay that is not on topic by the criteria the designers of the exam had in mind, but that is close enough that it can confuse the graders into believing it is on topic. An analogy could be how some postmodernists confused people into believing they were doing philosophy...

Replies from: ike
comment by ike · 2014-05-23T15:17:07.686Z · LW(p) · GW(p)

On the point that any AI smart enough to do this could write a 12 essay: remember that you don't know the essay topic in advance, so you only have 25 minutes to write an on-topic essay, whereas an off-topic one can be planned ahead of time.

comment by DanielLC · 2014-05-22T23:33:46.964Z · LW(p) · GW(p)

This reminds me of something I've read about Isaac Asimov doing. He said that people tended not to believe him when he told them he didn't know anything about the subject he was asked to give a speech on. As a result, he started changing the subject.

He gave an example in which he was asked to give a speech on information retrieval or something. He didn't know anything about it beyond that it was apparently called "information retrieval". He basically said that Mendelian inheritance was discovered long before it was needed to solve certain problems in the theory of evolution, but nobody knew about it so it took a while to figure out the answer, so a better way to retrieve information would be helpful. Mostly he was just talking about Mendelian inheritance.

comment by palladias · 2014-05-24T02:06:03.253Z · LW(p) · GW(p)

Heh, part of the strategy I used when I took the SAT was slightly darkening my "two-bit" words with my pencil and making sure to fill exactly the space provided, minus one line. I had read (don't have the citation at hand) that length of essay tracks score pretty well. And, to clinch it, I wanted their (very brief) attention to be drawn to good words, used correctly.

Result: 12.

(Though, I think the main thing was just committing to writing a tight, formulaic essay. I outscored some friends who I thought were better writers than I was, because they were trying to write a good essay rather than a good SAT essay.)

comment by Shmi (shminux) · 2014-05-22T22:14:36.673Z · LW(p) · GW(p)

You have to reliably convince a grader in the 1-2 min they spend on it that your essay is in the top 1% or so (that's the fraction of perfect 12s), and the grader intuitively knows the score she'll give you within one point after 30 seconds or less. I doubt there is a sure way to do it without hitting their mental model of a perfect essay on all counts.

Replies from: ike
comment by ike · 2014-05-23T15:16:18.504Z · LW(p) · GW(p)

You need to reliably convince a grader that they should

  1. Take more time to look at the essay or
  2. Give a six, regardless of merit.

Few restrictions on how, like with the AI-box experiment. (You could tell them you're an AI, or an alien, or whatnot, as long as it's believable.)

comment by chaosmage · 2014-05-23T18:29:40.175Z · LW(p) · GW(p)

Write a subtly but powerfully persuasive narrative about how you've long been planning to become a teacher, and rate essays like this one, because obviously that is the job that ultimately decides what kind of minds will be in charge in the next generation. Include a mention of the off topic problem, and claim that the "official" topic of your essay is merely an element in a more important and more real topic: this situation, happening right now, of a real and complex relationship between the writer and rater that will, in a sense, continue for the rest of both people's lives, even if they never meet again.

I'd rate that a 6 anyway.

comment by Dorikka · 2014-05-22T20:27:13.925Z · LW(p) · GW(p)

There's always using a modified version of Pascal's mugging.

comment by sixes_and_sevens · 2014-05-19T17:02:36.673Z · LW(p) · GW(p)

A while ago I mentioned how I'd set up some regexes in my browser to alert me to certain suspicious words that might be indicative of weak points in arguments.

I still have this running. It didn't have the intended effect, but it is still slightly more useful than it is annoying. I keep on meaning to write a more sophisticated regex that can somehow distinguish the intended context of "rather" from unintended contexts. Natural language is annoying and irregular, etc., etc.

Just lately, I've been wondering if I could do this with more elaborate patterns of language. It's recently come to my attention that expressions of the form "in saying [X] (s)he is [Y]" are often indicative of sketchy value-judgement attribution. It's also very easy to capture with a regex. It's gone in the list.

So, my question: what patterns of language are (a) indicative of sloppy thinking, weak arguments, etc., and (b) reliably captured by a regex?

(In the back of my mind, I am imagining some sort of sanity-equivalent of a spelling and grammar check that you can apply to something you've just written, or something you're about to read. This is probably one of those projects I will start and then abandon, but for the time being it's fun to think about.)
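
A minimal sketch of such a checker, seeded only with patterns mentioned in this thread and its replies below (a real list would need tuning against false positives):

    import re

    # Phrases that may signal sloppy argumentation; purely illustrative.
    SUSPECT_PATTERNS = [
        (r"\bin saying\b.{1,60}?\b(he|she|they) (is|are)\b",
         "possible value-judgement attribution"),
        (r"\b(tend to always|always tend to)\b",
         "plausibly deniable pseudo-certainty"),
        (r"\bit may be the case that\b",
         "possibly unexamined counterargument"),
    ]

    def lint(text):
        """Yield (matched phrase, warning) for each suspicious match."""
        for pattern, warning in SUSPECT_PATTERNS:
            for m in re.finditer(pattern, text, flags=re.IGNORECASE):
                yield m.group(0), warning

    sample = "In saying this he is endorsing X; such plans tend to always work."
    for phrase, warning in lint(sample):
        print(f"{warning}: {phrase!r}")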

Replies from: satt, TsviBT, moridinamael, Punoxysm
comment by satt · 2014-05-26T20:36:09.217Z · LW(p) · GW(p)

The pair "tend to always" or "always tend to". Sometimes they come off to me as a way to exploit the rhetorical force of "always" while committing only to a hedged "tend to", in which case they can condense a two-step of terrific triviality into three words. There are likely other phrases that can provide plausibly deniable pseudo-certainty but I can't think of any.

More generally, the Unix utility diction tries to pick out "frequently misused, bad or wordy diction", which is a kinda related precedent.

Replies from: sixes_and_sevens
comment by sixes_and_sevens · 2014-05-26T23:12:15.543Z · LW(p) · GW(p)

two-step of terrific triviality

When they come in the form of portentous pronouncements, Daniel Dennett calls these "deepities"; ambiguous expressions having one meaning which is trivially true but unimportant, and another that is obviously false but would be earth-shatteringly significant if it were true.

Also related in cold reading is the Rainbow Ruse.

comment by TsviBT · 2014-05-20T15:16:42.665Z · LW(p) · GW(p)

"[...]may be the case[...]"

Sometimes this phrase is harmless, but sometimes it is part of an important enumeration of possible outcomes/counterarguments/whatever. If "the case" does not come with either a solid plan/argument or an explanation why it is unlikely or not important, then it is often there to make the author and/or the audience feel like all the bases have been covered. E.g.,

We should implement plan X. It may be the case that [important weak point of X], but [unrelated benefit of X].

comment by moridinamael · 2014-05-19T18:53:27.404Z · LW(p) · GW(p)

I had the notion a while ago to write a linter to aid in tasks beyond code correctness, by automatically detecting desired features in all sorts of objects. Kudos on actually doing it, and in a not hare-brained fashion.

comment by Punoxysm · 2014-05-20T03:38:26.504Z · LW(p) · GW(p)

As a former Natural Language Processing researcher: the technology definitely exists. Using a general vocabulary combined with many (semi-manually generated) regexes to flag argumentative or weaselly sentences with decent accuracy should be doable. It could improve over time if you fed it exemplar sentences you came across.

Replies from: sixes_and_sevens
comment by sixes_and_sevens · 2014-05-20T11:38:10.445Z · LW(p) · GW(p)

Do you have a recommendation for a good language-agnostic text / reference resource on NLP?

ETA: my own background is as a professional programmer with a reasonable (undergrad) background in statistics. I've dabbled with machine learning (I'm in the process of developing this as a skill set) and messed around with Python's nltk. I'd like a broader conceptual overview of NLP.

Replies from: Punoxysm
comment by Punoxysm · 2014-05-20T20:13:48.301Z · LW(p) · GW(p)

I'd recommend this book for a general overview : http://nlp.stanford.edu/fsnlp/

However, full parsing is unnecessary for many tasks. A simple classifier on a sparse vector of word counts can be quite effective as a starting point in classifying sentence/document content.
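
A minimal sketch of that starting point with scikit-learn; the six training sentences and their labels are invented purely for illustration (real use needs far more labeled data):

    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Tiny invented training set: 1 = weaselly/argumentative, 0 = neutral.
    texts = [
        "everyone knows this always works",
        "critics tend to always miss the point",
        "it may be the case that nothing matters",
        "the measured value was 4.2 with error 0.3",
        "the train departs at nine from platform two",
        "we counted 120 samples in the first batch",
    ]
    labels = [1, 1, 1, 0, 0, 0]

    # Sparse bag-of-words counts feeding a simple linear classifier.
    model = make_pipeline(CountVectorizer(), LogisticRegression())
    model.fit(texts, labels)

    print(model.predict(["this approach always tends to work for everyone"]))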

comment by Metus · 2014-05-19T19:44:34.967Z · LW(p) · GW(p)

Apparently I don't forget ideas; they just move places in my consciousness.

In the first week of last September I mused about writing a handbook of rationality for myself, akin to how the ancient Stoics wrote handbooks for themselves. Nothing came of it; I plainly and simply forgot about it. The next week I mused about writing a book using LaTeX and git, as the git model allows many parallel versions of the book and there needs to be no canon for it to work (as opposed to a wiki), while still allowing collaboration. Now, there already is a book written with git, and writing a document with git is not a new idea at all.

Thinking about parallel legal systems or organisational forms with the explicit goal of copying the viable parts reminded me of using git to write source code. Indeed, there is no difference between writing down social rules and personal maxims under this principle, so I came to the obvious conclusion only a couple of hours ago: use git to write a handbook of rationality, and encourage other people to fork it and do their own edits, keeping the viable parts and rejecting the questionable stuff.

Actions speak louder than words, though lack of knowledge and other commitments can be an impediment, so I made a repository with just the hint of a structure. Please provide your content and your thoughts about this.

Replies from: Vaniver
comment by Vaniver · 2014-05-20T19:31:28.706Z · LW(p) · GW(p)

I think this is a good idea, and I'm curious to see how it goes. I'll be watching, and as I complete some of my other writing duties I think this has a good chance of becoming one of them.

Something else that might be interesting: this comment and the idea it's a response to in the OP.

Replies from: Metus
comment by Metus · 2014-05-20T21:03:23.285Z · LW(p) · GW(p)

Thank you for your comment. I would be very happy to see you work on this too.

At the moment I am sadly swamped but this will pass in a week or two.

Edit: Now that I've actually taken the time to read the comment, I'll dump these first thoughts. Yes, most of the advice won't apply to any single person, but the idea is to have everyone edit their own version. What I expect to see is some kind of tome with the most useful (widely applicable or extremely effective) stuff in it, along with explanations, and a shorter version that each person or group creates for themselves.

comment by Jayson_Virissimo · 2014-05-19T05:40:17.149Z · LW(p) · GW(p)

Yann LeCun, head of Facebook's AI-lab, did an AMA on /r/MachineLearning/ a few days ago. You can find the thread here.

In response to someone asking "What are your biggest hopes and fears as they pertain to the future of artificial intelligence?", LeCun responds that:

Every new technology has potential benefits and potential dangers. As with nuclear technology and biotech in decades past, societies will have to come up with guidelines and safety measures to prevent misuses of AI.

One hope is that AI will transform communication between people, and between people and machines. AI will facilitate and mediate our interactions with the digital world and with each other. It could help people access information and protect their privacy. Beyond that, AI will drive our cars and reduce traffic accidents, help our doctors make medical decisions, and do all kinds of other things.

But it will have a profound impact on society, and we have to prepare for it. We need to think about ethical questions surrounding AI and establish rules and guidelines (e.g. for privacy protection). That said, AI will not happen one day out of the blue. It will be progressive, and it will give us time to think about the right way to deal with it.

It's important to keep in mind that the arrival of AI will not be any more or any less disruptive than the arrival of indoor plumbing, vaccines, the car, air travel, the television, the computer, the internet, etc.

EDIT: I didn't see this one the first time. In response to someone asking "What do you think of the Friendly AI effort led by Yudkowsky? (e.g. is it premature? or fully worth the time to reduce the related aI existential risk?)", LeCun says that:

We are still very, very far from building machines intelligent enough to present an existential risk. So, we have time to think about how to deal with it. But it's a problem we have to take very seriously, just like people did with biotech, nuclear technology, etc.

Replies from: XiXiDu
comment by XiXiDu · 2014-05-19T08:41:07.119Z · LW(p) · GW(p)

I'd love to see a discussion between people like LeCun, Norvig, Yudkowsky and e.g. Russell. A discussion where they talk about what exactly they mean when they think about "AI risks", and why they disagree, if they disagree.

Right now I often have the feeling that many people mean completely different things when they talk about AI risks. One person might mean that a lot of jobs will be gone, or that AI will destroy privacy, while another person means something along the lines of "5 people in a basement launch a seed AI, which then turns the world into computronium". These are vastly different perceptions, and I personally find myself somewhere between those positions.

LeCun and Norvig seem to disagree that there will be an uncontrollable intelligence explosion. And I am still not sure what exactly Russell believes.

Anyway, it is possible to figure this out. You just have to ask the right questions. And this never seems to happen when MIRI or FHI talk to experts. They never specifically ask about their controversial beliefs. If you e.g. ask someone if they agree that general AI could be a risk, a yes/no answer provides very little information about how much they agree with MIRI. You'll have to ask specific questions.

Replies from: Vulture
comment by Vulture · 2014-05-23T02:02:01.891Z · LW(p) · GW(p)

Is it possible that MIRI knows privately (which is good enough for their own strategic purposes) that some of these high-profile people disagree with them on key issues, but they don't want to publicly draw attention to that fact?

comment by Daniel_Burfoot · 2014-05-19T21:09:10.584Z · LW(p) · GW(p)

Are there any math/stats/CS theory types out there who are interested in suggestions for new problems?

I am finding that my large scale lossless data compression work is generating some mathematical problems that I don't have time to solve in their full generality. I could write up the problem definition and post to LW if people are interested.

Replies from: Punoxysm, cousin_it
comment by Punoxysm · 2014-05-20T00:41:04.450Z · LW(p) · GW(p)

Sure, lay it on us. If nothing else, writing it up clearly should help you.

comment by cousin_it · 2014-05-19T22:35:50.421Z · LW(p) · GW(p)

Try posting some problems in the open threads here. MathOverflow has also worked really well for me.

comment by Viliam_Bur · 2014-05-19T10:02:45.142Z · LW(p) · GW(p)

I have a random mathematical idea, not sure what it means, whether it is somehow useful, or whether anyone has explored this before. So I guess I'll just write it here.

Imagine the most unexpected sequence of bits. What would it look like? Well, probably not what you'd expect, by definition, right? But let's be more specific.

By "expecting" I mean this: You have a prediction machine, similar to AIXI. You show the first N bits of the sequence to the machine, and the machine tries to predict the following bit. And the most unexpected sequence is one where the machine makes the most guesses wrong; preferably all of them.

More precisely: The prediction machine starts with imagining all possible algorithms that could generate sequences of bits, and it assigns them probability according to the Solomonoff prior. (Which is impossible to do in real life, because of the infinities involved, etc.) Then it receives the first N bits of the sequence, and removes all algorithms which would not generate a sequence starting with these N bits. Now it normalizes the probabilities of the remaining algorithms, and lets them vote on whether the next bit would be 0 or 1.

However, our sequence is generated in defiance of the prediction machine. We actually don't have any sequence in advance. We just ask the prediction machine what the next bit is (starting with the empty initial sequence), and then do the exact opposite. (There is some analogy with Cantor's diagonal proof.) Then we send the sequence with this new bit to the machine, ask it to predict the next bit, and again do the opposite. Etc.

There is this technical detail, that the prediction machine may answer "I don't know" if exactly half of the remaining algorithms predict that the next bit will be 0, and other half predicts that it will be 1. Let's say that if we receive this specific answer, we will always add 0 to the end of the sequence. (But if the machine thinks it's 0 with probability 50.000001%, and 1 with probability 49.999999%, it will output "0", and we will add 1 to the end of the sequence.)

So... at the beginning, there is no way to predict the first bit, so the machine says "I don't know" and the first bit is 0. At that moment, the prediction of the following bit is 0 (because the "only 0's" hypothesis is very simple), so the first two bits are 01. I am not sure here, but my next prediction (though I am predicting this with naive human reasoning, no math) would be 0 (as in "010101..."), so the first three bits are 011. -- And I don't dare to speculate about the following bits.

The exact sequence depends on how exactly the prediction machine defines the "algorithms that generate the sequence of bits" (the technical details of the language these algorithms are written in), but can something still be said about these "most unexpected" sequences in general? My guess is that to a human observer they would seem like random noise. -- Which contradicts my initial words that the sequence would not be what you'd expect... but I guess the answer is that the generation process is trying to surprise the prediction machine, not me as a human.
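
The construction as stated is uncomputable (it needs AIXI as a subroutine), but a toy version with a computable stand-in predictor shows the flavor. Here the "prediction machine" is just a majority vote of short Markov predictors - my simplification, not Solomonoff induction:

    # Toy "anti-inductive" sequence: ask a predictor for the next bit, then
    # append the opposite; on "I don't know", append 0, as described above.
    # The predictor is a majority vote of k-th order Markov predictors: each
    # predicts the bit that most often followed the current k-bit context.

    def predict(bits, max_order=4):
        votes = 0
        for k in range(min(max_order, len(bits)) + 1):
            context = bits[len(bits) - k:]
            counts = [0, 0]
            for i in range(len(bits) - k):
                if bits[i:i + k] == context:
                    counts[bits[i + k]] += 1
            if counts[0] != counts[1]:
                votes += 1 if counts[1] > counts[0] else -1
        if votes == 0:
            return None          # "I don't know"
        return 1 if votes > 0 else 0

    bits = []
    for _ in range(30):
        p = predict(bits)
        bits.append(0 if p is None else 1 - p)   # defy the prediction

    print("".join(map(str, bits)))   # tends to look pseudo-random to a human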

Replies from: Nisan, witzvo, None, Cube, Punoxysm, kpreid, DanielLC
comment by Nisan · 2014-05-19T15:53:19.254Z · LW(p) · GW(p)

In order to capture your intuition that a random sequence is "unsurprising", you want the predictor to output a distribution over {0,1} — or equivalently, a subjective probability p of the next bit being 1. The predictor tries to maximize the expectation of a proper scoring rule. In that case, the maximally unexpected sequence will be random, and the probability of the sequence will approach 2^{-n}.

Allowing the predictor to output {0, 1, ?} is kind of like restricting its outputs to {0%, 50%, 100%}.
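
(To make that concrete, here is a small sketch using the logarithmic scoring rule, one standard proper scoring rule. The adversary and the numbers are my own illustration, not anything from the original scenario.)

    import math

    def log_score(p_one, bit):
        # Reward = log2 of the probability the predictor gave the actual bit.
        return math.log2(p_one if bit == 1 else 1.0 - p_one)

    def adversary(p_one):
        # The sequence-builder picks whichever bit the predictor
        # considered less likely (ties broken toward 0).
        return 1 if p_one < 0.5 else 0

    n = 100
    # A predictor that always answers 50% takes a fixed loss of one bit
    # per symbol no matter what the adversary does...
    print(sum(log_score(0.5, adversary(0.5)) for _ in range(n)))  # -100.0
    # ...which is exactly assigning the whole sequence probability 2**-100.

    # Any committed prediction can be punished harder on that single bit:
    print(log_score(0.9, adversary(0.9)))  # log2(0.1), about -3.32

Against such a predictor, the best the adversarial construction can do is force the 50/50 answer, which is why the maximally unexpected sequence ends up looking random.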

Replies from: Viliam_Bur
comment by Viliam_Bur · 2014-05-19T18:27:33.358Z · LW(p) · GW(p)

the maximally unexpected sequence will be random

In a random sequence, AIXI would guess on average half of the bits correctly. My goal was to create a specific sequence where it couldn't guess any. Not just a random sequence, but specifically... uhm... "anti-inductive"? The exact opposite of lawful, where random is merely halfway opposed. I don't care about other possible predictors, only about AIXI.

Imagine playing rock-paper-scissors against someone who beats you all the time, whatever you do. That's worse than random. This sequence would bring the mighty AIXI to tears... but I suspect that to a human observer it would merely seem pseudo-random. And it is probably not useful for goals other than making fun of AIXI.

Replies from: Nisan, ShardPhoenix
comment by Nisan · 2014-05-20T01:53:57.486Z · LW(p) · GW(p)

Ok. I still think the sequence is random in the algorithmic information theory sense; i.e., it's incompressible. But I understand you're interested in the adversarial aspect of the scenario.

You only need a halting oracle to compute your adversarial sequence (because that's what it takes to run AIXI). A super-Solomonoff inductor that inducts over all Turing machines with access to halting oracles would be able to learn the sequence, I think. The adversarial sequence for that inductor would require a higher oracle to compute, and so on up the ordinal hierarchy.

comment by ShardPhoenix · 2014-05-20T00:12:40.040Z · LW(p) · GW(p)

Shouldn't AIXI include itself (for all inputs) recursively? If so, I don't think your sequence is well-defined.

Replies from: Nisan
comment by Nisan · 2014-05-20T01:54:55.076Z · LW(p) · GW(p)

No, AIXI isn't computable and so does not include itself as a hypothesis.

Replies from: ShardPhoenix
comment by ShardPhoenix · 2014-05-20T04:36:13.486Z · LW(p) · GW(p)

Oh, I see.

comment by witzvo · 2014-05-20T03:40:12.968Z · LW(p) · GW(p)

Just in case anyone wants pointers to existing mathematical work on "unpredictable" sequences: Algorithmically random sequences (wikipedia)

comment by [deleted] · 2014-05-19T10:36:49.170Z · LW(p) · GW(p)

My guess is that to a human observer they would seem like a random noise. -- Which contradicts my initial words that the sequence would not be what you'd expect... but I guess the answer is that the generation process is trying to surprise the prediction machine, not me as a human.

"What is the specific pattern of bits?" and "Give a vague description that applies to both this pattern and asymptotically 100% of possible patterns of bits" are very different questions. You're asking the machine the first question and the human the second question, so I'm not surprised the answers are different.

comment by Cube · 2014-05-20T16:44:55.579Z · LW(p) · GW(p)

Does "most unexpected" differ from "least predictable" in any way? Seems like a random number generator would match any algorithm around 50% of the time so making an algorithm less predictable than that is impossible no?

comment by Punoxysm · 2014-05-20T03:57:28.541Z · LW(p) · GW(p)

My prediction machine can maximize its expected minimum score by outputting random guesses. Then your bitstring is precisely the complement of my random string, and therefore drawn from the uniform random distribution.

comment by kpreid · 2014-05-20T01:41:25.357Z · LW(p) · GW(p)

or whether anyone has explored this before

I have briefly thought about this idea in the context of password selection and password crackers: the "most unexpected" string (of some maximum length) is a good password. No deep reasoning here though.

comment by DanielLC · 2014-05-19T22:55:14.057Z · LW(p) · GW(p)

I think adding a little meta-probability will help.

Since there's some probability of the sequence being "the most surprising", this basically means that several of the most surprising sequences end up with essentially the same probability. For example, if it takes n bits of data to define "the most surprising m-bit sequence", then there must be a 2^-n chance of that happening. Since there are 2^m sequences, and the most surprising sequence must have a probability of at most 2^-m, there must be at least 2^(m-n) most surprising sequences.

comment by Raythen · 2014-05-25T11:46:07.946Z · LW(p) · GW(p)

Asking "Would an AI experience emotions?" is akin to asking "Would a robot have toenails?"

There is little functional reason for either of them to have those, but they would if someone designed them that way.

Edit: the background for this comment - I'm frustrated by the way AI is represented in (non-rationalist) fiction.

Replies from: Mitchell_Porter, ChristianKl, DanielLC, XiXiDu
comment by Mitchell_Porter · 2014-05-26T07:34:10.781Z · LW(p) · GW(p)

What sort of AIs have emotions? How can I tell whether an AI has emotions?

Replies from: polymathwannabe
comment by polymathwannabe · 2014-05-26T12:58:46.047Z · LW(p) · GW(p)

Given how emotions are essential to decision-making, I'd ask what sort of AI doesn't have emotions.

Replies from: DanielLC
comment by DanielLC · 2014-06-04T00:45:48.656Z · LW(p) · GW(p)

I'd say that a chess-playing program does not have emotions, and a norn does.

comment by ChristianKl · 2014-05-28T05:23:01.764Z · LW(p) · GW(p)

I think you are plain wrong.

There is a lot of thought in AI development about mimicking human neural decision-making processes, and it's quite possible that the first human-level AGI will be similar in structure to human decision making. Emotions are a core part of how humans make decisions.

Replies from: Raythen
comment by Raythen · 2014-05-28T08:53:32.888Z · LW(p) · GW(p)

I should probably make clear that most of my knowledge of AI comes from LW posts, I do not work with it professionally, and that this discussion is on my part motivated by curiosity and desire to learn.

Emotions are a core part of how humans make decisions.

Agreed.


Your assessment is probably more accurate than mine.

My original line of thinking was that while AIs might use quick-and-imprecise thinking shortcuts triggered by pattern-matching (which is sort of how I see emotions), human emotions are too inconveniently packaged to be of much use in AI design. (While being necessary, they also misfire a lot; coping with emotions is an important skill to learn; in some situations emotions do more harm than good; all in all this doesn't seem like good mind design.) So I was wondering whether whatever an AI uses for its thinking, we would even recognize as emotions.

My assessment now is that even if AI uses different thinking shortcuts than humans do, they might still misfire. For example, I can imagine a pattern activation triggering more patterns, which in turn trigger more and more patterns, resulting in a cascade effect not unlike emotional over-stimulation/breakdown in humans.
So I think it's possible that we might see AI having what we would describe as emotions (perhaps somewhat uncanny emotions, but emotions all the same).


P.S. For the sake of completeness: my mental model also includes biological organisms needing emotions in order to create motivation (rather than just drawing conclusions) -- for example, fear creating the motivation to escape danger.
An AI should already have a supergoal, so it does not need "motivation". However, it would need to see how its current context connects to its supergoal, and to create/activate subgoals that apply to the current situation; here once again thinking shortcuts might be useful, perhaps not too unlike human emotions.

Example: the AI sees a fast-moving object that it predicts will intersect its current location, and a thinking shortcut activates a dodging strategy. This is a subgoal to the goal of surviving, which in turn is a subgoal to the AI's supergoal (whatever that is).

Having a thinking shortcut (this one we might call "reflex" rather than "emotion") results in faster thinking. Slow thinking might be inefficient to the point of being fatal: "Hm... that object seems to be moving mighty fast in my direction... if it hits me it might damage/destroy me. Would that be a good thing? No, I guess not - I need to be functional in order to achieve my supergoal. So I should probably dodg..."
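
(A toy sketch of that reflex-vs-deliberation split; every name in it is made up for illustration.)

    import time

    REFLEXES = {
        # percept pattern -> canned response, no reasoning involved
        "fast_object_approaching": "dodge",
        "low_battery": "seek_charger",
    }

    def deliberate(percept):
        time.sleep(1.0)  # stands in for expensive planning from the supergoal
        return "considered_response_to_" + percept

    def act(percept):
        if percept in REFLEXES:
            return REFLEXES[percept]  # fast path
        return deliberate(percept)    # slow, possibly fatally slow, path

    print(act("fast_object_approaching"))  # "dodge", immediately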

Replies from: ChristianKl
comment by ChristianKl · 2014-05-28T12:28:47.489Z · LW(p) · GW(p)

AI should already have a supergoal so it does not need "motivation".

We know relatively little about what it takes to create an AGI. Saying that an AGI should have feature X or feature Y to be a functioning AGI is drawing too many conclusions from the data we have.

On the other hand, we know that the architecture on which humans run produces "intelligence", so that's at least one possible architecture that could be implemented in a computer.

Bootstrapping AGI from Whole Brain Emulations is one of the ideas under discussion even on LessWrong.

comment by DanielLC · 2014-06-04T00:44:20.154Z · LW(p) · GW(p)

Define "emotion".

I find it highly unlikely robots would have anything corresponding to any given human emotion, but if you just look at the general area in thingspace that emotions are in, and you're perfectly okay with the idea of finding a new one, then it would be perfectly reasonable for robots to have emotions. For one thing, general negative and positive emotions would be pretty important for learning.

comment by XiXiDu · 2014-05-26T08:07:45.112Z · LW(p) · GW(p)

I have never thought about this, so this is a serious question. Why do you think evolution resulted in beings with emotions, and what makes you confident enough that emotions are unnecessary for practical agents that you end up frustrated by the depiction of emotional AIs, created by emotional beings, in SF stories?

From Wikipedia:

Emotion is often the driving force behind motivation, positive or negative. An alternative definition of emotion is a "positive or negative experience that is associated with a particular pattern of physiological activity."

...cognition is an important aspect of emotion, particularly the interpretation of events.

Let's say the AI in your story becomes aware of an imminent and unexpected threat and allocates most resources to dealing with it. This sounds like fear. The rest is semantics. Or how exactly would you tell that the AI is not in fear? I think we'll quickly come up against the hard problem of consciousness here, and the question of whether consciousness is an important feature for agents to possess. And I don't think one can be confident enough about this issue to become frustrated about a science fiction author using emotional terminology to describe the AIs in their story (a world in which AIs have "emotions" is not too absurd).

comment by NancyLebovitz · 2014-05-23T15:59:08.978Z · LW(p) · GW(p)

Slatestarcodex isn't loading for me. It's obviously loading for other people -- I'm getting email notifications of comments. I use Chrome.

Anyone have any idea what the problem might be?

Replies from: Yvain, CAE_Jones, Randy_M, Nornagest, Error, jaime2000, Oscar_Cunningham
comment by Scott Alexander (Yvain) · 2014-05-24T02:48:38.872Z · LW(p) · GW(p)

It wasn't working for me either all day. A few hours ago it mysteriously reappeared. I called tech support. They said they had no explanation.

It should be up again now. I will investigate better hosting solutions.

comment by CAE_Jones · 2014-05-23T19:03:50.779Z · LW(p) · GW(p)

It's not loading for me, either; I'm getting my ISP's "website suggestions" page, which tells me it's probably a DNS issue (this page theoretically only shows up when a domain name is not registered).

I wound up googling the URL in the "Recent on Rationality Blogs" sidebar, and was able to read Google's cache of the latest post. Said cache includes no comments. I did not try to comment from the cached page (I didn't expect it to work).

[edit: This is with Firefox.]

comment by Randy_M · 2014-05-23T18:20:58.651Z · LW(p) · GW(p)

Working on my cheap mobile phone, not on my new laptop with IE. Which is a shame, because it's a very good post, but I'm going to be too far behind to contribute to any comment threads.

edit: A shame for me, I mean, not for the observer concerned with signal-to-noise ratio.

comment by Nornagest · 2014-05-23T17:23:22.921Z · LW(p) · GW(p)

It's down for me, too. Ping is failing to resolve the address, so I think we're looking at a DNS issue.

comment by Error · 2014-05-23T17:20:04.691Z · LW(p) · GW(p)

On my Firefox it works fine. If it's loading for everyone else and not you, some things you might look at: See if you can ping the site, and see if it works under a clean browser profile. I'm not sure how to get one on Chrome but I'm sure there's a way.

You might also post whatever error message you're getting, if any. "Not loading" covers a fairly broad range of behavior.

[Edit: It worked when I was at work, but does not work at home. And yes, it looks like a DNS issue.]
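
(If anyone wants to test the DNS hypothesis without a browser or ping, a snippet like this works; slatestarcodex.com is just the obvious hostname to try.)

    import socket

    try:
        # A successful lookup prints an IP; the browser is then the suspect.
        print(socket.gethostbyname("slatestarcodex.com"))
    except socket.gaierror as err:
        # Failure here means name resolution itself is broken.
        print("DNS lookup failed:", err)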

comment by jaime2000 · 2014-05-23T16:49:19.575Z · LW(p) · GW(p)

Using Chrome as well, not having a problem. Have you e-mailed Yvain at the reverse of gro.htorerihs@ttocs?

comment by Oscar_Cunningham · 2014-05-23T16:45:39.826Z · LW(p) · GW(p)

It's not loading for me either, nor for downforeveryoneorjustme.com. I use Firefox.

comment by [deleted] · 2014-05-19T06:42:48.476Z · LW(p) · GW(p)

The OpenWorm Kickstarter ends in a few hours, and they're almost to their goal! Pitch in if you want to help fund the world's first uploads.

Replies from: MathiasZaman
comment by MathiasZaman · 2014-05-20T09:26:43.756Z · LW(p) · GW(p)

Update: They made it.

comment by Leonhart · 2014-05-23T14:40:28.876Z · LW(p) · GW(p)

ETA: Problems solved, LW is amazing, love you all, &c.

I am in that annoying state where I vaguely recall the shape of a concept, but can't find the right search terms to let me work out what it was I originally read. Does anyone recognise either of the things below?

  • a business-ethics test-like-thing where someone left confectionery and a donation box out unsupervised, and then looked at who paid, in some form.

(One of the many situations where googling "morality of doughnuts" doesn't help much)

  • a survey-design concept where instead of asking people "do you do x", you ask them "do you think your co-workers do x" and that is taken as more representative, or used to debias the first answer; or, um, something.

Any help appreciated!

Replies from: Unnamed, None
comment by Unnamed · 2014-05-23T18:09:09.285Z · LW(p) · GW(p)

Bayesian Truth Serum

Replies from: Leonhart
comment by Leonhart · 2014-05-24T12:06:16.330Z · LW(p) · GW(p)

The very thing. Thank you!

comment by [deleted] · 2014-05-23T15:08:11.054Z · LW(p) · GW(p)

I wonder if you aren't thinking of this bagel vendor.

Replies from: Leonhart
comment by Leonhart · 2014-05-23T15:20:36.214Z · LW(p) · GW(p)

Bingo. Thank you!

comment by witzvo · 2014-05-21T17:24:24.577Z · LW(p) · GW(p)

[Link] why do people persist in believing things that just aren't true

Replies from: Vaniver, satt
comment by Vaniver · 2014-05-22T17:30:48.426Z · LW(p) · GW(p)

The square brackets are greedy. What you want to do is this:

\[Link\]: [Why do people persist in believing things that just aren't true?](http://www.newyorker.com/online/blogs/mariakonnikova/2014/05/why-do-people-persist-in-believing-things-that-just-arent-true.html?utm_source=www&utm_medium=tw&utm_campaign=20140519&mobify=0)

which looks like:

[Link]: Why do people persist in believing things that just aren't true?

Replies from: witzvo
comment by witzvo · 2014-05-24T04:10:19.674Z · LW(p) · GW(p)

fixed. Thanks.

comment by satt · 2014-05-28T03:11:46.617Z · LW(p) · GW(p)

This bit of the article jumped out at me:

But, when the researchers took a closer look, they found that the only people who had changed their views were those who were ideologically predisposed to disbelieve the fact in question. If someone held a contrary attitude, the correction not only didn’t work—it made the subject more distrustful of the source. [...] If information doesn’t square with someone’s prior beliefs, he discards the beliefs if they’re weak and discards the information if the beliefs are strong.

As unfortunate as this may be, even perfect Bayesians would reason similarly; Bayes's rule essentially quantifies the trade-off between discarding new information and discarding your prior when the two conflict. (Which is one way in which Bayesianism is a theory of consistency rather than of simple correctness.)
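
(A quick numeric sketch of that trade-off, with made-up numbers: the same likelihood ratio barely dents a strong prior but flips a weak one.)

    def posterior(prior, likelihood_ratio):
        # Bayes' rule in odds form: posterior odds = prior odds * LR.
        odds = (prior / (1.0 - prior)) * likelihood_ratio
        return odds / (1.0 + odds)

    lr = 1.0 / 9.0  # the correction is 9x more likely if the belief is false

    print(posterior(0.50, lr))  # weak prior: falls to 0.10 -- belief discarded
    print(posterior(0.99, lr))  # strong prior: still ~0.92 -- evidence discounted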

comment by Error · 2014-05-25T02:26:54.069Z · LW(p) · GW(p)

Probably too late, but: I have the impression there's a substantial number of anime fans here. Are there any lesswrongers at or near MomoCon (taking place in downtown Atlanta this weekend) who are interested in meeting up?

comment by iarwain1 · 2014-05-21T14:42:24.459Z · LW(p) · GW(p)

What's the best way to learn programming from a fundamentals-first perspective? I've taken / am taking a few introductory programming courses, but I keep feeling like I've got all sorts of gaps in my understanding of what's going on. The professors keep throwing out new ideas and functions and tools and terms without thoroughly explaining how and why they work. If someone has a question, the approach is often, "so google it or look in the help file". But my preferred learning style is to go back to the basics and carefully work my way up, so that I thoroughly understand what's going on at each step along the way.

Replies from: Antiochus, None
comment by Antiochus · 2014-05-21T15:08:15.982Z · LW(p) · GW(p)

This might be counter-intuitive and impractical for self-teaching, but for me it was an assembly language course that made how things work behind the scenes 'click'. It doesn't have to be much and you'll probably never use it again, but the concepts will help your broader understanding.

If you can be more specific about which parts baffle you, I might be able to recommend something more useful.

Replies from: iarwain1
comment by iarwain1 · 2014-05-21T15:32:03.742Z · LW(p) · GW(p)

Nothing in particular baffles me. I can get through the material pretty fine. It's just that I prefer starting from a solid and thorough grasp of all the fundamentals and working on up from there, rather than jumping head-first into the middle of a subject and then working backwards to fill in any gaps as needed. I also prefer understanding why things work rather than just knowing that they do.

Replies from: Lumifer, TylerJay
comment by Lumifer · 2014-05-21T15:49:32.472Z · LW(p) · GW(p)

It's just that I prefer starting from a solid and thorough grasp of all the fundamentals

Which fundamentals do you have in mind? There are multiple levels of "fundamentals" and they fork, too.

For example, the "physical execution" fork will lead you to delving into assembly language and basic operations that processors perform. But the "computer science" fork will lead you into a very different direction, maybe to LISP's lambdas and ultimately to things like the Turing machine.

Replies from: iarwain1
comment by iarwain1 · 2014-05-21T16:03:46.491Z · LW(p) · GW(p)

Whatever fundamentals are necessary to understand the things that I'm likely to come across while programming (I'm hoping to go into data science, if that makes a difference). I don't know enough to know which particular fundamentals are needed for this, so I guess that's actually part of the question.

Replies from: Lumifer
comment by Lumifer · 2014-05-21T16:36:45.310Z · LW(p) · GW(p)

Well, if you'll be going into data science, it's unlikely that you will care greatly about the particulars of the underlying hardware. This means the computer-science branch is more useful to you than the physical-execution one.

I am still not sure what kind of fundamentals you want. The issue is that the lowest abstraction level is trivially simple: you have memory which can store and retrieve values (numbers, basically), and you have a processing unit which understands sequences of instructions for doing logical and mathematical operations on those values. That's it.

The interesting parts, and the ones from which understanding comes (IMHO) are somewhat higher in the abstraction hierarchy. They are often referred to as programming language paradigms.

The major paradigms are imperative (Fortran, C, Perl, etc.), functional (LISP), logical (Prolog), and object-oriented (Smalltalk, Ruby).

They are noticeably different in that writing non-trivial code in different paradigms requires you to... rearrange your mind in particular ways. The experience is often described as a *click*, an "oh, now it all makes sense" moment.
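
(A toy illustration of two of those paradigms, using Python since it supports both; the task is trivial on purpose -- sum the squares of the even numbers below 10 -- so only the shape of the code differs.)

    # Imperative: a recipe of steps that mutate state.
    total = 0
    for n in range(10):
        if n % 2 == 0:
            total += n * n
    print(total)  # 120

    # Functional: one expression built by composing functions, no mutation.
    from functools import reduce
    print(reduce(lambda acc, n: acc + n * n,
                 (n for n in range(10) if n % 2 == 0),
                 0))  # 120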

Replies from: iarwain1
comment by iarwain1 · 2014-05-21T17:26:01.636Z · LW(p) · GW(p)

I guess a good starting point might be: Where do I go to learn about each of the different paradigms? Again, I'd like to know the theory as well as the practice.

Replies from: Lumifer
comment by Lumifer · 2014-05-21T17:31:39.620Z · LW(p) · GW(p)

Google is your friend. You can start e.g. here or here.

comment by TylerJay · 2014-05-22T23:17:08.415Z · LW(p) · GW(p)

I understand what you mean here, but in programming it sometimes makes sense to do things this way. For example, in my introduction to programming course, I used Dictionaries/Hashes to write some programs. Key-value pairs are important for writing certain types of simple programs, but I didn't really understand how they worked. Almost a year later, I took an algorithms course, learned about hash functions and hash maps, and finally understood how they worked. It wouldn't have made sense to refrain from using this tool until I'd learned how to implement it, and it was really rewarding to finally understand it.

I always like to learn things from the ground up too, but this way just works sometimes in programming.
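
(For anyone curious what that later course reveals, here is a stripped-down sketch of the idea -- separate chaining, no resizing, so a toy rather than a real implementation.)

    class ToyHashMap:
        def __init__(self, n_buckets=8):
            self.buckets = [[] for _ in range(n_buckets)]

        def _bucket(self, key):
            # hash() maps the key to an integer; modulo picks a bucket.
            return self.buckets[hash(key) % len(self.buckets)]

        def set(self, key, value):
            bucket = self._bucket(key)
            for i, (k, _) in enumerate(bucket):
                if k == key:
                    bucket[i] = (key, value)  # overwrite an existing key
                    return
            bucket.append((key, value))

        def get(self, key):
            for k, v in self._bucket(key):
                if k == key:
                    return v
            raise KeyError(key)

    m = ToyHashMap()
    m.set("apples", 3)
    print(m.get("apples"))  # 3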

comment by [deleted] · 2014-05-22T13:07:01.430Z · LW(p) · GW(p)

Could you give a couple examples of specific things that you'd like to understand?

Without that, a classic that might match what you're interested in is Structure and Interpretation of Computer Programs. It starts as an introduction to general programming concepts and ends as an introduction to writing interpreters.

Replies from: iarwain1
comment by iarwain1 · 2014-05-22T14:40:34.241Z · LW(p) · GW(p)

I've been having a bit of a hard time coming up with specifics, because it's more a general sense that I'm lacking a lot of the basics. Like the professor will say something and it'll obliquely reference a concept that he seems to expect I'm familiar with, but I have no idea what he's referring to. So then I look it up on Wikipedia and the article mentions 10 other basic-sounding concepts that I've never heard of either. Or for example when the programming assignment uses a function that I don't know how to use yet. So I do the obvious thing of googling for it or looking it up in the documentation. But the documentation is referencing numerous concepts that I have only a vague idea of what they mean, so that I often only get a hazy notion of what the function does.

After I made my original post I looked around for a while on sites like Quora. I also took a look at this reddit list. The general sense I got was that to learn programming properly you should go for a thorough computer science curriculum. Do you agree?

The suggestion was to look up university CS degree curricula and then look around for equivalent MOOCs / books / etc. to learn it on my own. So I looked up the curricula. But most of the universities I looked at said to start out with an introductory programming language course, which is what I was doing before anyway. I've taken intro courses in Python and R, and I ran into the problems I mentioned above. The MITx Python course that I took was better on this score, but still not as good as I would have hoped. There are loads of resources out there for learning either of those languages, but I don't know how to find which ones fit my learning style. Maybe I should just try out each until I find one that works for me?

The book you mentioned kept coming up as well. That book was created for MIT's Intro to CS course, but MIT itself has since replaced the original course with the Python course that I took (I took the course on edX, so probably it's a little dumbed-down, but my sense was that it's pretty similar to the regular course at MIT). On the other hand, looking at the book's table of contents it looks like the book covers several topics not covered in the class.

There were also several alternative books mentioned:

Any thoughts on which is the best choice to start off with?

Replies from: cata
comment by cata · 2014-05-22T17:42:01.372Z · LW(p) · GW(p)

If you want a fundamentals-first perspective, I definitely suggest reading SICP. I think the Python course may have gone in a slightly different direction (I never looked at it) but I can't think of how you could get more fundamentals-first than the book.

Afterward, I suggest out of your list Concepts, Techniques, and Models of Computer Programming. That answers your question of "where do I go to learn about each of the different paradigms."

This is more background than you will strictly need to be a useful data scientist, but if you find it fun and satisfying to learn, then it will only be helpful.

comment by Raythen · 2014-05-21T13:24:25.398Z · LW(p) · GW(p)

Is there a way to get email notifications on receiving new messages or comments? I've looked under preferences, and I can't find that option.

comment by Markas · 2014-05-20T00:33:39.697Z · LW(p) · GW(p)

I buy a lot of berries, and I've heard conflicting opinions on the health risks of organic vs. regular berries (and produce in general). My brief Google research seems to indicate that there's little additional risk, if any, from non-organic produce, but if anyone knows more about the subject, I'd appreciate some evidence.

Replies from: Punoxysm, oru
comment by Punoxysm · 2014-05-20T03:35:04.093Z · LW(p) · GW(p)

Without citation: minimal "organic" labeling standards often aren't a very high or impressive barrier to clear.

comment by oru · 2014-05-21T06:44:29.155Z · LW(p) · GW(p)

Another suggestion (also without citation): It may also be worth evaluating risks associated with certain pesticides common to a crop or region and related externalities (the effect on local food chains). Also, when exposure involves more than one chemical, the overall risk assessment becomes more involved.

comment by JoshuaFox · 2014-05-19T10:21:51.651Z · LW(p) · GW(p)

If you could magically stop all human-on-human violence, or stop senescence (aging) for all humans, which would it be?

Replies from: Metus, DanArmak, Tenoke, Lumifer, Jayson_Virissimo, Benito, Oscar_Cunningham
comment by Metus · 2014-05-19T10:39:12.442Z · LW(p) · GW(p)

The latter. The former is already decreasing at an incredible speed, but I see no such trend for the latter.

comment by DanArmak · 2014-05-19T18:15:28.524Z · LW(p) · GW(p)

I'm much more likely to die of aging than of violence, so I'd rather stop aging.

This seems to generalize well to the rest of humanity. I am surprised that most others who replied disagree. ISTM that most existential risks are not due to deliberate violence, but rather to unintended consequences.

comment by Tenoke · 2014-05-19T12:42:44.768Z · LW(p) · GW(p)

The former is a major existential risk, while the latter is probably going to be solved soon(er), so the former.

Replies from: JoshuaFox
comment by JoshuaFox · 2014-05-19T13:17:37.691Z · LW(p) · GW(p)

Good point! Then again, a lot of the existential risks we talk about have to do with accidental extinction, not caused by aggression per se.

comment by Lumifer · 2014-05-23T15:30:09.292Z · LW(p) · GW(p)

If you could magically stop all human-on-human violence, or stop senescence (aging) for all humans, which would it be?

All governments rely on an implicit or explicit threat of force, of "human-on-human violence".

If no one can apply violence to me why should I pay any taxes, or, more crudely, pay for that apple I just grabbed off a street stall?

comment by Jayson_Virissimo · 2014-05-20T03:56:04.871Z · LW(p) · GW(p)

Ending aging would almost certainly greatly diminish human-on-human violence, since increasing expected lifespans would lower time preference. Right?

Replies from: Kaj_Sotala, JoshuaFox
comment by Kaj_Sotala · 2014-05-21T07:07:43.574Z · LW(p) · GW(p)

I don't think it works that way. Currently most human-on-human violence is committed by young people (specifically young men), who by this logic should have the lowest time preference, since they can expect to have the most years left to live.

Replies from: army1987
comment by A1987dM (army1987) · 2014-05-21T09:55:33.218Z · LW(p) · GW(p)

So, depending on how much of this decrease in violence with age is biological and how much is memetic, stopping aging (assuming it would lead to a large drop in the birth rate) may increase or decrease the total violence in the long run (as the chronological age of the population increases but its biological age decreases).

Replies from: NancyLebovitz
comment by NancyLebovitz · 2014-05-21T12:19:12.424Z · LW(p) · GW(p)

It would also depend on how anti-aging works. Suppose that every stage of life is made longer. If young male violence is mostly biological, then some young men would be violent for a few more years.

comment by JoshuaFox · 2014-05-20T07:46:45.080Z · LW(p) · GW(p)

Then again, if you had more to lose, maybe that would increase your incentive to protect yourself by getting the other guy before he gets you.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2014-05-20T15:58:33.132Z · LW(p) · GW(p)

I would assume there's a sorting effect-- people would tend to figure out eventually that it's better to live among low-violence people.

One big question is... ok, we want anti-aging, but what age do you aim for? 17 has some advantages, but how about 25? 35? 50?

Replies from: lmm
comment by lmm · 2014-05-20T18:25:25.066Z · LW(p) · GW(p)

I've read that cell death overtakes cell division at around 35, so perhaps a body in some longer-term equilibrium condition would look 35?

(I suspect that putting a single age on is too crude though. The optimal age for a set of lungs may not be the same as that for a liver)

Replies from: NancyLebovitz
comment by NancyLebovitz · 2014-05-20T20:03:56.031Z · LW(p) · GW(p)

Optimal age is also relative to what you want to do-- different mental abilities peak at wildly different ages. If you stabilize your body at age 25 and then live to be 67 (edited-- was 53), will your verbal ability increase as much as if you let yourself age to 67?

Athletic abilities don't all peak at the same time, either. Strength doesn't peak at the same time as strength-to-weight ratio. Would you rather be a weightlifter or a gymnast? I believe coordination peaks late-- how do you feel about dressage?

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2014-05-20T21:34:49.022Z · LW(p) · GW(p)

Optimal age is also relative to what you want to do-- different mental abilities peak at wildly different ages. If you stabilize your body at age 25 and then live to be 53, will your verbal ability increase as much as if you let yourself age to 67?

Staying physically 25 doesn't mean you have to stop learning or physically developing. Surely the development of abilities in adult life is the result of exercising body and mind over the years, not part and parcel of senescence?

Replies from: NancyLebovitz
comment by NancyLebovitz · 2014-05-20T21:38:54.667Z · LW(p) · GW(p)

Surely the development of abilities in adult life is the result of exercising body and mind over the years, not part and parcel of senescence?

I don't think we know. I have no idea why verbal ability would peak so late, so I don't know whether brain changes associated with aging are part of the process.

comment by Ben Pace (Benito) · 2014-05-20T16:40:40.688Z · LW(p) · GW(p)

My problem with these questions is that it sorta gets difficult quickly. If you stopped aging today, I imagine there would very quickly be overpopulation issues and many old patients in hospitals wouldn't die, etc., and yet I am finding it difficult to think of major issues with the ending of violence (boxing champions would be out of a job). And even now, I'm sure someone's thought of a counter-example, and then the discussion would be harder. And so even though I think that aging is more important than violence as a focus, the question asks a hypothetical that is never going to occur (being able to just make that decision, I mean) and takes us away from reality into the nitty-gritty of a literal non-problem.

Why did you ask?

Edit: I didn't mean to make a case for either side, I was trying to suggest that the question itself seems unhelpful. We'll end up with a complicated technical discussion which is unlikely to have any practical value.

Replies from: Kaj_Sotala, JoshuaFox, polymathwannabe
comment by Kaj_Sotala · 2014-05-21T07:23:11.523Z · LW(p) · GW(p)

If you stopped aging today, I imagine there would very quickly be overpopulation issues

To give a sense of proportion: suppose that tomorrow, we developed literal immortality - not just an end to aging, but also prevented anyone from dying from any cause whatsoever. Further suppose that we could make it instantly available to everyone, and nobody would be so old as to be beyond help. So the death rate would drop to zero in a day.

Even if this completely unrealistic scenario were to take place, the overall US population growth would still only be about half of what it was during the height of the 1950s baby boom! Even in such a completely, utterly unrealistic scenario, it would still take around 53 years for the US population to double - assuming no compensating drop in birth rates in that whole time.

DR. OLSHANSKY: [...] I did some basic calculations to demonstrate what would happen if we achieved immortality today. And I compared it with growth rates for the population in the middle of the 20th Century. This is an estimate of the birth rate and the death rate in the year 1000, birth rate roughly 70, death rate about 69.5. Remember when there's a growth rate of 1 percent, very much like your money, a growth rate of 1 percent leads to a doubling time at about 69 to 70 years. It's the same thing with humans. With a 1 percent growth rate, the population doubles in about 69 years. If you have the growth rate — if you double the growth rate, you have the time it takes for the population to double, so it's nothing more than the difference between the birth rate and the death rate to generate the growth rate. And here you can see in 1900, the growth rate was about 2 percent, which meant the doubling time was about five years. During the 1950s at the height of the baby boom, the growth rate was about 3 percent, which means the doubling time was about 26 years. In the year 2000, we have birth rates of about 15 per thousand, deaths of about 10 per thousand, low mortality populations, which means the growth rate is about one half of 1 percent, which means it would take about 140 years for the population to double.

Well, if we achieved immortality today, in other words, if the death rate went down to zero, then the growth rate would be defined by the birth rate. The birth rate would be about 15 per thousand, which means the doubling time would be 53 years, and more realistically, if we achieved immortality, we might anticipate a reduction in the birth rate to roughly ten per thousand, in which case the doubling time would be about 80 years. The bottom line is, is that if we achieved immortality today, the growth rate of the population would be less than what we observed during the post World War II baby boom.
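
(The arithmetic behind those doubling times is just compound growth; a quick sketch below. The exact figures in the talk differ slightly from these computed values, presumably because it used rounded rules of thumb or slightly different rates.)

    import math

    def doubling_time(births_per_1000, deaths_per_1000):
        # Annual growth rate from crude birth and death rates.
        r = (births_per_1000 - deaths_per_1000) / 1000.0
        return math.log(2) / math.log(1.0 + r)

    print(doubling_time(15, 10))  # year-2000 rates: ~139 years (talk: ~140)
    print(doubling_time(15, 0))   # zero deaths, births unchanged: ~47 years
    print(doubling_time(10, 0))   # zero deaths, lower birth rate: ~70 years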

comment by JoshuaFox · 2014-05-20T17:57:21.665Z · LW(p) · GW(p)

[the question] sorta gets difficult

Sure does!

boxing champions would be out of a job

I don't count that as violence -- it is consensual (and there's a modicum of not-always-successful effort to prevent permanent harm).

overpopulation

This has been discussed at great depth and refuted, e.g. by Max More and de Grey.

Why did you ask?

No particular reason: every now and then a thought comes to mind.

comment by polymathwannabe · 2014-05-20T18:37:05.684Z · LW(p) · GW(p)

If you take into account the risk of permanent brain damage, boxing (as well as rugby/football) is sacrificeable.

Replies from: JoshuaFox
comment by JoshuaFox · 2014-05-23T06:40:14.826Z · LW(p) · GW(p)

Never did any of those myself, but I think that being consensual, they don't count as violence.

Replies from: polymathwannabe
comment by polymathwannabe · 2014-05-23T15:20:23.970Z · LW(p) · GW(p)

It's complicated. Power dynamics at school and at home, as well as joblessness in some countries, may make a sports career less than voluntary.

comment by Oscar_Cunningham · 2014-05-19T16:06:10.781Z · LW(p) · GW(p)

The former. Stopping ageing without giving us time to prepare for it would cause all sorts of problems in terms of increasing population. Whereas stopping violence would accelerate progress no end (if only for the resources it freed up).

Replies from: NancyLebovitz
comment by NancyLebovitz · 2014-05-19T17:11:24.101Z · LW(p) · GW(p)

Stopping aging (preferably, reversing aging) would also free up a lot of resources.

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2014-05-21T07:14:38.334Z · LW(p) · GW(p)

On that note, a 2006 article in The Scientist argues that simply slowing aging by seven years would produce large enough of an economic benefit to justify the US investing three billion dollars annually to this research. One excerpt:

Take, for instance, the impact of just one age-related disorder – Alzheimer disease (AD). For no other reason than inevitable shifting demographics, the number of Americans stricken with AD will rise from 4 million today to as many as 16 million by mid-century.4 This means there will be more people with AD in the US by 2050 than the entire current population of Australia. Globally, AD prevalence is expected to rise to 45 million by 2050, with three of every four AD patients living in a developing nation.5 The US economic toll is currently $[80 - 100] billion, but by 2050 more than $1 trillion will be spent annually on AD and related dementias. The impact of this single disease will be catastrophic, and this is just one example.

Cardiovascular disease, diabetes, cancer, and other age-related problems account for billions of dollars siphoned away for “sick care.” Imagine the problems in many developing nations where there is little or no formal training in geriatric health care. For instance, in China and India the elderly will outnumber the total current US population by mid-century. The demographic wave is a global phenomenon that appears to be leading health care financing into an abyss.

comment by Gunnar_Zarncke · 2014-05-28T20:24:56.725Z · LW(p) · GW(p)

What do you think about using visualizations for giving "rational" advice in a compact form?

Case in point: I just stumbled over relationtips by informationisbeautiful and thought: That is nice.

This also reminds me of the efficiency of visual presentation explained in The Visual Display of Quantitative Information by Tufte.

And I wonder how I might quote these in the Quotes Thread...

Replies from: Oscar_Cunningham
comment by Oscar_Cunningham · 2014-05-29T12:33:31.081Z · LW(p) · GW(p)

Modern "infographics" like those by informationisbeautiful are extremely often terrible in exactly the ways that Tufte warns against. They are often beautiful, but rarely excel at their original purpose of displaying data.

Replies from: Gunnar_Zarncke
comment by Gunnar_Zarncke · 2014-05-29T16:37:10.268Z · LW(p) · GW(p)

I agree that many infographics (the contraction is the first hint) are often more beautiful than informational.

But yours was a general remark. Did you mean it to imply that the idea isn't good, or just my particular example?

Replies from: Oscar_Cunningham
comment by Oscar_Cunningham · 2014-05-30T10:43:16.161Z · LW(p) · GW(p)

I think that good graphics illustrating some point about rationality would be a really cool thing to have in the quotes thread.

comment by JoshuaFox · 2014-05-23T13:24:50.908Z · LW(p) · GW(p)

Could someone write a Wiki article on Updateless Decision Theory? I'm looking for an article that is not too advanced and not too basic, and I think that a typical wiki article would be just right.

comment by niceguyanon · 2014-05-23T09:40:25.757Z · LW(p) · GW(p)

I find using a chess timer in conjunction with Pomodoros helpful in restricting break-time overflow. Tracking work vs. break time via the chess timer motivates me to keep the ratio in check. It is also satisfying to get your "score" up; a high work-to-break ratio at the end of a session feels good.

comment by CronoDAS · 2014-05-23T03:44:00.607Z · LW(p) · GW(p)

(Inspired by sci-fi story)

A new intelligence-enhancing brain surgery has just been developed. In accordance with Algernon's Law, it has a severe drawback: your brain loses its ability to process sensory input from your eyes, causing you to go blind.

How big of an intelligence increase would it take before you'd be willing to give up your eyesight?

Replies from: Alicorn
comment by Alicorn · 2014-05-23T05:04:24.400Z · LW(p) · GW(p)

Enough to make learning Braille, meaningfully improving existing screenreader software (if I don't care for it), and figuring out how to echolocate into relatively short-term projects, so that I could move on to other things instead of spending forever trying to reconstruct the anatomy of my life.

I almost said "enough to be able to route around this drawback somehow", but no, I don't think it's quite that dire.

comment by witzvo · 2014-05-21T20:06:06.487Z · LW(p) · GW(p)

I notice that I have a hard time getting myself to make decisions when there are tradeoffs to be made. I think this is because it's really emotionally painful for me to face actually choosing to accept one or another of the flaws. When I face making such a decision, often, the "next thing I know" I'm procrastinating or working on other things, but specifically I'm avoiding thinking about making the decision. Sometimes I do this when, objectively, I'd probably be better off rolling a die and getting on with one of the choices, but I can't get myself to do that either. If it's relevant, I'm bad at planning generally. Any suggestions?

Replies from: wadavis, Torello
comment by wadavis · 2014-05-22T15:01:23.379Z · LW(p) · GW(p)

Spend some time deciding if decisiveness is a virtue. Dwell on it until you've convinced yourself that decisiveness is good, and have come to terms with the fact that you are not decisive. Around here it may be tempting to label decisiveness as rash, or as not worth the work of changing, and to rationalize your behavior; if so, return to step one and reaffirm that you think it is good to be decisive. Now step outside your comfort zone and practice being decisive: practice at the restaurant, at work, doing chores. Have reminders to practice; set your desktop or phone background to "Be Decisive" in plain text (or whatever suits your esthetic tastes). Pick a role model who takes decisive action. After following these steps, you will have practiced making decisions and following through on them, and you will have decided that making a choice and not dwelling on it is a virtue. Now you can update your image of yourself as a decisive person. From there it should be self-sustaining.

Replies from: RolfAndreassen, witzvo
comment by RolfAndreassen · 2014-05-24T07:01:24.461Z · LW(p) · GW(p)

Dwell on it until you've convinced yourself that decisiveness is good

Or not, as the case may be! And then there's the possibility that more data is needed.

comment by witzvo · 2014-05-24T04:56:39.966Z · LW(p) · GW(p)

Whoa. Fascinating! Thanks! I really like the idea of this approach. I'm, ironically, not sure I'm decisive enough to decide that decisiveness is a virtue, but this is worth thinking about. Where should I go to read more about the general idea that if I can decide that something is a virtue and practice acting in accord with that virtue, I can change myself?

Thinking about it just for a minute, I realize that I need a heuristic for when it's smart to be decisive and when it's smart to be more circumspect. I don't want to become a rash person. If I can convince myself that the heuristic is reliable enough, then hopefully I can convince myself to put it into practice like you say. I don't know if this means I'm falling into the rationalization trap that you mentioned or not, though. I don't think so; it would be a mistake to be decisive for decisiveness' sake.

I can spend some time thinking more about role-models in this regard and maybe ask them when they decide to decide versus decide to contemplate, themselves. In particular, I think my role-models would not spend time on a decision if they knew that making either decision, now, was preferable to not making a decision until later.

Heuristic 1a: If making either decision now is preferable to making the decision later, make the decision promptly (flip coins if necessary).

In the particular case that prompted my original post, my current heuristics said it was a situation worth thinking about -- the options had significant consequences, both good and bad. On the other hand, agonizing over the decision wouldn't get me anywhere, and I knew what the consequences would be in a general sense -- I just didn't want to accept that I was responsible for the problems that I could expect to follow either decision; I wanted something more perfect. That's another trap my role-models would not fall prey to. Somehow they have the stomach to accept this and get on with things when there's no alternative....

Goal: I will be a person with the self-respect to stomach responsibility for the bad consequences of good decisions.

Heuristic 1b: When you pretty much know what the consequences of all the options will be, and they're all unavoidably problematic to around the same degree (multiply the importance of the decision by the error in the degree to define "around"), force yourself to pick one right away so you can put the decision-making behind you.

Am I on the right track? I'm not totally sure how important it is to put decision-making behind yourself.

Replies from: wadavis
comment by wadavis · 2014-06-09T16:21:25.961Z · LW(p) · GW(p)

Sorry for the late reply, I couldn't decide how to communicate my point.

You strongly self-identify as not decisive and celebrate cautiousness as a virtue; if you desire to change, that must change first. In all your examples you already know what has to be done and just want to avoid committing to action, and now you are contemplating finding methods to decide whether you should be decisive on a decision-by-decision basis. That is a stalling tactic; stop it.

The goal to stomach the consequences is bang on; that might be some foundation work that is required first, or something that develops with taking accountability and making decisions.

comment by Torello · 2014-05-21T22:09:32.356Z · LW(p) · GW(p)

If you're not familiar with the ideas read "The Paradox of Choice" by Barry Schwartz or watch a talk about it.

Other ideas:

Give yourself a very short deadline for most decisions (most decisions are trivial); e.g., "I will make this decision in the next two minutes and then I will stick with it." For long-term life decisions, maybe not so much.

Flip a coin. This is a good way to expose your gut feelings. A pros-and-cons type of weighing the options allows you to weigh lots of factors. Flipping a coin produces simpler reactions (in my experience): "Shoot, I really wish I had the other option" (good information), or "I don't feel too strongly about the outcome" (good information), or "I'm content with this flip" (good information).

comment by [deleted] · 2014-05-21T14:31:06.666Z · LW(p) · GW(p)

Tetlock thinks improved political forecasting is good. I haven't read his whole book, but maybe someone can help me cheat. Why is improved forecasting not zero-sum? Suppose the USA and Russia can both forecast better but have different interests. So what?

[Edit] My guess might be that in areas of common interest, like economics, improved forecasting is good. But on foreign policy...?

Replies from: bramflakes, Lumifer, NancyLebovitz, army1987, pcm
comment by bramflakes · 2014-05-21T19:18:25.480Z · LW(p) · GW(p)

A greatly simplified example: two countries are having a dispute and the tension keeps rising. They both believe that they can win against the other in a war, meaning neither side is willing to back down in the face of military threats. Improved forecasting would indicate who would be the likely winner in such a conflict, and thus the weaker side will preemptively back down.

comment by Lumifer · 2014-05-21T15:43:39.733Z · LW(p) · GW(p)

improved political forecasting is good. .... Why is improved forecasting not zero sum?

For the simple reason that politics is not zero-sum, foreign policy included.

Replies from: None
comment by [deleted] · 2014-05-21T17:25:54.720Z · LW(p) · GW(p)

Cooperation is not zero-sum. Why does better forecasting lead to more cooperation?

I would guess that it does -- but if somebody hasn't seriously addressed this, then I don't think I'm doing foreign-policy questions on GJP Season 4.

Replies from: Lumifer
comment by Lumifer · 2014-05-21T17:42:54.068Z · LW(p) · GW(p)

cooperation is not zero sum. Why does better forecasting lead to more cooperation?

Zero-sum means that the actions of the participants do not change the total, either up or down.

A nuclear exchange between the US and Russia would not be zero-sum, to give an example. Better forecasting might reduce its chance by lessening the opportunities for misunderstanding, e.g. when one side mistakenly thinks the other side is bluffing.

As to more cooperation: better forecasting implies better understanding of the other side, which implies less uncertainty about consequences, which implies more trust, which implies more cooperation.

Replies from: None
comment by [deleted] · 2014-05-21T18:11:19.466Z · LW(p) · GW(p)

How about the governments of the US and Russia correctly forecast that more hostility means more profits for their cronies, and increase military spending?

Replies from: falenas108, Lumifer
comment by falenas108 · 2014-05-22T15:50:51.683Z · LW(p) · GW(p)

That would still not be zero-sum. Which direction you think it is depends on your views.

comment by Lumifer · 2014-05-21T18:32:11.820Z · LW(p) · GW(p)

How about the governments of the US and Russia correctly forecast that more hostility means more profits for their cronies, and increase military spending?

Yes, and..?

If you want something that comes with ironclad guarantees that it leads to only goodness and light, go talk to Jesus. That's his domain.

comment by NancyLebovitz · 2014-05-21T15:09:08.354Z · LW(p) · GW(p)

Improved forecasting might mean that both sides do fewer stupid (negative sum) things.

comment by A1987dM (army1987) · 2014-05-22T07:35:34.795Z · LW(p) · GW(p)

International politics is zero-sum once you've already reached the Pareto frontier and can only move along it, but if forecasting is sufficiently bad you might not even be close to the Pareto frontier.

Replies from: Brian_Tomasik, None
comment by Brian_Tomasik · 2014-05-22T10:35:44.216Z · LW(p) · GW(p)

Right. A lot of politics is not zero-sum. Reduced uncertainty and better information may enable compromises that before had seemed too risky. Forecasting could help identify which compromises would work and which wouldn't. Etc.

comment by [deleted] · 2014-06-06T12:41:29.405Z · LW(p) · GW(p)

Thanks, army and bramflakes, for illustrating. My guess is to agree -- but I still have doubts. Maybe they have nothing to do after all with "zero sum." I think I'm concerned that forecasting could be used by governments against citizens. Before participating again I may need to read something in more detail about why this is unlikely -- and also about why I shouldn't participate in SciCast instead!

comment by pcm · 2014-05-21T14:44:05.597Z · LW(p) · GW(p)

I don't think Tetlock talks about that much.

Imagine a better forecast about whether invading Iraq reduces terrorism, or about whether Saddam would survive the invasion. Wouldn't both sides make wiser decisions?

Replies from: None
comment by [deleted] · 2014-05-21T15:53:21.482Z · LW(p) · GW(p)

So that's a good thought. I think you're saying that nations aren't coolly calculating rational actors, but groups whose foreign policy is often based on false claims.

I guess it really depends on where forecasting is deployed. It will increase the power of whoever has access. If accessible to George Bush, then George is more powerful. If accessible to the public, the public is. So my question depends (at least partly) on the kind of forecasting and on who controls the resulting info.

Also, this paper seems relevant.

comment by Transfuturist · 2014-05-20T07:50:20.873Z · LW(p) · GW(p)

I've posted this before but I want to make it more clear that I want feedback.

I've written an essay on the effects of interactive computation as an improvement for Solomonoff-like induction. (It was written in two all-nighters for an English class, so it probably still needs proofreading. It isn't well-sourced, either.)

I want to build a better formalization of naturalized induction than Solomonoff's, one designed to be usable by space-, time-, and rate-limited agents, and interactive computation was a necessary first step. AIXI is by no means an ideal inductive agent.

Replies from: shminux, Manfred
comment by Shmi (shminux) · 2014-05-20T18:44:20.508Z · LW(p) · GW(p)

Had a look at your link, but couldn't make sense of it. Consider writing a proper summary upfront.

I want to build a better formalization of naturalized induction than Solomonoff's

This seems an ambitious task. Can you start with something simpler?

Replies from: Transfuturist
comment by Transfuturist · 2014-05-20T23:27:12.554Z · LW(p) · GW(p)

Sorry, my writing can get kind of dense.

This seems an ambitious task. Can you start with something simpler?

It doesn't quite strike me as ambitious; I see a lot of room for improvement. As for starting with something simpler, that's what this essay was.

Replies from: shminux
comment by Shmi (shminux) · 2014-05-20T23:56:31.201Z · LW(p) · GW(p)

Sorry, my writing can get kind of dense.

If you want people to read what you write, learn to write in a readable way.

Looked at your write-up again... Still no summary of what it is about. Something along the lines of (total BS follows, sorry, I have no clue what you are writing about, since the essay is unreadable as is): "This essay outlines the issues with AIXI built on Solomonoff induction and suggests a number of improvements, such as extending algorithmic calculus with interactive calculus. This extension removes hidden infinities inherent in the existing AIXI models and allows ."

Replies from: Transfuturist, Transfuturist
comment by Transfuturist · 2014-05-20T23:57:48.369Z · LW(p) · GW(p)

I'm in the process of writing summaries. I replied as soon as I read your response.

If you want people to read what you write, learn to write in a readable way.

You are pretty much the first person to give me feedback on this. I do not have an accurate sense of how opaque it is.

In algorithmic representations:

  1. Separate hypotheses are inseparable.
  2. Hypotheses are required to be a complete world model (account for every part of the input).
  3. Conflicting hypotheses are not able to be held simultaneously. This stems mainly from there being no requirement for running in a finite amount of space and time.

There are other issues with Solomonoff induction in its current form, such as an inability to tolerate error, an inability to separate input in the first place, and an inability to exchange error for simplicity, among others. Some of these are addressable with this particular extension of SI; some are addressable with other extensions.

There is a similar intuition about nondeterministic hypotheses and a requirement that only part of the hypothesis must match the output, as nondeterministic Turing machines can be simulated by deterministic Turing machines via the simulation of every possible execution flow, but that strikes me as somewhat dodgy.

comment by Transfuturist · 2014-05-21T00:39:09.575Z · LW(p) · GW(p)

How's that? Every few lines, I give a summary of each subsection. I even double-spaced it, in case that was bothering you.

comment by Manfred · 2014-05-26T05:53:36.241Z · LW(p) · GW(p)

Your essay was interesting. What did you think of a similar post I recently wrote?

Feedback (entirely on the writing): the first goal when editing this should be to eliminate words from sentences. Use short, familiar words whenever possible. Restructure paragraphs to make them shorter. Since this is for an English class, cut out every bit of jargon you can. If there's a length requirement, you can always fill it with story.

The best lesson of my dreadful college writing class was that nonfiction can have a story too - and the primary way you engage with a nontechnical audience is with this story. Solomonoff induction practically gets a character arc - the hope for a universal solution, the impotence at having to check every possible hypothesis, then being built back up by hard work and ancient wisdom to operate in the real world.

When you shift gears, e.g. to talk about science, you can make it easier on the reader by swapping technical explanations for historical or personal anecdotes. This only works once or twice per essay, though.

You can make your paragraphs more exciting. Rather than starting with "An issue similar in cause to separability is the idea of the frontier," which sends the reader in expecting to sit through a definition (English professors hate reading definitions), give a very concise big-picture view of the idea and move straight on to the exciting applications, which is where the reader will actually learn the concept.

Replies from: Transfuturist
comment by Transfuturist · 2014-05-26T12:50:42.992Z · LW(p) · GW(p)

Thanks for the in-depth critique! I haven't read your post yet, but it piqued my interest.

Also, moving on to the "exciting applications" isn't very effective when there aren't any. :I

Replies from: Manfred
comment by Manfred · 2014-05-26T21:24:59.471Z · LW(p) · GW(p)

Also, moving on to the "exciting applications" isn't very effective when there aren't any. :I

Bah humbug.

comment by Punoxysm · 2014-05-20T04:06:21.157Z · LW(p) · GW(p)

I am thinking of writing an article that digests a handful of research papers, either by a single researcher or on a single theme, that would interest Less Wrongers. Any suggestions for papers or themes, and any advice on how to write such a mini-survey?

Replies from: JoshuaFox
comment by JoshuaFox · 2014-05-23T06:44:38.977Z · LW(p) · GW(p)

Please write a clear layperson's intro to UDT. You can also mention TDT etc. A wiki article would be a good format for this.

Citations of the related literature would be good too. Alex Altair's paper was a good start, but I'd like to read about UDT in more depth, yet still in an accessible form.

Replies from: Punoxysm
comment by Punoxysm · 2014-05-23T17:12:25.803Z · LW(p) · GW(p)

I am not interested in UDT/TDT, and people already write tons about them here. Thank you for the suggestion, though.

comment by Sherincall · 2014-05-21T19:51:09.997Z · LW(p) · GW(p)

Suppose that with every purchase you make, you had the option to divert a percentage (including 0 and 100) of the money to a GiveWell-endorsed charity you're not personally affiliated with. You still pay the same price, but the seller gets less, or nothing, and the rest goes to charity; the seller has no right to complain. To what extent would you use this option? Would it differ across products or sellers? Do you have specific examples of where you would or wouldn't use it?

Also, assume you could start a company and the same rule applied to all purchases the company makes. Would you do it? Any specific business?

Replies from: wadavis, DanielLC, Lumifer
comment by wadavis · 2014-05-22T15:14:18.294Z · LW(p) · GW(p)

Well-meaning, rationalized theft is still an assault on the seller.

comment by DanielLC · 2014-05-22T01:36:01.877Z · LW(p) · GW(p)

I see no reason to send my money anywhere other than to the most needy person. I'd divert 100%.

comment by Lumifer · 2014-05-21T20:20:06.783Z · LW(p) · GW(p)

but the seller gets less, or nothing, and the rest goes to charity; the seller has no right to complain.

Why would there be any sellers under this system?

Replies from: Sherincall, ChristianKl
comment by Sherincall · 2014-05-21T20:24:18.558Z · LW(p) · GW(p)

It is just a thought experiment, not something that could realistically exist. Suppose the president/king/whoever gave you (and only you) this power; the sellers are furious, but they can't do anything about it. They are not participating by choice.

Replies from: Nornagest, DanielLC
comment by Nornagest · 2014-05-21T20:50:08.627Z · LW(p) · GW(p)

This seems consequentially equivalent to "legal issues aside, is it ethical to steal from businesses in order to give to [EA-approved] charity, and if so, which ones?".

I suspect answering would shed more heat than light.

Replies from: Sherincall
comment by Sherincall · 2014-05-21T21:03:40.098Z · LW(p) · GW(p)

Yes, pretty much. Has this been discussed before? I did a search and found nothing similar here.

EDIT: I found this somewhat related post: Really Extreme Altruism.

If it is a well-known controversial issue, how about a poll, to satisfy my curiosity without sparking any flames?

So: legal issues aside, is it ethical to steal from businesses in order to give to [EA-approved] charity?

[pollid:700]

Replies from: Lumifer, Nornagest
comment by Lumifer · 2014-05-21T21:20:50.842Z · LW(p) · GW(p)

Yes, pretty much.

For fun, let's shift the emphasis. So, every time you make a contribution to an EA-approved charity, you can go and pick yourself a free gift of equal value from any seller, and the seller can't do anything about it, not even complain. Is that OK? :-)

Replies from: Sherincall
comment by Sherincall · 2014-05-21T21:38:19.877Z · LW(p) · GW(p)

Great example. It is an isomorphic situation, yet it paints the matter in a completely different light.

If you are asking me personally, I can see myself doing just that in some cases, though definitely not as a standard way of obtaining goods. The reason for the original question was to see what the rest of you think of the matter.

comment by Nornagest · 2014-05-21T21:25:47.756Z · LW(p) · GW(p)

I don't recall any past controversy offhand, but given that business in general and many specific categories of business in particular are highly politicized, I suspect the answers you'd get would be more revealing of your respondents' politics (read: boring) than of the underlying ethics. For the same reason I'd expect it to be more contentious than average once we start getting into details.

There are also PR issues with thought experiments that could be construed as advocating crime, though that's more a problem with my reframing than with your original question. There's no actual policy against it; there is a policy against advocating violence, but this doesn't qualify.

comment by DanielLC · 2014-05-22T01:34:58.382Z · LW(p) · GW(p)

It does happen to an extent.

You can buy a movie, or you can pirate it and donate the price of the movie.

Replies from: Sherincall
comment by Sherincall · 2014-05-22T02:33:55.061Z · LW(p) · GW(p)

That was actually the original topic of a conversation that inspired this question.

comment by ChristianKl · 2014-05-30T18:00:27.031Z · LW(p) · GW(p)

Because probably not everyone would divert 100% to a GiveWell-endorsed charity.

comment by cousin_it · 2014-05-22T14:32:58.503Z · LW(p) · GW(p)

I have a very confused question about programming. Is there an interpretation of arithmetic operations on types that goes beyond sum=or=either and product=and=pair? For example, this paper proposes an interpretation of subtraction and division on types, but it seems a little strained to me.

A sub-question: which arithmetical operation corresponds to function types? On one hand, the number of functions from A to B is the cardinality of B raised to the power of the cardinality of A. That plays nicely with the cardinality interpretation of sum and product types, but doesn't seem to play nicely with the logical interpretation, where modus ponens says "if A, and A implies B, then B". For modus ponens to read as arithmetic, A * (A implies B) = B, implication would have to be division (B/A), not exponentiation (B^A).
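
Here is a small Haskell sketch of the counting claim (my own illustration; Bool and Ordering are just convenient finite types with 2 and 3 inhabitants). Note that modus ponens comes out as plain function application, the evaluation map eval : B^A * A -> B, which is the defining property of exponentials in a cartesian closed category, so perhaps no division is needed; I'm not sure that fully dissolves the confusion, though:

```haskell
-- Enumerate every inhabitant of a finite type.
universe :: (Bounded a, Enum a) => [a]
universe = [minBound .. maxBound]

-- Modus ponens is just application: the evaluation map B^A * A -> B.
modusPonens :: (a -> b) -> a -> b
modusPonens f x = f x

main :: IO ()
main = do
  let bools = universe :: [Bool]      -- 2 inhabitants
      ords  = universe :: [Ordering]  -- 3 inhabitants
  -- Sum:      |Either Bool Ordering| = 2 + 3 = 5
  print (length (map Left bools ++ map Right ords))
  -- Product:  |(Bool, Ordering)| = 2 * 3 = 6
  print (length [(b, o) | b <- bools, o <- ords])
  -- Function: |Bool -> Ordering| = 3^2 = 9, since a function is a
  -- choice of one Ordering for False and one for True.
  print (length [(forFalse, forTrue) | forFalse <- ords, forTrue <- ords])
```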

Another sub-question: which arithmetical operation corresponds to negation? On one hand, the assertion that type A is uninhabited should probably correspond to 1-A, because "A is either inhabited or uninhabited" corresponds to A+(1-A)=1, which is trivially true. On the other hand, the assertion "A can't be both inhabited and uninhabited at the same time" corresponds to A*(1-A)=0, which is not an identity: it fails unless A is 0 or 1. Something strange is going on here.
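
For what it's worth, the usual Curry-Howard convention sidesteps subtraction entirely: negation is a function into the empty type, Not A = A -> Void, with cardinality 0^A, which is 1 exactly when A is empty and 0 when A is inhabited. A minimal Haskell sketch of these standard definitions (my own code, not from the paper):

```haskell
{-# LANGUAGE EmptyCase #-}

-- The empty type: cardinality 0.
data Void

-- Negation as 0^A rather than 1 - A.
type Not a = a -> Void

-- 0^0 = 1: the single inhabitant of Void -> Void.
notVoid :: Not Void
notVoid v = case v of {}

-- "A and not-A" is uninhabited: A * 0^A = 0 for every cardinality of A.
contradiction :: (a, Not a) -> Void
contradiction (x, f) = f x
```

On this reading, "A can't be both inhabited and uninhabited" becomes A * 0^A = 0, which does hold for every cardinality of A, so the strangeness may just be an artifact of forcing negation to be 1 - A.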