Open Thread, Jun. 22 - Jun. 28, 2015

post by Gondolinian · 2015-06-22T00:01:54.872Z · LW · GW · Legacy · 204 comments

If it's worth saying, but not worth its own post (even in Discussion), then it goes here.


Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)

3. Open Threads should be posted in Discussion, and not Main.

4. Open Threads should start on Monday, and end on Sunday.


comment by Artaxerxes · 2015-06-22T04:38:02.690Z · LW(p) · GW(p)

A short, nicely animated adaptation of The Unfinished Fable of the Sparrows from Bostrom's book was made recently.

Replies from: AlexLundborg, Elo
comment by AlexLundborg · 2015-06-22T05:08:42.088Z · LW(p) · GW(p)

The same animation studio also made this fairly accurate and entertaining introduction to (parts of) Bostrom's argument, although I don't know what to think of their (subjective) probabilities for the possible outcomes.

comment by Elo · 2015-06-25T00:51:26.946Z · LW(p) · GW(p)

Although it's not improving my life at all, I quite like the short story as an analogy for UFAI risks.

comment by ZeitPolizei · 2015-06-22T16:42:55.056Z · LW(p) · GW(p)

Hope this is appropriate for here.

I had an epiphany related to akrasia today, though it may apply generally to any problem where you are stuck. For the longest time I thought to myself: "I know what I actually need to do, I just need to sit down and start working, and once I've started it's much easier to keep going." I was thinking about this today and I had an imaginary conversation where I said: "I know what I need to do, I just don't know what I need to do, so I can do what I need to do." (I hope that makes sense.) And then it hit me: I have no fucking clue what I actually need to do. It's like I've been trying to empty a sinking ship of water with buckets, instead of fixing the hole in the ship.

Reminds me in hindsight of the "definition of insanity": "The definition of insanity is doing the same thing over and over and expecting different results."

I think I believed that I lacked the necessary innate willpower to overcome my inner demons, rather than that I lacked a skill I could acquire.

Replies from: Luke_A_Somers, Elo
comment by Luke_A_Somers · 2015-06-22T23:07:33.789Z · LW(p) · GW(p)

Once I was facing akrasia and I kind of had the same thing happen. I knew what I needed to do, and I ruminated on why I wasn't doing that.

I thought at first that I was just being lazy, but then I realized that I subconsciously knew that the strategy I was procrastinating from was actually pretty terrible. Once I realized that, I started thinking about how I might do it better, and then when I thought of something (which wasn't immediate, to be sure) I was actually able to get up and do it.

Replies from: Viliam
comment by Viliam · 2015-06-23T10:16:21.778Z · LW(p) · GW(p)

Sometimes "laziness" is being aware on some level that your current plan does not work, but not knowing a better alternative... so you keep going, but you find yourself slowing down, and you can't gather enough willpower to start running again.

comment by Elo · 2015-06-25T00:46:33.820Z · LW(p) · GW(p)

Sounds like a growth mindset discovery! Congratulations!

For my benefit can you try to rephrase this sentence with alternative words or in a more verbose form:

I know what I actually need to do, I just need to sit down and start working and once I've started it's much easier to keep going. I was thinking about this today and I had an imaginary conversation where I said: "I know what I need to do, I just don't know what I need to do, so I can do what I need to do."

Mainly, taboo the multiple meanings of the word "need" that you tried to express. Without knowing the tone, it just sounds confusing.

Meta: I suspect people have rewarded you for achieving an epiphany.

Replies from: ZeitPolizei
comment by ZeitPolizei · 2015-06-25T13:13:27.399Z · LW(p) · GW(p)

I know what I actually need to do, I just need to sit down and start working and once I've started it's much easier to keep going.

Let's say, I have some homework to do. In order to finish the homework, at some point I have to sit down at my desk and start working. And in my experience, actually starting is the hardest part, because after that I have few problems with continuing to work. And the process of "sitting down, opening the relevant programs and documents and starting to work" is not difficult per se, at least physically. In a simplified form, the steps necessary to complete my homework assignment are:

  1. Open relevant documents/books, get out pen and paper etc.
  2. Start working and don't stop working.

I know what I need to do, I just don't know what I need to do, so I can do what I need to do.

Considering how much trouble I have getting to the point where I can do step one (sometimes I falter between steps one and two), there must be at least one necessary step zero before I am able to successfully complete steps one and two. And knowing steps one and two does not help very much, if I don't know how to get to a (mental) state where I can actually complete them.

A different analogy: I know how I can create a checkmate if I only have a rook and king, and my opponent only a king. But that doesn't help me if I don't know how to get to the point where only those pieces are left on the board.

Replies from: Elo
comment by Elo · 2015-06-25T13:46:26.796Z · LW(p) · GW(p)

A suggestion: commit to a small amount of the work. I.e., instead of committing to utilising a local gym, commit to arriving at the gym, after which, if you decide to go home, you can; but at least you break down the barrier to starting.

In the homework case, commit to sitting down and doing the first problem. Then see if you feel like doing any more than that.

comment by Richard_Kennaway · 2015-06-23T09:38:57.384Z · LW(p) · GW(p)

Deep Learning is the latest thing in AI. I predict that it will be exactly as successful at achieving AGI as all previous latest things. By which I mean that in 10 years it will be just another chapter in the latest edition of Russell and Norvig.

Replies from: Kaj_Sotala, Houshalter
comment by Kaj_Sotala · 2015-06-23T13:57:16.826Z · LW(p) · GW(p)

Purely on Outside View grounds, or based on something more?

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2015-06-23T15:22:19.283Z · LW(p) · GW(p)

Outside View only. That's the way it's always worked out before, and I'm not seeing anything specific to Deep Learning to suggest that this time, it will be different. But I am not a professional in this field.

Replies from: Baughn
comment by Baughn · 2015-06-24T17:06:01.149Z · LW(p) · GW(p)

So, some Inside View reasons to think this time might be different:

  • The results look better, and in particular, some of Google's projects are reproducing high-level quirks of the human visual cortex.

  • The methods can absorb far larger amounts of computing power. Previous approaches could not, which makes sense as we didn't have the computing power for them to absorb at the time, but the human brain does appear to be almost absurdly computation-heavy. Moore's Law is producing a difference in kind.

That said, I (and most AI researchers, I believe) would agree that deep recurrent networks are only part of the puzzle. The neat thing is, they do appear to be part of the puzzle, which is more than you could say about e.g. symbolic logic; human minds don't run on logic at all. We're making progress, and I wouldn't be surprised if deep learning is part of the first AGI.

Replies from: RobFack, jsteinhardt
comment by RobFack · 2015-06-26T22:57:21.762Z · LW(p) · GW(p)

some of Google's projects are reproducing high-level quirks of the human visual cortex.

While the work that the visual cortex does is complex and hard to crack (from where we are now), it doesn't seem like being able to replicate that leads to AGI. Is there a reason I should think otherwise?

Replies from: Houshalter
comment by Houshalter · 2015-06-27T08:16:55.064Z · LW(p) · GW(p)

There is the 'one learning algorithm' hypothesis: that most of the brain uses a single algorithm for learning and pattern recognition, rather than specialized modules for vision, audio, etc.

The evidence comes from experiments where they cut the connection from the eyes to the visual cortex in an animal and rerouted it to the auditory cortex (and I think vice versa). The animal then learned to see fine; its auditory cortex just learned how to do vision instead.

comment by jsteinhardt · 2015-06-25T05:38:15.453Z · LW(p) · GW(p)

which is more than you could say about e.g. symbolic logic; human minds don't run on logic at all

This seems an odd thing to say. I would say that representation learning (the thing that neural nets do) and compositionality (the thing that symbolic logic does) are likely both part of the puzzle?

comment by Houshalter · 2015-06-27T08:00:00.070Z · LW(p) · GW(p)

The outside view is not very good for predicting technology. Every technology has an eternity of not existing, until suddenly one day it exists out of the blue.

Now no one is saying that deep learning is going to be AGI in 10 years. In fact the deep learning experts have been extremely skeptical of AGI in all forms, and are certainly not promoting that view. But I think it's a very reasonable opinion that it will lead to AGI within the next few decades. And I believe sooner rather than later.

The reasons that 'this time it is different':

  • NNs are extraordinarily general. I don't think you can say this about other AI approaches. I mean search and planning algorithms are pretty general. But they fall back on needing heuristics to shrink the search space. And how do you learn heuristics? It goes back to being a machine learning problem. And they are starting to solve it. E.g. a deep neural net predicted Go moves made by experts 54% of the time.

  • The progress you see is due in great part to advances in computing power. Early AI researchers were working with barely any computing power, and a lot of their work reflects that. That's not to say we have AGI and are just waiting for computers to get fast enough. But computing power allows researchers to experiment and actually do research.

  • Empirically they have made significant progress on a number of different AI domains. E.g. vision, speech recognition, natural language processing, and Go. A lot of previous AI approaches might have sounded cool in theory, or worked on a single domain, but they could never point to actual success on loads of different AI problems.

  • It's more brain-like. I know someone will say that they really aren't anything like the brain. And that's true, but at a high level there are very similar principles: learning networks of features and their connections, as opposed to symbolic approaches.

And if you look at the models that are inspired by the brain, like HTM, they are sort of converging on similar algorithms. E.g. they say the important part of the cortex is that it's very sparse and has lateral inhibition. And you see leading researchers propose very similar ideas.

Whereas the stuff they do differently is mostly because they want to follow biological constraints, like only local interactions, little memory, and only single bits of information at a time. These aren't restrictions that real computers are much subject to, so we don't necessarily need to copy biology in those respects and can do things differently, even better.

Replies from: jsteinhardt
comment by jsteinhardt · 2015-06-27T16:34:09.844Z · LW(p) · GW(p)

Several of the above claims don't seem that true to me.

  • Statistical methods are also very general. And neural nets definitely need heuristics (LSTMs are basically a really good heuristic for getting NNs to train well).

  • I'm not aware of great success in Go? 54% accuracy is very hard to interpret in a vacuum in terms of how impressed to be.

  • When statistical methods displaced logical methods it's because they led to lots of progress on lots of domains. In fact, the delta from logical to statistical was probably much larger than the delta from classical statistical learning to neural nets.

Replies from: Houshalter
comment by Houshalter · 2015-06-28T00:29:48.166Z · LW(p) · GW(p)

I consider deep learning to be in the family of statistical methods. The problem with previous statistical methods is that they were shallow and couldn't learn very complicated functions or structure. No one ever claimed that linear regression would lead to AGI.

I'm not aware of great success in Go? 54% accuracy is very hard to interpret in a vaccuum in terms of how impressed to be.

That narrows the search space to maybe 2 moves or so per board. Which makes heuristic searching algorithms much more practical. You can not only generate good moves and predict what a human will do, but you can combine that with brute force and search much deeper than a human as well.
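
(A minimal sketch of what that combination might look like, in Python. The move-prediction model prunes each node to a couple of plausible moves, and the search goes deep on just those. The board interface and predict_move_probs function are hypothetical stand-ins, not any particular Go engine's API.)

    def predict_move_probs(board):
        """Hypothetical stand-in for a trained move-prediction network:
        returns a dict mapping each legal move to a probability."""
        raise NotImplementedError

    def search(board, depth, top_k=2):
        """Negamax search over only the top_k moves the policy finds plausible."""
        if depth == 0 or board.is_terminal():
            return board.evaluate()
        probs = predict_move_probs(board)
        # Pruning to a couple of candidate moves per node is what makes
        # searching deeper than a human practical.
        candidates = sorted(board.legal_moves(), key=probs.get, reverse=True)[:top_k]
        return max(-search(board.play(move), depth - 1, top_k) for move in candidates)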

And neural nets definitely need heuristics

I mean that NNs learn heuristics. They do require heuristics in the learning algorithm, but not ones that are specific to the domain, whereas search algorithms depend on lots of domain-dependent, manually created heuristics.

comment by Epictetus · 2015-06-22T04:15:44.953Z · LW(p) · GW(p)

Revisited The Analects of Confucius. It's not hard to see why there's a stereotype of Confucius as a Deep Wisdom dispenser. Example:

The Master said, "It is Man who is capable of broadening the Way. It is not the Way that is capable of broadening Man."

I read a bit of the background information, and it turns out the book was compiled by Confucius' students after his death. That got me thinking that maybe it wasn't designed to be passively read. I wouldn't put forth a collection of sayings as a standalone philosophical work, but maybe I'd use it as a teaching aid. Perhaps one could periodically present students a saying of Confucius and ask them to think about it and discuss what the Master meant.

I've noticed this sort of thing in other works as well. Let's take the Dhammapada. In a similar vein, it's a collection of sayings of Buddha, compiled by his followers. There are commentaries giving background and context. I'm now getting the impression that it was designed to be just one part of a neophyte's education. There's a lot that one would get from teachers and more senior students, and then there are the sayings of the Master designed to stimulate thought and reflection.

Going further west, this also seems to be the case with the Gospels.

With these works and those like them, there's this desire to stimulate reflection and provide a starting point for discussion. They're designed for initiates of a school of thought to progress further. Contrast this with works written by the masters themselves for their peers. It would be condescending to talk in short bursts of wisdom. No, this is where we get arguments clearly presented and spelled out. Short sayings are replaced with chains of reasoning designed to demonstrate the intended conclusion.

Replies from: None, Zubon
comment by [deleted] · 2015-06-29T12:26:30.317Z · LW(p) · GW(p)

It would be condescending to talk in short bursts of wisdom.

It would be condescending for the master too, to talk in short bursts of wisdom to his disciples, as long as he was alive. The issue is rather that once he dies, the top-level disciples gradually elevate the memory of the master into a quasi-deity and pass on his thoughts verbally for generations, and by the time they get around to writing them down, the master is seen as such a big guy / deity and more or less gets worshipped, so it becomes almost inconceivable to write in anything but a condescending tone. But it does not really follow that the masters were just as condescending in real life.

You can see this today. The Dalai Lama is really an easy-going guy; he does not really care how people behave towards him, he is just friendly and direct with everybody, but there is an "establishment" around him that really pushes visitors into high-respect mode. I had this experience with a lower lama of a different school. I was anxious about getting the etiquette right, hands together, bowing etc., then he just walked up to me, shook my hand in a western style, did not let it go but just dragged me halfway across the room while patting me on the back and shaking with laughter at my surprise; it was simply his joke, his way of breaking the all too ceremonious mood. He was a totally non-condescending, direct, easy-going guy who would engage everybody on an equal level, but a lot of retainers and helpers around him really put him and his boss (he was something of a top-level helper of an even bigger guy too) on a pedestal.

Replies from: Epictetus
comment by Epictetus · 2015-06-29T14:42:58.720Z · LW(p) · GW(p)

It would be condescending for the master too, to talk in short bursts of wisdom to his disciples, as long as he was alive.

Good point. I suppose what I had in mind is that when the disciple asks the master a question, the master can give a hint to help the disciple find the answer on his own. Answering a question with a question can prod someone into thinking about it from another angle. These are legitimate teaching methods. Using them outside of a teacher/student interaction is rather condescending, however.

The issue is rather that once he dies, the top-level disciples gradually elevate the memory of the master into a quasi-deity and pass on his thoughts verbally for generations, and by the time they get around to writing them down, the master is seen as such a big guy / deity and more or less gets worshipped, so it becomes almost inconceivable to write in anything but a condescending tone.

This is also a major factor. Disciples like to make the Master into a demigod and some of his human side gets lost in the process.

comment by Zubon · 2015-06-26T16:59:12.081Z · LW(p) · GW(p)

According to a distinction that originates with Aristotle himself, his writings are divisible into two groups: the "exoteric" and the "esoteric". Most scholars have understood this as a distinction between works Aristotle intended for the public (exoteric), and the more technical works intended for use within the Lyceum course / school (esoteric). Modern scholars commonly assume these latter to be Aristotle's own (unpolished) lecture notes (or in some cases possible notes by his students). ... Another common assumption is that none of the exoteric works is extant – that all of Aristotle's extant writings are of the esoteric kind.

Wikipedia on Aristotle

comment by Richard_Kennaway · 2015-06-23T09:26:21.372Z · LW(p) · GW(p)

Do people who take modafinil also drink coffee (on the same day)? Is that something to avoid, or does it not matter?

Replies from: drethelin
comment by drethelin · 2015-06-27T02:22:40.754Z · LW(p) · GW(p)

It seems to have a synergistic effect but I regularly drink coffee and take modafinil irregularly so it's hard to say. It doesn't seem bad by any means.

comment by Adam Zerner (adamzerner) · 2015-06-23T01:26:40.334Z · LW(p) · GW(p)

I went to the dermatologist today, and I have some sort of cyst on my ear. He said it was nothing. He said the options are to remove it surgically, to use some sort of cream to remove it over time, or to do nothing.

I asked about the benefits of removing it. He said that they'd be able to biopsy it and be 100% sure that it's nothing. I asked "as opposed to... how confident are you now?" He said 99.5 or 99.95% sure.

It seems clear to me that the costs of money, time and pain are easily worth the 5/1000(0) chance that I detect something dangerous earlier and correspondingly reduce the chances that I die. Like, really really really really really clear to me. Death is really bad. I'm horrified that doctors (and others) don't see this. He was very ready to just send me home with his diagnosis of "it's nothing". I'm trying to argue against myself and account for biases and all that, but given the badness of death, I still feel extremely strongly that the surgery+biopsy is the clear choice. Is there something I'm missing?

Also, the idea of Prediction Book for Doctors occurred to me. There could be a nice UI with graphs and stuff to help doctors keep track of the predictions they've made. Maybe it could evolve into a resource that helps doctors make predictions by providing medical info and perhaps sprinkling in a little bit of AI or something. I don't really know though, the idea is extremely raw at this point. Thoughts?

Replies from: drethelin, Lumifer, Manfred, None, adamzerner, Elo, Strangeattractor, Jiro, minusdash
comment by drethelin · 2015-06-23T03:22:24.900Z · LW(p) · GW(p)

1) Surgery is dangerous. Even innocuous surgeries can have complications, such as infection, that can kill. There are also complications that aren't factored into the obvious math; for example, ever since I got 2 of my wisdom teeth out, my jaw regularly tightens up and cracks if I open my mouth wide, something that never happened beforehand. I wasn't warned about this and didn't consider it when I was deciding to get the surgery.

2) If it's something dangerous, you're very likely to find out anyway before it becomes serious. E.g., if it's a tumor, it's going to keep growing, and you can come back a month later and get it out then with little problem.

3) Even if it's not nothing, it might be something else that's unlikely to kill you. Thus the 5/1000 chance of death you're imagining is actually a 5/1000 chance of it being not nothing.

Replies from: adamzerner
comment by Adam Zerner (adamzerner) · 2015-06-23T03:38:19.077Z · LW(p) · GW(p)

Are you just making these points as things to keep in mind, or are you making a stronger point? If the latter, can you elaborate? Are you particularly knowledgeable?

Replies from: drethelin, Zubon, Elo
comment by drethelin · 2015-06-23T03:56:56.491Z · LW(p) · GW(p)

The point is that your comparison of "if surgery, definitely fine" vs "if no surgery, 5/1000 chance of death" is ignoring a lot of information. You're acting like your doctor is being unreasonable when in fact they're probably correct.

comment by Zubon · 2015-06-26T16:55:04.113Z · LW(p) · GW(p)

Stronger point: since we are at Less Wrong, think Bayes Theorem. In this case, a "true positive" would be cancer leading to death, and a "false positive" would be death from a medical mishap trying to remove a benign cyst (or even check it further). Death is very bad in either case, and very unlikely in either case.

  • P(death | cancer, untreated) - this is your explicit worry
  • P(death | cancer, surgery)
  • P(death | benign cyst, untreated)
  • P(death | benign cyst, surgery) - this is what drethelin is encouraging you to note
  • P(benign cyst)
  • P(cancer)

My prior for medical mishaps is higher than 0.5% of the time, but not for fatal ones while checking/removing a cyst near the surface of the skin. As drethelin's #2 notes, this is not binary. If it is not a benign cyst, you will probably have indicators before it becomes something serious. Similarly, you have non-surgical options such as a cream or testing. Testing probably has a lower risk rate than surgery, although if it is a very minor surgery, perhaps not that much lower.
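
(To make the structure concrete, here is a toy version of that comparison in Python; every number is made up purely for illustration, not a real estimate.)

    # Toy comparison of death risk for "biopsy/remove" vs "leave it alone".
    # All probabilities are invented for illustration only.
    p_cancer = 0.005                       # the doctor's 99.5% "it's nothing"
    p_death_if_cancer_untreated = 0.5      # assumed: never caught in time
    p_death_if_cancer_treated = 0.05       # assumed: caught early via biopsy
    p_death_from_procedure = 0.0001        # assumed: fatal complication rate

    risk_untreated = p_cancer * p_death_if_cancer_untreated
    risk_treated = p_cancer * p_death_if_cancer_treated + p_death_from_procedure

    print(risk_untreated, risk_treated)    # 0.0025 vs 0.00035 with these numbers
    # The ranking flips easily: a higher complication rate, or a high chance of
    # catching a real cancer later anyway (drethelin's point 2), shrinks or
    # reverses the gap.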

If the cyst worries you, having it checked/removed is probably low risk and may be good for your mental health. But now we might have worried you about the risks of doing that (sorry) when we meant to reduce your worries about leaving the cyst untreated.

Replies from: ChristianKl
comment by ChristianKl · 2015-06-26T19:10:26.601Z · LW(p) · GW(p)

In general if you list everything you can think of and give it probability scores, you ignore unknown unknowns. For medical interventions like surgery unknown unknowns are more likely to be bad than to be good.

As a result it's useful to have a prior against doing a medical intervention if there is no strong evidence that the intervention is beneficial.

Replies from: None
comment by [deleted] · 2015-06-29T12:05:21.705Z · LW(p) · GW(p)

Maybe we need to visualize surgery differently. I used to think about it like replacing a part in a car: why not just do it if the part is not working too well?

Maybe we should see it as damage. It's like someone attacking you with a knife, except that the intention is completely different, they know what they are doing, their implements are far more precise, and so on, so the parallel is not very good either. I am just saying that "recovering from an appendectomy" could at least be visualized as something closer to "recovering after a nasty knife fight" than to "I just had the clutch in my car replaced".

What do you think?

Replies from: ChristianKl
comment by ChristianKl · 2015-06-29T15:04:00.742Z · LW(p) · GW(p)

Why do you think we need to do so?

comment by Elo · 2015-06-25T00:24:33.685Z · LW(p) · GW(p)

Agreed; if you are getting it done and prefer the higher chance of life, get it done without being fully anaesthetized.

Possibly by a plastic surgeon; they seem to have profits to burn on quality equipment from people doing unnecessary (debatable) cosmetic procedures.

comment by Lumifer · 2015-06-23T14:53:40.611Z · LW(p) · GW(p)

You're probably misreading your doctor.

When he said "99.5 or 99.95%" I rather doubt he meant to give the precise odds. I think that what he meant was "There is a non-zero probability that the cyst will turn out to be an issue, but it is so small I consider it insignificant and so should you". Trying to base some calculations on the 0.5% (or 0.05%) chance is not useful because it's not a "real" probability, just a figurative expression.

Replies from: adamzerner
comment by Adam Zerner (adamzerner) · 2015-06-23T14:57:48.954Z · LW(p) · GW(p)

Great point. He did seem to pause and think about it, but still a good point. It seems notably likely that you're right, and even so, I doubt that his confidence is well-calibrated.

comment by Manfred · 2015-06-23T01:49:10.486Z · LW(p) · GW(p)

I think you should use the cream for a week, to start with.

Also, thought experiment: Suppose a person is going to live another 70 years. If undergoing some oversimplified miracle-cure treatment will cost, one way or another, 1 week of their life, what chance of "it's just a cyst" will they accept? 99.97%. So from the doctor's perspective (neglecting other risks or resources used, taking their '99.95%' probability estimate at face value, and assuming that a biopsy is some irreplaceable road to health), your condition is so likely to be benign that the procedure to surgically check spends your life at about the same rate as it saves it.
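
(The arithmetic behind that 99.97% figure, using Manfred's stipulated numbers of one week of cost and roughly 70 years of remaining life:)

    weeks_remaining = 70 * 52        # ~3640 weeks of life left
    cost_in_weeks = 1                # stipulated cost of the procedure
    # Break-even: the procedure pays off only if P(cancer) > 1/3640,
    # i.e. only if P(benign) is below about 99.97%.
    break_even_benign = 1 - cost_in_weeks / weeks_remaining
    print(f"{break_even_benign:.2%}")    # 99.97%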

comment by [deleted] · 2015-06-23T02:21:58.462Z · LW(p) · GW(p)

The biggest thing is that the doctor's priorities are not your priorities. To him, a life is valuable... but not infinitely valuable - estimates usually put the value of a life at (ballpark) 2 million dollars. When you consider the relative probability of you dying, and then the cost to the healthcare system of treatment, he's probably making the right decision (you, of course, would probably value your own life MUCH MUCH higher). Btw, this kind of follows from a blind spot I've seen in several calculations of yours - let me know if you're interested in getting feedback on it.

Finally, there are two other wrinkles - the possibility of complications and the possibility of false positives from a biopsy. The second increases the potential cost, and the first decreases the potential years added to your life. Both of these tilt the equation AGAINST getting it removed.

Replies from: ChristianKl, Unknowns
comment by ChristianKl · 2015-06-23T11:13:50.862Z · LW(p) · GW(p)

The biggest thing is that the doctor's priorities are not your priorities. [...] When you consider the relative probability of you dying, and then the cost to the healthcare system of treatment

The doctor has no incentive to minimize the cost of treatment. He makes money by having a high cost of treatment.

Replies from: Douglas_Knight
comment by Douglas_Knight · 2015-06-23T21:24:50.351Z · LW(p) · GW(p)

Right, MattG is 100% backwards.

comment by Unknowns · 2015-06-23T03:34:05.387Z · LW(p) · GW(p)

Even adamzerner probably doesn't value his life at much more than, say, ten million, and this can likely be proven by revealed preference if he regularly uses a car. If you go much higher than that your behavior will have to become pretty paranoid.

Replies from: Silver_Swift, None
comment by Silver_Swift · 2015-06-23T14:55:29.768Z · LW(p) · GW(p)

That is an issue with revealed preferences, not an indication of adamzerner's preference order. Unless you are extraordinarily selfless, you are never going to accept a deal of the form "I give you n dollars in exchange for me killing you," regardless of n; therefore the financial value of your own life is almost always infinite*.

*: This does not mean that you put infinite utility on being alive, btw, just that the utility of money caps out at some value that is typically smaller than the value of being alive (and that cap is lowered dramatically if you are not around to spend the money).

Replies from: Unknowns
comment by Unknowns · 2015-06-23T15:20:35.831Z · LW(p) · GW(p)

I think you are mistaken. If you would sacrifice your life to save the world, there is some amount of money that you would accept for being killed (given that you could at the same time determine the use of the money; without this stipulation you cannot meaningfully be said to be given it).

comment by [deleted] · 2015-06-23T03:51:27.528Z · LW(p) · GW(p)

Good point.

comment by Adam Zerner (adamzerner) · 2015-06-23T02:44:53.732Z · LW(p) · GW(p)

(Two people mentioned this so I figure I'll just reply here.)

Re: the doctor's perspective. I see how it might be rational from his perspective. My first thought is, "why not just give me the info and let me decide how much money I'm willing to invest in my health?" I could see how that might not be such a good idea, though. From a macro perspective, perhaps those sorts of transaction costs might not be worth the benefits of added information -> increased efficiency? Plus it'd be getting closer to admitting how much they value a life, which seems like it'd be bad from an image perspective.

I guess what I'm left with is saying that I find it extremely frustrating, I'm disappointed in myself for not thinking harder about this, and I'm really really glad you guys emphasized this so I could do a better job of thinking about what the interests are of parties I interact with (specifically doctors, and also people more generally). I feel like it makes sense for me to be clear that I would like information to be shared with me and that I'm willing to spend a lot of money on my health. And perhaps that it's worth exercising some influence on my doctors so they care more about me. Thoughts?

Replies from: ChristianKl
comment by ChristianKl · 2015-06-23T11:46:20.294Z · LW(p) · GW(p)

The doctor you are with has a financial interest to treat you. When he advises you against doing something about the cyst he's acting against his own financial interests.

Overtreatment isn't good if you value life very much. Every medical intervention comes with risks. We don't fully understand the human body, so we don't know all the risks.

From the perspective of the doctor the question likely isn't: "How much money is the patient willing to invest in health" but "How much is the patient willing to invest for the cosmetic issue of getting rid of an ugly cyst".

Replies from: philh
comment by philh · 2015-06-23T16:13:42.045Z · LW(p) · GW(p)

If the surgery isn't necessary, and something goes wrong during it, does the doctor need to worry about getting sued?

Replies from: ChristianKl
comment by ChristianKl · 2015-06-23T20:06:13.913Z · LW(p) · GW(p)

If I remember right the best predictor for a doctor getting sued is whether patients perceive the doctor to be friendly.

Advising an unnecessary procedure might be malpractice, but informing a patient about the option to have it done, especially when there are cosmetic reasons for it, shouldn't be a big issue.

Replies from: Elo
comment by Elo · 2015-06-25T00:28:23.908Z · LW(p) · GW(p)

Even good doctors can get sued. But it speaks more to why people sue (the doctor did a bad human-interaction job rather than a negligent job).

I do wonder about the nature of doctoring. Maybe you happen to get 3% of cases (arbitrary number) wrong; if you are also bad at people skills, this bites you, whereas if you get 3% wrong and you are good at people skills, you avoid being sued in 99% of those 3% of cases.

comment by Elo · 2015-06-25T00:41:54.720Z · LW(p) · GW(p)

A perspective on the nature of medical advice: there exist people who are so concerned about not dying that they would do anything in their power to survive medically, and organise regular irrelevant medical tests for themselves. They are probably over-medicated and wasting a lot of time, e.g. getting a brain scan for tumours where there is no reason to think they exist. There exist people who get yearly mammograms. There exist people who probably get around to their (recommended yearly) mammogram every few years. There exist people who have heart attacks from long-term lifestyle choices. There exist people who are so unconcerned about dying that they smoke.

This is the range of patients that exist. You sound like you are closer to the top in terms of medical concern. The dermatologist has to consider where on the spectrum you are when devising a treatment as well as where the condition is on the spectrum of risk.

For a rough estimate (not a doctor) I would say the chance of a cyst on your ear killing you in the next 50 years would be less than the chance of getting an entirely different kind of cancer and having it threaten your life. (do you eat burnt food? bowel cancer risk. Do you go in the sun? skin cancer risk)

If it can be removed by cream, it will still be gone. The specialist should suggest a biopsy to cover their ass, but really, it could be 99 different types of skin growths or a few types of cancerous growth. With no other symptoms there is no reason to suspect any danger exists.

The numbers you were given sound like they were fabricated on the spot, which is a reason not to attack them mathematically, but to take them at the feeling value of a 99.99% thumbs up. (And it's really hard, almost impossible, to find that 0.01%, so medically we don't usually bother.)

comment by Strangeattractor · 2015-06-26T08:15:57.011Z · LW(p) · GW(p)

My advice would be:

1) See another doctor to get a second opinion. (And possibly a third opinion, if you don't like the second doctor.) Keep looking for a doctor until you find one that explains things to you in enough detail so that you understand thoroughly. Write down the questions you want answered ahead of time, and take notes during your appointment. "I am confident" is a bullshit answer unless you understand what possibilities the doctor considered, why the doctor thinks this one is the most likely, what the possible approaches to dealing with it if it turns out to be "not fine" are, and their advantages and disadvantages, what warning signs to look for that might indicate it is not fine, and the mechanism by which the cream option would work.

Unfortunately, the state of medical knowledge is such that there may not be good answers to all of the questions. The best the doctor may be able to do is "I don't know" for some of them. But you can get a better understanding of the situation than you have now, and a better understanding of where there are gaps in the medical knowledge.

2) Read a bunch of scientific papers about cysts and biopsies and tests so that you understand the possibilities and the risks better.

3) Also read about medical errors and risks of surgeries. People following doctor's instructions is one of the leading causes of death in the USA. I read an article about it in JAMA a few years ago. There might be more up-to-date papers about it by now. Having a medical procedure done is not a neutral option when it comes to affecting your chances to continue living.

For example, here's a paper that indicates that prostate biopsies could increase the mortality rate in men. This is just one study, not enough information to make an informed decision.

Boniol M, Boyle P, Autier P, Perrin P. Mortality at 120 days following prostatic biopsy: analysis of data in the PLCO study. Program and abstracts of the 2013 American Society of Clinical Oncology Annual Meeting and Exposition; May 31-June 4, 2013; Chicago, Illinois. Abstract 5022. http://onlinelibrary.wiley.com/doi/10.1002/ijc.23559/full

comment by Jiro · 2015-06-23T18:53:55.320Z · LW(p) · GW(p)

(deleted--everything I said was said by others already)

comment by minusdash · 2015-06-23T15:52:34.232Z · LW(p) · GW(p)

Saying 99.9999% seems a mouthful. Would you have preferred an answer like this instead: https://www.youtube.com/watch?v=7sWpSvQ_hwo :)

Replies from: adamzerner
comment by Adam Zerner (adamzerner) · 2015-06-23T16:03:51.074Z · LW(p) · GW(p)

If brevity was the issue, I wouldn't have expected him to say 5 instead of 9. And I would have expected him to use stronger language than he did. My honest impression is that he thinks that the chances that it's something are really small, but nothing approaching infinitesimally small.

Replies from: minusdash
comment by minusdash · 2015-06-23T16:38:28.088Z · LW(p) · GW(p)

I'd say an expert in any field has better intuitions (hidden, unverbalized knowledge) than what they can express in words or numbers. Therefore, I'd assume that the decision that it's not worth doing the examination should take priority over the numerical estimate that he made up after you asked.

It may be better to ask the odds in such cases, like 1 to 10,000 or 1 to a million. Anyway, it's really hard to express our intuitive, expert-knowledge in such numbers. They all just look like "big numbers".

Another problem is that nobody is willing to put a dollar value on your life. Any such value would make you upset (maybe you are the exception, but most people probably would be). Say the examination costs $100 (just an example). Then if he's 99.95% sure you aren't sick, and 0.05% sure you are dying, and he sends you home, then he (or rather, your insurance) values your life at less than $200,000. This is a very rough estimation, but it seems in the right ballpark for how the whole population seems to value a stranger's life. Of course it all depends on how much insurance you pay, how expensive the biopsy is, etc. Maybe you are right that you deserve to be examined for your money, maybe not. But people tend to avoid this sort of discussion because it is very emotionally loaded. So we mainly mumble around the topic.
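
(The implied figure just divides the cost of the test by the probability of the fatal condition it would have caught; the $100 cost is the comment's illustrative number.)

    test_cost = 100                      # dollars, illustrative
    p_fatal_missed = 0.0005              # the 0.05% "you are dying" case
    implied_value_of_life = test_cost / p_fatal_missed
    print(implied_value_of_life)         # 200000.0 -- skipping the test only makes
                                         # sense if a life is valued below ~$200k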

People are dying all the time out of poverty, waiting on waiting lists, not having insurance, not being able to pay for medication. But of course people who have more money can override this by buying better medical care. Depending on the country there are legal and not-so-legal methods to get better healthcare. You could buy a better package legally, put some cash in the doctor's coat, etc.

You need to consider that the people who'd do your biopsy can do other things as well, for example work on someone's biopsy who has a chance of 1% of dying instead of your 0.05% (assuming this figure is meaningful and not just a forced, uncalibrated guess).

If you confronted your doctor with these things, he'd probably prefer to just revoke that probability estimate and just say his expert opinion is that you don't need the biopsy, end of story. It would be very hard for you to argue with this.

Replies from: ChristianKl, adamzerner
comment by ChristianKl · 2015-06-23T19:34:34.909Z · LW(p) · GW(p)

Depending on the country there are legal and not-so-legal methods to get better healthcare. You could buy a better package legally, put some cash in the doctor's coat, etc.

It's quite easy to get more expensive healthcare. On the other hand that doesn't mean the healthcare is automatically better.

If you are willing to pay for any treatment out of your own pocket, then a doctor can treat you in a way that's not being paid for by an insurance company because it's not evidence-based medicine.

Replies from: minusdash
comment by minusdash · 2015-06-23T22:38:17.302Z · LW(p) · GW(p)

It can still be evidence-based, just on a larger budget. I mean, you can get higher quality examinations, like MRI and CT even if the public insurance couldn't afford it. Just because they wouldn't do it by default and only do it for your money doesn't mean it's not evidence based. Evidence-based medicine doesn't say that this person needs/doesn't need this treatment/examination, it gives a risk/benefit/cost analysis. The final decision also depends on the budget.

comment by Adam Zerner (adamzerner) · 2015-06-23T16:47:42.440Z · LW(p) · GW(p)

Therefore, I'd assume that the decision that it's not worth doing the examination should take priority over the numerical estimate that he made up after you asked.

It seemed to me that the proposition was made under false assumptions. Specifically, I value my life way more than most people do, and I value the costs of time/money/pain less than most people do. He seemed to have been assuming that I value these things in a similar way to most people.

But people tend to avoid this sort of discussion because it is very emotionally-loaded. So we mainly mumble around the topic.

Yeah, I understand this now. Previously I hadn't thought enough about it. So given that I am willing to spend money for my health, and that I can't count on doctors to presume that, it seems like I should make that clear to them so they can give me more personalized advice.

Replies from: ChristianKl, Lumifer
comment by ChristianKl · 2015-06-23T19:41:57.131Z · LW(p) · GW(p)

Specifically, I value my life way more than most people do, and I value the costs of time/money/pain less than most people do. He seemed to have been assuming that I value these things in a similar way to most people.

How do you know? Because you do things like flossing every day? Healthcare economics quite frequently mean that a person prefers to pay more rather than less to signal to themselves that they do everything in their power to stay alive.

People quite frequently make bad health decisions because buying an expensive treatment feels like doing something to stay healthy, while it's much more difficult emotionally to do nothing.

Replies from: adamzerner
comment by Adam Zerner (adamzerner) · 2015-06-23T20:00:59.054Z · LW(p) · GW(p)

I understand that for a lot of people, the X isn't about Y thing applies. That investing in health might be about signaling to oneself/others something. But I assure you that I genuinely do care. Maximizing expected utility is a big part of how I make decisions, and I think that things that reduce the chances of dying have very large expected utilities (given the magnitude of death). That said, I'm definitely not perfect. I ate pizza for lunch today :/

comment by Lumifer · 2015-06-23T17:10:41.135Z · LW(p) · GW(p)

So given that I am willing to spend money for my health, and that I can't count on doctors to presume that, it seems like I should make that clear to them so they can give me more personalized advice.

"Willing to spend money" meaning that you're willing to pay out of pocket for medical procedures? Or that you are willing to fight your insurance so that it pays for things it doesn't think necessary?

And doctors are supposed to ignore money costs when recommending treatment (or lack of it) anyway. If you want "extra attention", I suspect that you would need to proactively ask for things. For example, you can start by doing a comprehensive blood screen -- and I do mean comprehensive -- including a variety of hormones, a metals panel, a cytokine panel, markers for inflammation, thyroid, liver, etc. etc. You will have to ask for it; assuming you're reasonably healthy, a normal doctor would not prescribe it "just so".

Replies from: adamzerner
comment by Adam Zerner (adamzerner) · 2015-06-23T17:13:17.530Z · LW(p) · GW(p)

I'm willing to spend out of pocket. More generally, I value my life a lot, and so I'm willing to undergo costs in proportion to how much I value my life.

Replies from: Lumifer
comment by Lumifer · 2015-06-23T17:46:42.855Z · LW(p) · GW(p)

I'm willing to undergo costs in proportion to how much I value my life.

You're constrained by the size of your pocket :-) Being willing to spend millions on saving one's life is not particularly relevant if your current bank balance is $5.17.

Very rich people can (and do) hire personal doctors. That, however, has its own failure modes (see Michael Jackson).

Replies from: adamzerner
comment by Adam Zerner (adamzerner) · 2015-06-23T18:47:18.586Z · LW(p) · GW(p)

You're constrained by the size of your pocket :-)

Yeah, I know. It's just hard to be more specific than that. I guess what I mean is that I am willing to spend a much larger portion of my money on health than most people are.

Replies from: Lumifer
comment by Lumifer · 2015-06-23T19:24:04.962Z · LW(p) · GW(p)

Is that a revealed preference? ;-)

comment by selylindi · 2015-06-25T22:59:36.473Z · LW(p) · GW(p)

Inspired by terrible, terrible Facebook political arguments I've observed, I started making a list of heuristic "best practices" for constructing a good argument. My key assumptions are that (1) it's unreasonable to expect most people to acquire a good understanding of skepticism, logic, statistics, or what the LW crowd thinks of as using words rightly, and (2) lists of fallacies to watch out for aren't actually much help in constructing a good argument.

One heuristic captured my imagination as it seems to encapsulate most of the other heuristics I had come up with, and yet is conceptually simple enough for everyone to use: Sketch it, and only draw real things. (If it became agreed-upon and well-known, I'd shorten the phrase to "Sketch it real".)

Example: A: "I have a strong opinion that increasing the minimum wage to $15/hr over ten years (WILL / WON'T) increase unemployment." B: "Oh, can you sketch it for me? I mean literally draw the steps involved with the real-world chain of events you think will really happen."

If you can draw how a thing works, then that's usually a very good argument that you understand the thing. If you can draw the steps of how one event leads to another, then that's usually a good argument that the two events can really be connected that way. This heuristic requires empiricism and disallows use of imaginary scenarios and fictional evidence. It privileges reductionist and causal arguments. It prevents many of the ways of misusing words. If I try to use a concept I don't understand, drawing its steps out will help me notice that.

Downsides: Being able to draw well isn't required, but it would help a lot. The method probably privileges anecdotes since they're easier to draw than randomized double-blind controlled trials. Also it's harder than spouting off and so won't actually be used in Facebook political arguments.

I'm not claiming that a better argument-sketch implies a better argument. There are probably extremely effective ways to hack our visual biases in argument-sketches. But it does seem that under currently prevailing ordinary circumstances, making an argument-sketch and then translating it into a verbal argument is a useful heuristic for making a good argument.

Replies from: ChristianKl, None, Elo
comment by ChristianKl · 2015-06-26T00:35:27.461Z · LW(p) · GW(p)

As far as I understand, CFAR teaches this heuristic under the name "Gears-Thinking".

Replies from: selylindi
comment by selylindi · 2015-06-28T01:22:28.236Z · LW(p) · GW(p)

Does that name come from the old game of asking people to draw a bike, and then checking who drew bike gears that could actually work?

comment by [deleted] · 2015-06-27T16:46:18.996Z · LW(p) · GW(p)

One thing you might want to consider is the reason people are posting on Facebook... usually, it's NOT to create a good argument, and in fact, sometimes a good, logical argument is counterproductive to the goal people have (to show their allegiance to a tribe).

comment by Elo · 2015-06-26T01:01:41.360Z · LW(p) · GW(p)

you might like www.yourlogicalfallacy.com

comment by NancyLebovitz · 2015-06-24T18:22:46.268Z · LW(p) · GW(p)

Can anyone think of a decision which might come up in ordinary life where Bayesian analysis and frequentist analysis would produce different recommendations?

Replies from: Vaniver, Douglas_Knight
comment by Vaniver · 2015-06-25T13:05:53.757Z · LW(p) · GW(p)

The core difference between B and F is what they mean by "probability." If you go to the casino, the Bs and the Fs will interpret everything the same way, but when you go to the stock market, the Bs and the Fs will want to use their language differently. It seems likely to me that most of the uncertainties that show up in everyday life are things that Bs would be comfortable assigning probabilities to, but Fs would be hesitant about.

comment by Douglas_Knight · 2015-06-25T22:22:56.585Z · LW(p) · GW(p)

When it comes to an action, you must structure your knowledge in Bayesian terms in order to compute an expected utility. It is only when discussing detached knowledge that other options become available.

Replies from: jsteinhardt
comment by jsteinhardt · 2015-06-26T04:42:24.020Z · LW(p) · GW(p)

??? This isn't true unless I misunderstood you. There are frequentist decision rules as well as Bayesian ones (minimax is one common such rule, though there are others as well).

Replies from: Douglas_Knight
comment by Douglas_Knight · 2015-06-26T06:40:36.566Z · LW(p) · GW(p)

In what sense is minimax frequentist?

Replies from: jsteinhardt
comment by jsteinhardt · 2015-06-26T07:38:12.720Z · LW(p) · GW(p)

From Wikipedia:

Consider the problem of estimating a deterministic (not Bayesian) parameter...

ETA: While that page talks about estimating parameters, most of the math holds for more general actions as well.
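
(A toy illustration of the distinction, with invented numbers: a minimax rule minimizes worst-case risk over the unknown parameter, while a Bayes rule minimizes risk averaged under a prior, and the two can pick different actions.)

    # risk[action][theta] = expected loss of taking that action if theta is true
    risk = {
        "A": {"theta1": 1.0, "theta2": 4.0, "theta3": 2.0},
        "B": {"theta1": 2.0, "theta2": 2.5, "theta3": 2.5},
    }
    prior = {"theta1": 0.6, "theta2": 0.2, "theta3": 0.2}   # assumed prior

    minimax_action = min(risk, key=lambda a: max(risk[a].values()))
    bayes_action = min(risk, key=lambda a: sum(prior[t] * risk[a][t] for t in prior))

    print(minimax_action, bayes_action)   # "B" (worst case 2.5) vs "A" (average 1.8)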

Replies from: Douglas_Knight
comment by Douglas_Knight · 2015-06-26T16:45:07.986Z · LW(p) · GW(p)

I don't think that "non-bayesian" is a common definition of "frequentist." In any event, it's not a useful category.

comment by JoshuaZ · 2015-06-23T13:35:51.837Z · LW(p) · GW(p)

According to new research, philosophers are apparently about as vulnerable as the general population to certain cognitive biases involved in making moral decisions. In particular, they are just as susceptible to the order of presentation affecting how moral or immoral they rate various situations. See a summary of the research here. The actual paper is unfortunately behind a paywall.

Replies from: Sarunas
comment by Sarunas · 2015-06-23T14:31:09.762Z · LW(p) · GW(p)

A paper "Philosophers’ Biased Judgments Persist Despite Training, Expertise and Reflection" (Eric Schwitzgebel and Fiery Cushman) is available here: http://www.faculty.ucr.edu/~eschwitz/SchwitzPapers/Stability-150423.pdf

Replies from: None
comment by [deleted] · 2015-06-23T15:47:07.247Z · LW(p) · GW(p)

Very interesting, thanks for finding it.

The methods and statistics look good (feel free to correct me). However, I wish the authors had controlled for gender. I don't think it would significantly change the results, but behavioral finance research indicates that men are more susceptible to certain behavioral biases than women:

https://faculty.haas.berkeley.edu/odean/papers/gender/BoysWillBeBoys.pdf

Admittedly, “Boys Will Be Boys” addresses overconfidence bias rather than framing and order biases.

comment by Houshalter · 2015-06-27T05:02:42.646Z · LW(p) · GW(p)

An interactive twitch stream of a neural network hallucinating. Or twitch plays Large Scale Deep neural net.

EDIT: Fixed link.

Replies from: ZankerH, Douglas_Knight
comment by ZankerH · 2015-06-27T19:59:18.762Z · LW(p) · GW(p)

You've messed up the link, this is it

http://www.twitch.tv/317070

comment by Douglas_Knight · 2015-06-28T21:41:26.488Z · LW(p) · GW(p)

Some more links: the blog post and a ten minute sample that you put on youtube. I imagine that there are many people who prefer youtube to twitch. In particular, I like the 2x setting on youtube.

Replies from: Houshalter
comment by Houshalter · 2015-06-30T06:24:36.348Z · LW(p) · GW(p)

I'm amazed you found that video since I haven't posted it anywhere yet. I'm still trying to figure out how to add more than 2 minutes of music to it.

Replies from: Douglas_Knight
comment by Douglas_Knight · 2015-06-30T21:17:14.871Z · LW(p) · GW(p)

I found it by putting the twitch title into the youtube search bar. I tried it because people copy all sorts of videos to youtube.

comment by robot-dreams · 2015-06-24T14:22:32.235Z · LW(p) · GW(p)

What do you all think of "General Semantics"? Is it worth e.g. trying to read "Science and Sanity"? Are there insights / benefits there that can't be found in "Rationality: AI to Zombies"?

Replies from: ChristianKl
comment by ChristianKl · 2015-06-24T15:30:02.965Z · LW(p) · GW(p)

Science and Sanity contains a lot of good insights that aren't in the Sequences. The problem is that it's not an accessible book: it's hard to read and a substantial time investment.

Replies from: John_Maxwell_IV
comment by John_Maxwell (John_Maxwell_IV) · 2015-06-25T04:55:06.344Z · LW(p) · GW(p)

Do you think this is an intrinsic property of the insights, or could someone compress the book in to something shorter, more readable, and almost as useful?

Replies from: ChristianKl
comment by ChristianKl · 2015-06-25T12:04:24.510Z · LW(p) · GW(p)

I don't think the problem is that the book is long. It's that it basically defines its own language and is written in that language. It's similar to a math textbook defining terms and then using those terms.

It defines, for example, the term "semantic reaction" and then abbreviates it as s.r. The gist is that if you say something, the meaning of what you say is the reaction that happens in the brain of your listener when he hears the words.

It's not hard to understand that definition on a superficial level. On the other hand it's hard to really integrate it. It's a fundamental concept used throughout the book.

comment by Lumifer · 2015-06-23T15:11:40.185Z · LW(p) · GW(p)

There is a paper out, the abstract of which says:

...Second, respondents significantly underestimated the proportion of [group X] among their colleagues. Third, [members of group X] fear negative consequences of revealing their ... beliefs to their colleagues. Finally, they are right to do so: In decisions ranging from paper reviews to hiring, many ... said that they would discriminate against openly [group X] colleagues. The more [group anti-X] respondents were, the more they said they would discriminate.

Before you go look at the link, any guesses as to what the [group X] is? X-/

Replies from: Jiro, ahbwramc, Vaniver, NancyLebovitz, Larks
comment by Jiro · 2015-06-23T18:56:16.839Z · LW(p) · GW(p)

I correctly guessed what X was. Because there's only one thing it could ever be, unless the paper was talking about very unusual subgroups like Jehovah's Witnesses in Mormon territory.

Replies from: Manfred, Lumifer, philh
comment by Manfred · 2015-06-23T21:11:51.525Z · LW(p) · GW(p)

Well, it could be creationist zoologists, or satanist school teachers, or transgender fashion models. But of course it's psychologists studying psychologists, and of course it's reiterating an interesting narrative we've seen before.

Replies from: fubarobfusco
comment by fubarobfusco · 2015-06-23T23:35:00.471Z · LW(p) · GW(p)

One would expect creationists to be underrepresented in zoology for a number of reasons, only one of which is that zoologists have negative beliefs about creationists and tend not to hire or encourage them. Others would include that creationists may avoid studying zoology because they find the subject matter unpleasantly contradictory to their existing commitments; and that some people previously inclined to creationism who study zoology cease to be creationists.

Replies from: None
comment by [deleted] · 2015-06-24T10:59:38.946Z · LW(p) · GW(p)

Anecdotally, I know at least one creationist zoologist, although I don't think he publishes creationist stuff. He doesn't stand out at all or have any noticeable trouble because of it. All the zoologists I know are weirder than the average person.

comment by Lumifer · 2015-06-23T19:22:14.908Z · LW(p) · GW(p)

there's only one thing it could ever be

That's an interesting observation, isn't it?

Replies from: Nornagest
comment by Nornagest · 2015-06-23T23:39:57.786Z · LW(p) · GW(p)

Between the word "beliefs" (which rules out most demographic groups), the word "openly" (which rules out anything you can't easily hide), and the existence of a plausible "anti-X" group (which rules out most multipolar situations), there's not too many possibilities left. The correct answer is the biggest, and most of the other plausible options are subsets of it.

I suppose it could also have been its converse, but you don't hear too much about discrimination cases going that way.

comment by philh · 2015-06-24T11:03:04.123Z · LW(p) · GW(p)

I think that ngurvfgf would have been a plausible X in some places (and perhaps the opposite in others), but the correct one was the first that came to mind and the one I considered most likely.

comment by ahbwramc · 2015-06-23T15:25:09.480Z · LW(p) · GW(p)

ROT13: V thrffrq pbafreingvirf pbeerpgyl, nygubhtu V'z cerggl fher V unq urneq fbzrguvat nobhg gur fghql ryfrjurer.

comment by Vaniver · 2015-06-23T16:12:10.913Z · LW(p) · GW(p)

Cbyvgvpnyyl pbafreingvir.

comment by NancyLebovitz · 2015-06-24T12:30:45.487Z · LW(p) · GW(p)

I haven't looked. Pbafreingvirf.

comment by Larks · 2015-06-23T23:26:54.503Z · LW(p) · GW(p)

fbpvny pbafreingvirf

comment by John_Maxwell (John_Maxwell_IV) · 2015-06-25T22:49:53.191Z · LW(p) · GW(p)

Sam Altman's advice for ambitious 19 year olds.

Replies from: gudamor, None
comment by gudamor · 2015-06-26T01:16:27.189Z · LW(p) · GW(p)

I don't know of Sam Altman, so maybe this criticism is wrong, but the quote: "If you join a company, my general advice is to join a company on a breakout trajectory. There are usually a handful of these at a time, and they are usually identifiable to a smart young person." Absent any guides on how to identify breakout-trajectory companies, this advice seems unhelpful. It feels like: "Didn't work for you? You must not have been a smart young person or you would have picked the right company."

Paired with the paragraph below on not letting salary be a factor, I am left with the suspicion that Sam runs what he believes to be a company with a 'breakout trajectory' and pays noncompetitive salaries.

Now to find a way to test that suspicion.

Replies from: None, John_Maxwell_IV
comment by [deleted] · 2015-06-29T12:48:25.540Z · LW(p) · GW(p)

I have read something like this on a rationalist blog somewhere. Basically it was a type of advice like "you want to win the race? well, just run fast! just put one foot in front of the other quicker than others do, d'uh!"

Maybe we need a name for this.

comment by John_Maxwell (John_Maxwell_IV) · 2015-06-26T01:24:06.849Z · LW(p) · GW(p)

Sam Altman is the president of Y Combinator.

I think the way to look for a company on a breakout trajectory is to find a company that is growing fast and getting a lot of buzz but has not become established and is not thoroughly proven yet. Even better might be to find a company that's growing fast but not getting a lot of buzz, but that's probably trickier.

Replies from: Douglas_Knight
comment by Douglas_Knight · 2015-06-26T06:46:10.410Z · LW(p) · GW(p)

As the president of YC, he doesn't really hire anyone, but he does fund lots of companies, and his advice could be interpreted as: work for a YC company.

Replies from: John_Maxwell_IV
comment by John_Maxwell (John_Maxwell_IV) · 2015-06-26T09:08:29.666Z · LW(p) · GW(p)

The more precise cynical interpretation would be "work for a promising early stage YC company". Note that he also could have told you to work for a late stage one or apply to YC in order to start one. But it's probably true that working at a promising early-stage YC company is what would most benefit YC on the margin. (Although if what benefits YC most on the margin is what generates the most value, then generating more value for YC also seems like a good way to generate enough value that you capture a significant chunk.)

comment by [deleted] · 2015-06-29T12:40:30.633Z · LW(p) · GW(p)

This type of advice is really not honest enough, I think. Let me try an honest one:

1) Move to America if you don't already live there. Bluff your way through immigration officers and whatnot.

2) Move to Silicon Valley if you don't already live there. Deal with the costs of living there / outside your parents' house somehow.

3) Acquire enough money, lump sum or regular income, that you can focus on chasing shiny things for years without pay. Consider getting reincarnated into a well-to-do family; that helps.

4) The above is still true if you intend to join a company. Unless you want to join the kind of company where you are okay with HR drones keyword-buggering and credential-combing your resume and requiring 3 years of experience in technologies 2 years old, which is not really what the truly ambitious like to do, those years will be spent on getting to know excellent founders, and making the kind of stuff on your own that convinces them to let you join them. I.e. chasing shiny things without pay before you can join the right kind of company.

5) Be a programmer, because there are very few professions where you can just casually build things as you see fit. As a programmer you get away with not having access to anything but your brain, the net, and a laptop. If you are e.g. a sculptor and your dream is to build a 50m tall Darth Vader statue out of bronze, well, that is going to require some harder-to-acquire stuff as well. If you took all this building a bit too literally and you graduated in civil engineering, your chances of starting out on your own after graduating are nil, these kinds of startups don't exist, and you will probably work 10 years at the construction equivalent of Microsoft before you can try to start out on your own and finally do something interesting. So be a programmer. Don't like programming and computer technology so much? Still be a programmer, or at the very least figure out real hard how to graduate in something that 1) can be made without really expensive inputs and 2) scales up readily to serving many customers simply by gradually renting more stuff and hiring more people, staying ahead of cash flow. (John, this MealSquares is an excellent example of a non-programming activity that satisfies these criteria. But imagine if your knack was for designing dams. People must invest a ton of money into building them, so they will not hire a young nobody, and you can only design one dam at a time; it does not scale up to serving many customers simultaneously.) Have no idea what could be like this, besides programming? Be a programmer. Or a musician.

6) Once all these preconditions are met, you are ready to read Sam Altman and similar folks (Graham etc.).

comment by DataPacRat · 2015-06-24T18:32:46.318Z · LW(p) · GW(p)

Seeking writing advice: Tropes vs writing block?

I've started writing bits and pieces for S.I. again, but not nearly at the rate I was writing before my hiatus.

I'm beginning to wonder if I should cheat a bit, and deliberately leave some of the details I'm having trouble getting myself to write about vague, and explain it away with some memory problems of Bunny-the-narrator for that period. Goodness knows there are plenty of ways Bunny's brain has been fiddled with so far, so it's not without precedent; and if it gets me over the hump and into full-scale writing again, it might be worth including the trope for that reason alone, let alone adding another mental issue to play with narratively.

Anyone have any thoughts?

Replies from: ZeitPolizei, None
comment by ZeitPolizei · 2015-06-25T13:17:46.615Z · LW(p) · GW(p)

Would it maybe help if you left some of the details vague at first, to get back into writing, and went back later to rewrite those parts?

Replies from: DataPacRat
comment by DataPacRat · 2015-06-25T17:45:37.148Z · LW(p) · GW(p)

That seems to be the default that I'm settling on. I'm jotting down the plot points I want to happen in such sections, marking them so I know that I have to go back to that, and working on whatever I /can/ get myself to work on in the meantime.

comment by [deleted] · 2015-06-26T12:08:36.836Z · LW(p) · GW(p)

From the way things seem given your recent posts about struggling with getting words onto the page, I would suggest doing anything that actually gets you moving in that direction. If you are stuck on one particular bit, by all means skip it for now. Whether that means incorporating this into the narrative, or coming back later for clean-up, depends on the product itself (I haven't read the work you are talking about).

A more general aside: I've found myself in a very similar position, finding it incredibly hard to put words on the page yet needing to do so more and more urgently. I've seen a few comments you made before about preparing optimal writing situations and planning for them - I did exactly the same and in retrospect it seems this was a bad strategy for me. Mainly because such preparations got me thinking more and more about providing an optimal situation for written productivity: in essence setting up small "writing retreats" now and again. This became a self-perpetuating loop of non-writing, because doing so provided perfect excuses for NOT writing at any other time.

A friend who is a (now retired) writer suggested that instead, I work on writing despite distractions, rather than constraining my writing effort to those situations where all distractions are minimised. In alternating weeks I tried the different techniques (A,B,B,A, where A=my old approach of writing in optimal situations and B=explicit attempt to write in distracting environments I wouldn't consider suitable for "A"). It turned out that B>A both in minutes spent writing (+125%) and in wordcount (+160%). Quality of work under "B" might have been lower but I don't seem to have a block in editing and revising, only in first drafting.

comment by hydkyll · 2015-06-23T21:11:16.041Z · LW(p) · GW(p)

I want to do a PhD in Artificial General Intelligence in Europe (not machine learning or neuroscience or anything with neural nets). Anyone know a place where I could do that? (Just thought I'd ask...)

Replies from: Kaj_Sotala, jsteinhardt, jsteinhardt
comment by Kaj_Sotala · 2015-06-24T04:04:02.840Z · LW(p) · GW(p)

IDSIA / University of Lugano in Switzerland is where e.g. Schmidhuber is. His research is quite neural network-focused, but also AGI-focused. Also Shane Legg (now at DeepMind, one of the hottest AGI-ish companies around) graduated from Lugano with a PhD thesis on machine superintelligence.

"AGI but not machine learning or neuroscience or anything with neural nets" sounds a little odd to me, since the things you listed under the "not" seem like the components you'll need to understand if you want to ever build an AGI. (Though maybe you meant that you don't want to do research focusing only on neuroscience or ML without an AGI component?)

comment by jsteinhardt · 2015-06-25T05:41:29.375Z · LW(p) · GW(p)

Just wondering why you don't want to do machine learning? Many ML labs have at least some people who care about AI, and you'll get to learn a lot of useful technical material.

comment by Algon · 2015-06-26T19:14:35.888Z · LW(p) · GW(p)

A little while back, someone asked me 'Why don't you pray for goal X?' and I said that there were theological difficulties with that and since we were about to go into the cinema, it was hardly the place for a proper theological discussion.

But that got me thinking, if there weren't any theological problems with praying for things, would I do it? Well, maybe. The problem being that there's a whole host of deities, with many requiring different approaches.

For example, if I learnt that the God of the Old Testament was right, I would probably change my set of acceptable actions very, very quickly. Perhaps another reasonable response would be to try and very carefully convince this God to change its mind about a couple of things, as the God of the Old Testament is capable of change, if I remember rightly.

On the other end of the spectrum, what about the Greek gods? Well, I think it would still be a good idea to try and convince them not to be, you know, egotistical tyrants. Or failing that, humanity should probably try and contain them in some fashion, because who'd want someone like Zeus going about as they pleased?

And if Aristotle's Prime mover were real... Well, I guess you'd just ignore it.

Anyway, I think it's a pretty interesting topic, if not a very useful one.

Any thoughts on how you'd react to any of humanity's collection of deities?

comment by William_S · 2015-06-24T14:49:26.854Z · LW(p) · GW(p)

Does anyone know about any programs for improving confidence in social situations and social skills that involve lots of practice (in real-world situations or in something scripted/roleplayed)? Reading through books on social skills (e.g. How to Win Friends and Influence People) seems to provide a lot of tips that would be useful to implement in real life, but they don't seem to stick without actually practicing them. The traditional advice to find a situation in your own life that you would already be involved in hasn't worked well for me, because such situations are missing features that would be good for learning (they're sporadic, not repeatable, you can't get feedback on your performance from someone who knows what they are doing, there's a lot going on beyond the aspects you want to focus on, things can move on without giving you time to think, etc.). For example, this might look like a workshop that involved a significant amount of time pairing up with other participants and practicing small talk, with breaks in between to cool down, get feedback, and learn new tips to practice in later rounds.

Replies from: None, Bryan-san, ChristianKl, CAE_Jones, Strangeattractor
comment by [deleted] · 2015-06-27T17:02:53.328Z · LW(p) · GW(p)

There's a number of "game" related courses that take this approach. Most of these programs involve going out, and continually approaching and interacting with people, with specific goals in mind.

There's the Connection Course (this one is probably the closest to what you're looking for, as he's reworked it to remove all "gamey" stuff and just focused on social interactions): http://markmanson.net/connection-course

There's the Collection of Confidence: http://www.amazon.com/The-Collection-Of-Confidence-HYPNOTICA/product-reviews/B000NPXWT8

Stylelife academy: http://web.stylelife.com/

Ars Amorata: http://www.zanperrion.com/arsamorata.php

and a whole bunch more.

Edit: in my area at least, there are also practice groups for Non-violent Communication on meetup.com

comment by Bryan-san · 2015-06-24T16:39:43.284Z · LW(p) · GW(p)

The Rejection Game using Beeminder can be a good start for social skills development in general

If you're interested in a specific area of social interactions then finding a partner or two in that area could help out. Toastmasters, pua groups, book clubs, and improv groups fall into this category.

Alternatively, obtaining a job in sales can take you far

Replies from: William_S
comment by William_S · 2015-06-24T16:55:12.520Z · LW(p) · GW(p)

My impression of Toastmasters is that it might be similar to what I'm looking for, but only covers public speaking.

comment by ChristianKl · 2015-06-24T16:31:56.420Z · LW(p) · GW(p)

Advice about picking in-person training is location-dependent. Without knowing what's available where you live, it's impossible to give good recommendations.

Replies from: William_S
comment by William_S · 2015-06-24T16:48:44.058Z · LW(p) · GW(p)

Recommendations for in person training around the Bay Area would be useful (as I'm likely to end up there).

Replies from: ChristianKl
comment by ChristianKl · 2015-06-26T14:59:48.515Z · LW(p) · GW(p)

California is a good place. A lot of personal development frameworks come from California. It's very likely that there are good things available in California that are not known outside of it. Ask locals at LW meetups for recommendations.

There seem to be regular Authentic Relating/Circling events in Berkeley: https://www.facebook.com/ARCircling We had a workshop in that paradigm at our European LW Community Event and it was well liked. I also attended another workshop in that framework in Berlin. Describing the practice isn't easy, but its goal is to have deep conversations with other people that produce the feeling of having a relationship with them.

I have spent multiple years in Toastmasters and wouldn't recommend it if your goal isn't being on stage. Toastmasters meetings usually have 20+ people in a room and only one person speaking at a time. That means relatively little speaking time per person.

Toastmasters is also very structured. The ability to give a good 2-minute Table Topics speech didn't, for me, translate into the ability to tell a funny story in a small-talk context. Toastmasters has a nice and fun atmosphere, but it feels a bit artificial in a way that Circling isn't. Trying to cut the number of "Ahm"s in a speech by focusing on the "Ahm" instead of focusing on the underlying emotional layer is, from my perspective, suboptimal.

Bryan-san also recommended attending PUA groups. It's hard to really know the relevant outcomes. There are people who do have some success via that framework, but it also makes some people more awkward. If you do PUA cold approaching, you might get feedback from another PUA, but you usually don't get honest feedback from the actual woman with whom you are interacting. Authentic Relating, on the other hand, provides a framework that isn't antagonistic.

Replies from: OrphanWilde
comment by OrphanWilde · 2015-06-26T16:02:28.607Z · LW(p) · GW(p)

PUA success varies by region and local culture. In some urban areas, anecdotally, women have started judging men's PUA "game".

I think it pattern-matches on a "correct" behavior, but is self-defeating; it pattern-matches on the idea that women, like men, want to have casual sex. The "correct" behavior is, indeed, being something of a jerk, but it is self-defeating because it assumes rudeness is the desired quality, rather than a signal of a desired quality: jerks aren't likely to pester you for follow-up dates, which is to say, they are actually interested in strictly casual sex.

It's self-defeating, because as soon as men who are interested in more meaningful relationships start utilizing the technique of being a jerk, being a jerk stops being a useful signal of -not- being interested in more meaningful relationships. (Being -very good- at being a jerk, on the other hand, probably -does- pattern-match pretty well with interest in strictly casual sex, hence the anecdotal accounts of women judging PUA "game".)

The whole thing gets messier on account of individual differences. Some women want to be hit on, some don't, some want one approach, some want another, some are receptive to the idea of longer-term relationships, some aren't - in short, women are people, too. No single "framework" is going to accommodate everybody's desires, and those who push a monoculture ideal are being narrow-minded. And dating signaling is, frankly, terrible, and often abused, intentionally or unintentionally. (Women signaling desire for casual sex to get free drinks, men signaling desire for long-term relationships to get casual sex, for two of the common complaints.)

Getting outside that, my personal practice is to strike up random conversations with strangers; small talk is the grease that gets conversation going. Treat small talk as a skill with a toolbox of techniques. Your toolbox should contain a list of standard questions for strangers: what do you do for a living, who are you rooting for in (current sports competition), where were you born, how did you end up in this hellhole, etc. The more you do it, the better you get, or at least the more comfortable. Small talk with other smokers while smoking helped my conversational abilities immensely, although for obvious reasons I wouldn't necessarily advocate that.

Replies from: ChristianKl, VoiceOfRa
comment by ChristianKl · 2015-06-26T20:02:15.160Z · LW(p) · GW(p)

The problem is not only about the woman but about the man. Quite a few men who go into PUA never end up in a state where they are comfortable striking up random conversations with strangers.

Recently I went to a local "get out of your comfort zone" meetup in Berlin led by someone who authored a book on comfort zone expansion and who has a decade in the personal development industry. Surprisingly, we didn't go out to start conversations with strangers. His main argument against going down that road was that it often makes people without previous experience go through those exercises in a dissociated way instead of an associated way.

PUA quite often leads to people trying to influence the woman instead of paying attention to their own emotions and dealing with those emotions in a constructive fashion.

It's certainly possible to have toolbox smalltalk and do okay with it. Developing genuine curiosity for the other person and letting that curiosity guide your questions is both more fun and more likely to create a connection.

No single "framework" is going to accommodate everybody's desires, and those who push a monoculture ideal are being narrow-minded.

I'm not advocating monoculture. Nor am I saying that nobody should do PUA. It's just worth noting that PUA doesn't deliver for many people who buy into it.

Replies from: OrphanWilde
comment by OrphanWilde · 2015-06-26T20:10:18.304Z · LW(p) · GW(p)

It's certainly possible to have toolbox smalltalk and do okay with it. Developing genuine curiosity for the other person and letting that curiosity guide your questions is both more fun and more likely to create a connection.

The toolbox gives you starting points; it's not meant to be the entirety of the conversation. It's relatively easy to maintain a conversation, harder to start one. Curiosity doesn't begin until you have something to be curious about in the first place.

I agree that PUA doesn't give people what they're looking for, most of my comment was intended to explain why. (Short summary: It's about sex, not conversation.)

Replies from: ChristianKl
comment by ChristianKl · 2015-06-26T20:46:24.554Z · LW(p) · GW(p)

When standing at a bus stop, would you ask a stranger: "What do you do for a living?" To me that doesn't seem like a good conversation starter.

"Do you know in many minutes the bus will arrive" can be a curiosity based question, that's socially acceptable to ask. I'm standing next to a stranger and that question comes into my mind, I notice that I have a question were I'm interested in the answer. I can either look at my phone and look at the bus timetable to figure out the answer or I can ask the other person.

There are many instances like that where you can choose the social way to deal with the situation.

I agree that PUA doesn't give people what they're looking for, most of my comment was intended to explain why. (Short summary: It's about sex, not conversation.)

I think even for people who think they want sex, it often doesn't deliver on its promise.

comment by VoiceOfRa · 2015-07-01T02:40:40.815Z · LW(p) · GW(p)

I think it pattern-matches on a "correct" behavior, but is self-defeating; it pattern-matches on the idea that women, like men, want to have casual sex. The "correct" behavior is, indeed, being something of a jerk, but it is self-defeating because it assumes rudeness is the desired quality, rather than a signal of a desired quality: jerks aren't likely to pester you for follow-up dates, which is to say, they are actually interested in strictly casual sex.

The reason women who want casual sex are attracted to jerks isn't that they aren't likely to want follow-up dates; it's that if getting the father to help raise the kids is out of the question, you want the best possible sperm. Granted, today the woman is likely to use a condom or abort because she doesn't want children, but that's adaptation execution for you.

Replies from: OrphanWilde
comment by OrphanWilde · 2015-07-01T13:55:20.649Z · LW(p) · GW(p)

Are you an evolutionary strategy? Do your preferences all reduce down to evolutionary strategies?

Replies from: VoiceOfRa
comment by VoiceOfRa · 2015-07-02T03:40:26.008Z · LW(p) · GW(p)

My preferences are shaped by my genes (which were shaped by evolution), and my experiences as interpreted by the systems built by my genes.

comment by CAE_Jones · 2015-06-24T15:21:33.266Z · LW(p) · GW(p)

A subset of Speech Therapy (especially for Autism Spectrum) covers exactly this sort of thing. I rather doubt it's what you're looking for, even if it's an option, but it fits what you described almost perfectly. The major issues would be the tendency toward a more clinical setting, only being an hour or so a week, the limited pool of people to practice with, and establishing your existing skills.

comment by Strangeattractor · 2015-06-26T07:41:45.921Z · LW(p) · GW(p)

Sometimes career centres at universities or community colleges have workshops to practice job interviewing and networking. You could see if there's something like that near you.

comment by polymathwannabe · 2015-06-23T19:44:16.960Z · LW(p) · GW(p)

Can we look at Orbán's Hungary as a real-life laboratory of whether NRx works in practice?

Replies from: knb, Lumifer, ChristianKl
comment by knb · 2015-06-28T00:31:04.176Z · LW(p) · GW(p)

I recall reading somewhere (slatestarcodex, I think) that the neoreactionists have three main strains: ethnocentric, techno-futurist/capitalist, and religious-authoritarian. In light of that, I wonder if Israel isn't a better example than Hungary. Israel is technologically advanced but also a strict ethnostate with some theocratic elements. Like Hungary, Israel is probably too democratic to really qualify.

Replies from: VoiceOfRa
comment by VoiceOfRa · 2015-07-01T01:59:30.932Z · LW(p) · GW(p)

The problem with Israel is that the religious elements are based on a religion still optimized for exile rather than being a national religion. They still haven't rebuilt the temple, for crying out loud.

comment by Lumifer · 2015-06-23T19:56:47.159Z · LW(p) · GW(p)

Why do you think Orbán's Hungary is a good example of NRx ideas implemented?

Replies from: polymathwannabe
comment by polymathwannabe · 2015-06-23T20:03:43.533Z · LW(p) · GW(p)

Example and example.

Replies from: Lumifer, None
comment by Lumifer · 2015-06-23T20:06:59.293Z · LW(p) · GW(p)

So why not Putin himself? Or the Belorussian guy? Or any of the Central Asian rulers? If the criterion is rejection of liberal democracy, why not China?

Replies from: polymathwannabe
comment by polymathwannabe · 2015-06-23T20:13:23.011Z · LW(p) · GW(p)

Those countries were never very liberal to begin with, so their departure from Western values doesn't look like what the experiment needs. Hungary, on the other hand, has a solid history of resistance to totalitarianism that only in the past half decade has had to face the threat of dictatorship.

Replies from: Viliam
comment by Viliam · 2015-06-24T15:08:17.308Z · LW(p) · GW(p)

There is more to NRx than just giving up liberal values. For example, Hungary still has elections that this guy has to win, so I guess they would still classify the country as "demotist".

When they make a revolution, abolish democracy, declare Orbán a hereditary king, and possibly when he hires Ernő Rubik as Chief Royal Scientist to solve all the country's problems, then we'll have a good example.

comment by [deleted] · 2015-06-30T07:58:30.082Z · LW(p) · GW(p)

AFAIK NRx are quasi-libertarians in the Hoppean sense (or Pinochetian sense), who largely want to use political authoritarianism for economic libertarianism. Orban is pretty much the opposite - an economic statist, on a nationalist basis. Socially they can be similar, but economically not. Orban is closer to US paleocons like Pat Buchanan, who are not full believers in free markets; they accept economic intervention, just not on a left-wing / egalitarian basis but on a nationalist-protectionist basis, e.g. not shipping jobs abroad.

I admit this is a bit complicated, because economic libertarianism and illibertarianism mesh with different ideologies depending on what aspect of non-intervention they focus on. For example, those US right-wingers who focus primarily on low taxes and social spending are closer to Orban; those who want all kinds of spending low, not just social spending, are not so close; those who focus on free trade are far away from him; and those who focus on privatizing things are the farthest - the Eastern European right wing tends to be anti-privatization, because privatization tends to lead to foreigners acquiring things, and that does not mesh with their nationalism well.

It's a bit complicated.

But I see the primary difference as this: Orban plays the man-of-the-people role, talks about a "plebeian" democracy, and frequently asks voters their opinion on issues, so he would be an NRx "demotist". He plays the Little Guy against the liberal elites, a role that is perhaps closer to Tea Party folks. In short, he is far more anti-liberal than anti-democratic; he plays more the role of a rural conservative democrat against aristocratic liberal elites, and his primary goal seems to be strengthening the national state against international liberal capitalism. He is very much the anti-Soros, and that is explicit (there are few people the Eastern European Right hates more than George Soros, both because of his liberal views and because of his capitalist exploits).

European terminology tends to call this all populism. Anti-liberalism both in lifestyle and economics, focusing on the working class guy who is both anti-capitalist and conservative/traditional in lifestyle, with a rural tinge.

And I don't think populism and NRx would mesh well, unless I'm really ignoring a big aspect of NRx; e.g. Anissimov looks like an anti-populist pro-aristocrat to me.

Replies from: VoiceOfRa
comment by VoiceOfRa · 2015-07-01T01:55:00.852Z · LW(p) · GW(p)

I'm not quite an NRx, but from what I hear about him I like Orban.

Replies from: None
comment by [deleted] · 2015-07-01T07:33:20.723Z · LW(p) · GW(p)

As long as you don't care much about economic libertarianism, privatizing all the things etc. but only social conservatism, you can be on the same page.

Admittedly, the whole economic libertarianism thing is different in the center vs. the periphery of globalization. In the center, such as the US, where businesses are owned by people of those countries, anti-libertarianism usually means egalitarianism. In the periphery, where businesses are usually foreign-owned, anti-libertarianism usually means economic nationalism, protectionism. The latter is culturally far more palatable for culturally conservative people, but Rothbard types would still be disgusted by it.

BTW you see the same story in a far larger and more transparent case in Russia. Classical liberalism / libertarianism is equated with Yeltsin, that in turn is equated with selling all the things to foreigners, and his memory is very much hated on the Russian Right. They may be down with the types of libertarianism that are mostly about tax cuts, but they really draw the line at letting foreigners get a lot of economic influence. (Not that Yeltsin was anywhere near being a principled libertarian - he just really liked selling things. I think the only principled libertarian to the east of Germany is Vaclav Klaus.)

comment by ChristianKl · 2015-06-26T14:13:19.657Z · LW(p) · GW(p)

Hungary is still a democracy with elections that the OECD can inspect. It's also still a member of the EU. That means it's subject to all sorts of legislation from Brussels and to action by the EU Court of Human Rights.

It seems like Hungary has to pay billions to the churches due to a verdict of the EU Court of Human Rights.

comment by [deleted] · 2015-06-23T13:26:13.008Z · LW(p) · GW(p)

this was an unhelpful comment, removed and replaced by this comment

Replies from: None, Strangeattractor
comment by [deleted] · 2015-06-27T16:54:18.277Z · LW(p) · GW(p)

If you just want a basic "display information" website, go with wordpress.

If you're looking to do a full web-app, I'd recommend either Drupal, or Wordpress with the Toolset plugins.

comment by Strangeattractor · 2015-06-26T07:43:53.907Z · LW(p) · GW(p)

Wordpress is open source. That's a good thing, and important.

comment by cgag · 2015-06-23T05:29:02.348Z · LW(p) · GW(p)

I've mostly been here for the sequences and interesting rationality discussion, and I know very little about AI outside of the general problem of FAI, so apologies if this question is extremely broad.

I stumbled upon this facebook group (Model-Free Methods) https://www.facebook.com/groups/model.free.methods.for.agi/416111845251471/?notif_t=group_comment_reply discussing a recent LW post, and they seem to cast LW's "reductionist AI" approach to AI in a negative light compared to their "neural network paradigm".

These people seem confident deep learning and neural networks are superior to some unspecified LW approach. Can anyone give a high level overview of what the LW approach to AI is, possibly contrasted with theirs?

Replies from: Manfred, hydkyll
comment by Manfred · 2015-06-23T17:20:27.666Z · LW(p) · GW(p)

There isn't really a "LW approach to AI," but there are some factors at work here. If there's one universal LW buzzword, it's "Bayesian methods," though that's not an AI design; one might call it a conceptual stance. There's also LW's focus on decision theory, which, while still not an AI design, is usually expressed as short, "model-dependent" algorithms. It would also be nice for a self-improving AI to have a human-understandable method of value learning, which leads to more focus diverted away from black-box methods.

As to whether there's some tribal conflict to be worried about here, nah, probably not.

comment by hydkyll · 2015-06-23T21:08:47.376Z · LW(p) · GW(p)

I think this sums up the problem. If you want to build a safe AI you can't use neural nets because you have no clue what the system is actually doing.

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2015-06-24T03:54:05.692Z · LW(p) · GW(p)

If we genuinely had no idea of what neural nets were doing, NN research wouldn't be getting anywhere. But that's obviously not the case.

More to the point, there's promising-looking work going on at getting a better understanding of what various NNs actually represent. Deep learning networks might actually have relatively human-comprehensible features on some of their levels (see e.g. the first link).

Furthermore it's not clear that any other human-level machine learning model would be any more comprehensible. Worst case, we have something like a billion variables in a million dimensions: good luck trying to understand how that works, regardless of whether it's a neural network or not.
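
To make that a bit more concrete, here's a minimal toy sketch (my own illustration, not taken from the work linked above, and nowhere near the scale of real deep nets): train a tiny one-hidden-layer network on a synthetic task where only two of the eight input features matter, then read the learned weights to see which inputs each hidden unit latched onto. All the names and numbers below are made up for the example.

    # Toy sketch: "look inside" a tiny neural net after training (illustrative only).
    import numpy as np

    rng = np.random.RandomState(0)

    # Synthetic data: 8 input features, but the label depends only on features 0 and 1.
    X = rng.normal(size=(2000, 8))
    y = (X[:, 0] + X[:, 1] > 0).astype(float)

    # One hidden layer of 4 tanh units, trained by plain gradient descent on squared error.
    W1 = rng.normal(scale=0.1, size=(8, 4))
    w2 = rng.normal(scale=0.1, size=4)
    for _ in range(2000):
        h = np.tanh(X @ W1)                       # hidden activations
        err = h @ w2 - y                          # prediction error
        grad_w2 = h.T @ err / len(X)
        grad_h = np.outer(err, w2) * (1 - h**2)   # backprop through tanh
        grad_W1 = X.T @ grad_h / len(X)
        w2 -= 0.5 * grad_w2
        W1 -= 0.5 * grad_W1

    # For each hidden unit, which inputs carry most of the weight?
    # On this toy task the large weights should concentrate on features 0 and 1.
    for j in range(4):
        top = np.argsort(-np.abs(W1[:, j]))[:2]
        print("hidden unit", j, "most influential inputs:", top, "weights:", W1[top, j].round(2))

Real interpretability work obviously needs far more sophisticated tools than sorting a weight matrix, but the point stands: the parameters are inspectable rather than literally opaque.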

comment by G0W51 · 2015-06-28T06:20:46.998Z · LW(p) · GW(p)

Perhaps it would be beneficial to introduce life to Mars in the hope that it could eventually evolve into intelligent life in the event that Earth becomes sterilized. There are some lifeforms on Earth that could survive on Mars. The Outer Space Treaty would need to be amended to make this legal, though, as it currently prohibits placing life on Mars. That said, I find it doubtful that intelligent life would ever evolve from the microbes, given how extreme Mars's conditions are.

Replies from: Unknowns
comment by Unknowns · 2015-06-28T09:17:18.669Z · LW(p) · GW(p)

If you want to establish intelligent life on Mars, the best way to do that is by establishing a human colony. Obviously this is unlikely to succeed but trying to evolve microbes into intelligent life is less likely by far.

Replies from: ChristianKl
comment by ChristianKl · 2015-06-28T11:16:33.905Z · LW(p) · GW(p)

The likelihood of success of establishing a human colony depends on the timeframe.

If there's no major extinction event, I would be surprised if we don't have a human Mars colony in 1000 years. On the other hand, having a colony in the next 50 years is a lot less likely.

comment by eternal_neophyte · 2015-06-27T00:51:04.104Z · LW(p) · GW(p)

Can anyone help me understand the downvote blitz for my comments on http://lesswrong.com/lw/mdy/my_recent_thoughts_on_consciousness/ ?

I understand that I'm arguing for an unpopular set of views, but should that warrant some kind of punishment? Was I too strident? Grating? Illucid? How could I have gone about defending the same set of views without inspiring such an extreme backlash?

The downvotes wouldn't normally concern me too much, but I received so many that my karma for the last 30 days has dropped to 30% positive from 90%. I'd like to avoid this happening again when the same topic is under discussion.

Replies from: fubarobfusco
comment by fubarobfusco · 2015-06-27T17:01:38.929Z · LW(p) · GW(p)

You note: "I did not really put forth any particularly new ideas here, this is just some of my thoughts and repetitions of what I have read and heard others say, so I'm not sure if this post adds any value."

Many readers (myself included) are already familiar with these sources, and so the post comes across as unoriginal. It is basically you rephrasing and summarizing things that a lot of people have already read. In other words, it's probably not that people are downvoting to disagree, but because they don't see a response-journal reiterating well-known views as a good Main post. It's not "Go away, you are not smart enough to post here!" but "Yes, yes, we know these things; this particular post here is not news."

The post has far too much "I think", "I realized", "it seems to me" language in it. It's your post; of course it is about what you think. In conversation those kinds of phrases are used to soften the impact of a weird view strongly stated, but in writing they make it sound like the writer is excessively wrapped up in themselves.

(On the other hand, if the important part is the sequence of your realizations, then present the evidence that convinced you, not just assertions that you had those realizations.)

While different language communities have different standards for paragraph length, by the standards of current Web writing, your paragraphs are often way too long. To me, long block paragraphs come across as "kook sign" — that is, they lead me to think that the writer's thinking is disorganized.

Replies from: eternal_neophyte
comment by eternal_neophyte · 2015-06-27T17:06:55.761Z · LW(p) · GW(p)

I am not the OP of the thread I linked to. Most of the downvotes I received (in the comments) on that post have been reversed. Thanks for replying though.

Replies from: fubarobfusco
comment by fubarobfusco · 2015-06-27T19:37:54.597Z · LW(p) · GW(p)

Ah, oops. Indeed, I thought you were the poster and were asking for an explanation of the downvotes to the post.

comment by skeptical_lurker · 2015-06-23T06:14:58.853Z · LW(p) · GW(p)

If someone on LW mentions taking part in seriously illegal activities (in all jurisdictions), am I morally obliged to contact the police/site admin? I don't think the person in question is going to hurt anyone directly.

Speaking of which, who is the site mod? Vladimir someone?

EDIT: I think I misunderstood and the situation isn't bad enough to need reporting to anyone. He was only worrying about whether he wanted to do certain things, rather than actually doing them.

Replies from: None, Lumifer, Richard_Kennaway, Dahlen, ChristianKl
comment by [deleted] · 2015-06-23T09:55:07.320Z · LW(p) · GW(p)

NancyLebovitz is the newest moderator at present... and I believe the only really active one, at least in day-to-day operations. Viliam_Bur was previously in that role, but he backed off in January due to other time commitments.

There is a moderator list here

comment by Lumifer · 2015-06-23T14:48:37.251Z · LW(p) · GW(p)

am I morally obliged to contact the police/site admin?

I hope not (though your morality is your morality, of course). Bringing the cops into an online discussion is very VERY rarely a proper thing to do, IMHO.

Replies from: skeptical_lurker
comment by skeptical_lurker · 2015-07-21T06:56:48.964Z · LW(p) · GW(p)

This was a very serious matter - I would not have considered calling the police for most things.

comment by Richard_Kennaway · 2015-06-23T09:10:29.111Z · LW(p) · GW(p)

I didn't see the post I believe you're referring to before the author redacted it, but for me the line would be real danger to other people, which you say you don't think is the case. In any case, it would be best to go through the mods first. A pseudonymous post recounting deeds in (perhaps) unspecified places and times isn't something the police can work with. Also, summoning the police is not to be done lightly, for once summoned, no power can banish them whence they came.

Replies from: skeptical_lurker, Elo
comment by skeptical_lurker · 2015-06-23T09:58:53.701Z · LW(p) · GW(p)

I have thought about it, and contacting the police straight away would only be the right thing to do if there was some imminent danger. I probably wouldn't have mentioned it, except that it was the sort of thing which can pose an indirect threat or lead to behaviour which does hurt other people.

Anyway, it appears I got the wrong impression, and he was only obsessing over the hypothetical possibility of doing things, rather than actually doing them. So this is one of the times when it's good that I didn't impulsively do the first thing that popped into my head and instead stopped to think about it.

comment by Elo · 2015-06-25T00:19:49.034Z · LW(p) · GW(p)

I saw the post; it was a mix-up.

comment by Dahlen · 2015-06-26T12:28:13.808Z · LW(p) · GW(p)

You may or may not be legally obligated (although this obligation is not realistically enforceable by law); as for morally obligated, it depends upon the nature of the act. There is imperfect overlap between things defended by law and by morality. If we're talking piracy or jaywalking or buying modafinil on the black market, you may be overexerting your civic powers here, and your conscience can relax. If the matter at hand involves violence, and you can expect to save some people through your report, then maybe it's better to get involved. For all matters in-between, both approaches may be valid in certain proportions, so apply common sense.

Replies from: skeptical_lurker
comment by skeptical_lurker · 2015-06-28T10:28:44.681Z · LW(p) · GW(p)

It was a misunderstanding, but for the record, what I thought was going on was far worse than buying modafinil, or any other illicit substance, and sort of indirectly involved violence.

comment by ChristianKl · 2015-06-24T11:29:39.597Z · LW(p) · GW(p)

A site mod theoretically has IP addresses that allow him to pick the right country when reporting to the police. As a result it makes sense to report to a mod.

If you contact the police and then they contact the site mod, things can get messier if the mod doesn't reply fast enough.

comment by Xerographica · 2015-06-23T00:17:41.582Z · LW(p) · GW(p)

Economics, bias and fallacies.

Replies from: John_Maxwell_IV, Elo
comment by John_Maxwell (John_Maxwell_IV) · 2015-06-25T05:25:20.627Z · LW(p) · GW(p)

I thought this comment was pretty good.

comment by Elo · 2015-06-25T00:43:22.715Z · LW(p) · GW(p)

Yep, someone else downvoted this. I agree with the downvote because of the lack of description or information given with the random link.

Replies from: John_Maxwell_IV, Xerographica
comment by John_Maxwell (John_Maxwell_IV) · 2015-06-25T05:17:04.352Z · LW(p) · GW(p)

Seems to work fine for reddit...

Replies from: Elo
comment by Elo · 2015-06-25T10:59:44.963Z · LW(p) · GW(p)

If I wanted to be on reddit I would be on reddit.

comment by Xerographica · 2015-06-27T15:33:11.644Z · LW(p) · GW(p)

"Nobody made a greater mistake than he who did nothing because he could do only a little." - Edmund Burke

Replies from: Elo
comment by Elo · 2015-06-27T23:06:43.871Z · LW(p) · GW(p)

If that's a related quote you should say so; if that's a meta comment about my comment you should also say so.

Downvoted this: I could give you an equally pretty-sounding quote about walking blindly forward or repeating teachers' passwords, but I am too lazy to find a link. This is the Internet; don't be cryptic, be obvious, be helpful and be clear.

How about this description:

The author, Russ Roberts, is a research fellow at Stanford University's Hoover Institution, well known for communicating economics to non-economists. He applies in-group bias to the real-world perspectives of the two parties on the budget stimulus package in America. He comments on how both sides can perceive that the same thing both worked and didn't work.

And the most important part of sharing a link:

I am sharing this because... I think he is right and everyone should read what he has to say//I think he is wrong and here is why...

Replies from: Xerographica
comment by Squark · 2015-06-27T18:25:48.988Z · LW(p) · GW(p)

Regarding the recent developments in the US: to me it seems marriage shouldn't be part of the legal system at all. If anything, legal marriage is a legacy of the days when women were treated as chattel.

EDIT: I don't know why this comment received downvotes, but maybe some readers took it to be criticism of same-sex marriage. That can hardly be further from the intent! Allowing same-sex marriage is a great improvement that I applaud, but abolishing legal marriage entirely would be even better.

EDIT: I was asked to provide reasoning for my position. Well, it seems to me that in some sense the burden of proof is on the other side. In general, the less complexity we have in the legal system the better. IMO the primary reason marriage is a legal status is historical: in previous eras, a woman who entered into marriage became something close to a slave of her husband, and this relationship was legally important for approximately the same reasons property rights are legally important. Nowadays there are all sorts of laws associated with marriage, but IMO they would all be better implemented differently. To put it in other words, if we had to reinvent the state without relying on history, I see no reason we would have invented legal marriage at all.

Practically speaking, even after resolving the issue of same-sex marriage we still have the issue of polyamorous marriage. And trying to legalize it would lead to all sorts of complications (how do we formalize polyamorous marriage? Is it a graph? Is it a system of subsets? Are the subsets disjoint?). It seems much simpler to get rid of the entire concept.
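
As a throwaway sketch (purely illustrative, not a serious proposal; all the names are made up), here are two natural data structures for "who is married to whom", which already disagree on the simplest possible query. Any marriage-dependent law would have to pick one:

    # Toy sketch: two candidate formalizations of polyamorous marriage.

    # Option 1: a graph -- marriage is a set of pairwise relationships.
    pairwise = {("alice", "bob"), ("bob", "carol")}   # alice-bob and bob-carol, but not alice-carol

    # Option 2: disjoint groups -- marriage is a partition into households.
    households = [{"alice", "bob", "carol"}, {"dana", "eve"}]

    def married_graph(a, b):
        return (a, b) in pairwise or (b, a) in pairwise

    def married_group(a, b):
        return any(a in h and b in h for h in households)

    # The same trio gets different answers depending on the representation.
    print(married_graph("alice", "carol"))   # False under the graph model
    print(married_group("alice", "carol"))   # True under the group model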

Of course, if I'm missing some really good arguments why we should have legal marriage after all, I would be glad to hear them.

EDIT: To clarify, my comment's intent was starting a discussion rather than stating a final verdict on the subject. Obviously a well-grounded conclusion would require a much deeper analysis than the few paragraphs above.

Replies from: ChristianKl, None, None, ChristianKl, Epictetus, ChristianKl
comment by ChristianKl · 2015-06-28T11:27:23.421Z · LW(p) · GW(p)

EDIT: I dont know why this comment received downvotes

You voiced a political opinion on LW and you provided no proper reasoning for why other people should agree with you. You didn't steelman the opposing side and show why you think they are wrong.

Replies from: Squark
comment by Squark · 2015-06-28T18:17:41.370Z · LW(p) · GW(p)

Fair enough. See edit.

comment by [deleted] · 2015-06-29T13:02:23.158Z · LW(p) · GW(p)

in previous eras, a woman who entered into marriage became something close to a slave of her husband

If it was Reddit, I would reach for the downvote button. Since it is not, I will take a shot at explaining why this type of argument is problematic. The Past is a big place, ranging from the beginnings of written history to the recent minute, and spanning the planet. On the conservative side there is often a tendency to glorify the whole of this range, and on the progressive side there is an equally erroneous tendency to vilify the whole of it. These tendencies come from various philosophies of history: "kali yuga" in the first case and "Whig theory" in the second, which is yours. Both simply put the politics of today into an unrealistic perspective. Both errors set the mood of discussing political change as a distorted "one more step away from our glorious past" vs. "one more step away from the horrors of our past".

Your example illustrates this meta-problem excellently. The last time I remember men actually being allowed to sell their wives to slave traders was pagan Rome. What matters about the past for current politics is largely the last 250 years of largely Western nations - the Enlightenment era - where none of the actual characteristics of slavery were present in marriage. What there was instead was, broadly, the status of women in marriage as minors, not slaves, i.e. comparable to children; but even that was changing as early as 1809-1848 in the US and in similarly developed nations. So quite simply, in the kind of past that matters - that is relevant, because it affects the present through the weight of being an established tradition - it is not so. None of your grandmothers even remembers what it was like not to own property in a marriage, and similar things. Non-equality does not imply being a slave, unless you felt like a slave at 17.

Replies from: Douglas_Knight, Squark
comment by Douglas_Knight · 2015-07-01T15:21:17.415Z · LW(p) · GW(p)

You seem to define slavery as the right to sell slaves. This is usually called "chattel slavery" because it is a very small fraction of all the people called "slaves" throughout history.

It is true that a Roman husband had great rights over his wife, but that has nothing to do with marriage. The husband simply assumed the rights previously held by the father, the same rights the father had over his sons.

Replies from: None
comment by [deleted] · 2015-07-01T15:31:04.632Z · LW(p) · GW(p)

This is true, but it is also true that non-chattel slavery used to have a lot of other names as well: serfdom, indentured servitude, etc. I don't know of many examples where non-chattel slavery did not have some other name as well.

Replies from: Douglas_Knight
comment by Douglas_Knight · 2015-07-01T15:48:00.605Z · LW(p) · GW(p)

No, I am not talking about serfs and indentured servants. I am talking about people called slaves. Almost every example where you think slaves are chattel is a case where you are wrong about history. For example, the great diversity of slaves in the Bible are not chattel.

comment by Squark · 2015-06-29T16:12:56.796Z · LW(p) · GW(p)

I mostly agree with the object level statements. IMO an adult treated as a minor qualifies as "something close to a slave", but let's not argue over terminology.

comment by [deleted] · 2015-06-29T13:14:15.489Z · LW(p) · GW(p)

The problem with your view is that you see marriage as being about the people who marry. In reality it is largely about their children. Even gay marriage is seen as a way to pave the way to allowing adoption / surrogate parenthood and thus enabling gays to have full families, including children, even if not necessarily biologically theirs - at least that is part of the story, although using it as a vehicle for social validation, and some weird US-specific rules like hospital visits, play a role too.

While childfree and old people marry too, this is broadly the same as eating ice cream vs. actually eating a meal: the meat is missing. That does not mean it should not be allowed, because just why not, but neither does it mean it is valid to see marriage as an institutionalization of a relationship between adults and ask how we could make better institutions for that. Marriage is not primarily for adults. For adults the whole thing is simple - ideally everybody should be able to marry, but people who are dedicatedly childfree should probably realize there is no good reason to. There is hardly any good reason for two modern, income-earning people to pool resources unless one of them is becoming a housewife / househusband, and really the only good reason to ever do that is children; otherwise you are just being a maid. The primary thing marriage is optimized for is children. I predict most gay couples who bother with the whole marriage thing intend to adopt or have a surrogate child. Otherwise there would be little point to it.

Gay marriage does not hurt children but abolishing marriage would. It would be one step towards making it less and less sure that children will always have their mother and father, and their property, around.

The answer to poly marriage is: first figure out how to sort out parenthood, and then you will have your answer. If you see it as an "it takes a village to raise a child" kind of setup, sure, just consider it a group thing, everybody pooling their property for the sake of raising children, no matter who is the father or the mother. I think Robert Heinlein proposed this in The Moon Is A Harsh Mistress in 1966. However, if you think that even in a poly setup primarily the two biological parents would be responsible for the children, things could really get a bit complicated.

In short, I think you really need to update the view that marriage is about two or more adults wanting, for some weird reason, to make their love institutional. No, it is primarily about children, their own or adopted; due to the social customs associated with it, it is often used for other purposes, but that is not the main purpose.

I should add that in the wedding ceremony where my wife and I are from, this is halfway explicit. After the promise we gave our parents flowers / wine, thanking them for raising us - this can be seen as childhood being over (at 34 it was about time) and us now taking up the mantle of becoming parents and continuing the family lines. During the dinner and party, people kept asking when we plan the first kid. So the generic mood was "nice you guys chose to reproduce" and not something like "nice you guys made your love public". I don't need to make my love public, and I could do that without wearing a ridiculous penguin costume...

Replies from: Squark
comment by Squark · 2015-06-29T17:27:40.272Z · LW(p) · GW(p)

Gay marriage does not hurt children but abolishing marriage would. It would be one step towards making it less and less sure that children will always have their mother and father, and their property, around.

I don't understand why. Possibly you misunderstood me: I was arguing for abolishing legal marriage, not abolishing the cultural institution of marriage. I am not legally married to my wife; we have a 5-year-old son and everything seems to be OK.

Replies from: None
comment by [deleted] · 2015-06-30T07:52:32.009Z · LW(p) · GW(p)

It does not make much of a difference. In the jurisdictions I am familiar with, cohabitation, especially with a child, is practically interpreted as marriage; for example, in case of separation, commonly acquired property gets split, etc. Let me ask: precisely what aspect of legal marriage do you object to? Because there is a chance your cohabitation already has that legally.

comment by ChristianKl · 2015-06-28T19:16:17.909Z · LW(p) · GW(p)

Well, it seems to me that in some sense the burden of proof is on the other side.

No. If you call for the abolition of a significant public institution, you have to provide proof.

In general, the less complexity we have in the legal system the better.

You haven't shown how handling every single aspect in which marriage is involved with a new rule will reduce complexity.

To put in other words, if we had to reinvent the state without relying on history, I see no reason we would have invented legal marriage at all.

That's argument from ignorance. "I'm too stupid or uninformed about the subject to think of arguments for the opposing side" is not something that should encourage people to adopt your position. Or to reference the sequences: Policy Debates Should Not Appear One-Sided

Replies from: Squark
comment by Squark · 2015-06-28T19:29:48.447Z · LW(p) · GW(p)

I don't think we are going to make progress without going to the object level.

Replies from: ChristianKl
comment by ChristianKl · 2015-06-28T19:35:47.369Z · LW(p) · GW(p)

On LW rational debate is a core goal. How to reason about political issues matters more than the question of whether or not marriage should be abolished.

Posts that advocate for good political ideas but do so in an irrational way have no place on LW.

Replies from: Squark
comment by Squark · 2015-06-28T19:39:15.140Z · LW(p) · GW(p)

Rational debate is indeed a core goal. Object level arguments are essential to rational debate in most cases. Avoiding ad hominem subtext is also important.

Replies from: ChristianKl
comment by ChristianKl · 2015-06-28T19:46:05.414Z · LW(p) · GW(p)

You said that you can't think of any reason. I can't address that without using the word "you".

There are indeed two options:
1) You didn't put enough time into understanding the subject.
2) You lack the ability to understand it.

Okay, maybe there is a third:
3) You lied about not seeing any reason.

"Argument from ignorance" does not have any place on LW. Discouraging it by calling it out is valuable. It's not something that should stand unchallenged. Not just because it's wrong, but for garden purposes. It lowers the quality of the debate.

Replies from: Squark
comment by Squark · 2015-06-29T05:49:46.926Z · LW(p) · GW(p)

I disagree. It seems to me completely rational to say "Guys, why are we doing X? It looks like there was a reason why we were doing X before but the reason is irrelevant by now and we still keep doing it. Since I see no reason to keep doing it, I suspect it is pure inertia and we should stop doing it. If there is a reason I missed, please point it out".

Imagine you start working at a software company and you discover the codebase is a jumble of spaghetti. You say "what is going on here? Why don't we remove all of this legacy code?" and the other person goes "this is arguing from ignorance; the fact that you don't know why we need this code doesn't mean there is no reason!".

Instead the other person should have either

a. Agreed that we need to schedule refactoring

or

b. Explained the reasons why we need all this complex code

And in case b it might still turn out that the reasons are mere rationalisations, i.e. the code would never have been written this way if we were writing the system from scratch. Or not. But establishing which requires an actual object-level debate.

Replies from: None, ChristianKl
comment by [deleted] · 2015-06-29T13:31:58.323Z · LW(p) · GW(p)

https://en.wikipedia.org/wiki/Wikipedia:Chesterton's_fence

http://unenumerated.blogspot.com/2012/08/proxy-measures-sunk-costs-and.html

I think the true issue here is that you may not have much trust in other people's rationality. In this example you sound like you work from the assumption that they have no reasons at all, while in your opinion on marriage it sounds like people of "previous eras" (too unspecific) had largely unethical reasons (marriage-as-slavery).

Well, this sounds like me when I was 20 :) But what I have learned since is that it is better to assume people are not stupid and not evil unless evidenced otherwise. Of course this sounds entirely trivial, but at 20 I did not realize the full extent of the principle of charity - namely, that it also implies people may have entirely valid reasons of which I am entirely ignorant, which in turn implies I am not as smart and knowledgeable as I like to think. I had to realize the whole chain of it: starting from liking to think I am smart and knowledgeable, when I was younger I too easily went from not understanding the reasons for a thing to assuming there were none, or only stupid or evil ones, and this led to me ignoring the principle of charity and implicitly thinking other people are stupid and / or evil.

Another thing I have learned since is that reasons are not always explicit. I learned to accept reasons like "because we tried stuff, and this one worked; we have no idea why, but it did".

Replies from: Squark
comment by Squark · 2015-06-29T17:32:25.234Z · LW(p) · GW(p)

I do not assume other people are stupid or evil. However, in this particular case my best current hypothesis is that the reasons are mostly historical. That said, I will gladly update on information to the contrary.

comment by ChristianKl · 2015-06-29T10:07:05.613Z · LW(p) · GW(p)

I disagree. It seems to me completely rational to say "Guys, why are we doing X? It looks like there was a reason why we were doing X before but the reason is irrelevant by now and we still keep doing it. Since I see no reason to keep doing it, I suspect it is pure inertia and we should stop doing it. If there is a reason I missed, please point it out".

It's rational to say that about a topic that you don't understand. It's no sin not to put significant time into understanding every topic one wants to speak about, and to ask other people for insights instead.

Imagine you start working in a software company, and you discover the codebase is a jumble of spaghetti. You say "what is going on here? why don't we remove all of this legacy code?" and the other person goes "this is a arguing from ignorance, the fact you don't know why we need this code doesn't mean there is no reason!".

If you start working at a company, you are ignorant about why the company is acting the way it is. When you are starting at a company, you haven't put significant time into understanding its inner workings.

You didn't focus on asking a question. Your post doesn't contain any question marks except in the part about polyamorous marriage.

There is a huge difference between: "I don't know why we do X, we shouldn't do it" and "Can you please explain to me why we do X?"

Replies from: Squark
comment by Squark · 2015-06-29T16:10:26.678Z · LW(p) · GW(p)

Fair enough. See edit.

comment by Epictetus · 2015-06-28T16:13:15.456Z · LW(p) · GW(p)

When all is well and people are living peacefully and amicably, you don't really need the law. When problems come up, you want clear laws detailing each party's rights, duties, and obligations. For example, when a couple lives together for a decade while sharing assets and jointly building wealth, what happens when one party unilaterally wants to end the relationship? This situation is common enough that it's worth having legal guidelines for its resolution.

The various spousal privileges are also at issue. Sure, you can file all kinds of paperwork to grant the individual legal rights to a romantic partner. At this point the average person needs to consult an attorney to make sure nothing is missed. What happens when someone doesn't? You can expedite the process by drafting a special document that allows all these rights to be conferred as part of a package deal, but now you're on the verge of reinventing marriage.

The legal issues surrounding the circumstances of married life will still remain whether or not marriage is a legal concept.

Replies from: Sarunas, Squark
comment by Sarunas · 2015-06-28T19:52:36.770Z · LW(p) · GW(p)

You can expedite the process by drafting a special document that allows all these rights to be conferred as part of a package deal, but now you're on the verge of reinventing marriage.

But if different groups (e.g. different churches or other kinds of organizations) hired lawyers to prepare different standardized packages, they would be able to offer different kinds of contracts that would correspond to that group's concept of marriage. That would give an individual more freedom to choose and would make it unnecessary to solve issues of non-traditional marriages at the political level, and I think that making things less political is usually a good thing.

Of course, there would be more legal paperwork, and, as you've mentioned, there are various risks related to that, in addition to other things.

comment by Squark · 2015-06-28T18:08:00.820Z · LW(p) · GW(p)

The legal issues remain, but I see no reason to delegate them to the government. The people involved should be able to come up with any contract they like, regardless of their gender, number or the nature of their relationship. After all, we don't have special legal status for relationships between landlord and tenant, employer and employee etc.

comment by ChristianKl · 2015-06-27T19:20:49.991Z · LW(p) · GW(p)

Do you think that a state shouldn't give spouses special immigration rights? What about spousal rights when it comes to making medical decisions for an incapacitated partner?

Replies from: Squark, Sarunas
comment by Squark · 2015-06-28T06:11:15.175Z · LW(p) · GW(p)

Regarding medical decisions, I agree with Sarunas: one should have the ability to assign this right to anyone.

Regarding immigration rights, it seems reasonable to take romantic relationships and even more so common children into consideration when granting such rights. I'm not sure we gain anything here by having a legal status called "marriage".

comment by Sarunas · 2015-06-27T22:37:49.599Z · LW(p) · GW(p)

It is not strictly necessary that all these rights go to the same person, nor is it necessary that such rights be tied to marriage. It is simpler that way, but it does not seem to be strictly necessary. For example, a person could designate another person (whom they trust and who doesn't have to be their spouse - e.g. it could be a sibling, a parent, or simply a friend they respect) to make medical decisions in such cases, analogous to a testator being able to name an executor of his/her will. If, in a similar way, other legal things that are currently associated with marriage were decoupled from it, and each such right or duty went to a designated person (not necessarily the same one in all cases), marriage wouldn't require any government involvement.