Open thread, Dec. 22 - Dec. 28, 2014
post by Gondolinian · 2014-12-22T02:34:04.606Z
If it's worth saying, but not worth its own post (even in Discussion), then it goes here.
Notes for future OT posters:
1. Please add the 'open_thread' tag.
2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)
3. Open Threads should be posted in Discussion, and not Main.
4. Open Threads should start on Monday, and end on Sunday.
218 comments
Comments sorted by top scores.
comment by JoshuaZ · 2014-12-26T21:04:00.985Z · LW(p) · GW(p)
An op-ed on CNN.com about AI as an existential risk. This seems noteworthy: it is one of the most mainstream outlets in which this risk has been discussed in the popular press.
Replies from: ilzolende↑ comment by ilzolende · 2014-12-27T08:11:06.597Z · LW(p) · GW(p)
This article seems to mostly avoid the problems Yvain listed with AI reporting. It does talk about economic impacts, but it treats those as a short-term problem instead of the main problem.
Also, I still can't believe that a news anchor mentioned a paperclip maximizer on the air.
comment by edanm · 2014-12-22T14:09:06.039Z · LW(p) · GW(p)
Something I'm looking for:
A list of habits to take up, to improve my life, that are vetted and recommended by the community. Preferably in order of most useful to least useful. Things like "start using Anki", "start meditating", etc.
Do we have a list like this compiled? If not, can we create one? I'm a big believer in the things this community recommends, and have already taken up using Anki, am working on meditation, and am looking for what other habits I should take up.
FYI, I thought of this as I was reading gwern's Dual N-Back article, in which he mentions it's probably not worth the time, as there are much higher-potential activities to do.
(Here's the relevant excerpt from gwern: N-BACK IN GENERAL
To those whose time is limited: you may wish to stop reading here. If you seek to improve your life, and want the greatest ‘bang for the buck’, you are well-advised to look elsewhere. Meditation, for example, is easier, faster, and ultra-portable. Typing training will directly improve your facility with a computer, a valuable skill for this modern world. Spaced repetition memorization techniques offer unparalleled advantages to students. Nootropics are the epitome of ease (just swallow!), and their effects are much more easily assessed - one can even run double-blind experiments on oneself, impossible with dual N-back. Other supplements like melatonin can deliver benefits incommensurable with DNB - what is the cognitive value of another number in working memory thanks to DNB compared to a good night’s sleep thanks to melatonin? Modest changes to one’s diet and environs can fundamentally improve one’s well-being. Even basic training in reading, with the crudest tachistoscope techniques, can pay large dividends if one is below a basic level of reading like 200WPM & still subvocalizing. And all of these can start paying off immediately.)
Replies from: Scott Garrabrant, someonewrongonthenet, John_Maxwell_IV, Lumifer, cameroncowan↑ comment by Scott Garrabrant · 2014-12-22T17:49:18.181Z · LW(p) · GW(p)
I suggest you make it happen. Start with a discussion level post suggesting habits, then a week later, make a discussion level post asking everyone to rank them.
↑ comment by someonewrongonthenet · 2014-12-24T05:33:29.692Z · LW(p) · GW(p)
I strongly suspect the real list of most important habits and skills falls within running, weight training, and cooking, but this is not the forum I'd typically go to for advice on those topics.
Replies from: edanm↑ comment by edanm · 2014-12-24T08:19:35.846Z · LW(p) · GW(p)
I agree. Perhaps this should be qualified as "most important habits that are only recommended in the Rationality community". Otherwise there are plenty of other skills we can add (another example - start saving money early, etc).
Replies from: None↑ comment by John_Maxwell (John_Maxwell_IV) · 2015-01-07T10:53:18.224Z · LW(p) · GW(p)
This is a great book on habits and includes a list of habits to consider forming.
↑ comment by cameroncowan · 2014-12-23T00:04:59.753Z · LW(p) · GW(p)
What nootropics do people take?
Replies from: btrettel↑ comment by btrettel · 2014-12-23T04:16:56.009Z · LW(p) · GW(p)
Scott Alexander's nootropics survey lists many popular ones.
comment by polymathwannabe · 2014-12-27T03:00:25.525Z · LW(p) · GW(p)
comment by Nikario · 2014-12-24T14:46:34.243Z · LW(p) · GW(p)
As a person with a scientific background who has suddenly come into academic philosophy, I have been puzzled by some aspects of its methodology. I have been particularly bothered by the reluctance of some people to give precise definitions of the concepts they are discussing. But lately, as a result of several discussions with a certain member of the faculty, I have come to understand why this occurs (if not in the whole of philosophy, at least in this particular trend in academic philosophy).
I have seen that philosophers (I am talking about several of them published in top-ranked, peer-reviewed journals, the kinds of articles I read, study and discuss) who discuss a concept which tries to capture "x" have, on one hand, an intuitive idea of this concept: imprecise, vague, partial and maybe even self-contradictory. On the other hand, they have several "approaches" to "x", corresponding to several philosophical trends that have a more precise characterisation of "x" in terms of other, clearer ideas, i.e. in terms of the composites "y1", "y2", "y3", ... The major issue at stake in the discussion seems to be whether "x" is really "y1" or "y2" or "y3" or something else (note that sometimes a "yi" is a reduction to other terms, and sometimes "yi" is a more accurate characterisation that keeps the words used to speak of "x"; that does not matter).
What is puzzling is this: how come all of them agree they are talking about "x" when each is actually proposing a different approach? Indeed, those who say that "x" is "y1" are actually saying that we should adopt "y1" in our thought, and by "x" they understand "y1". Others understand "y2" by "x". Why don't they realise they are talking past each other, that each of them is proposing a different concept, and that the problem arises just because they all want to call it "x"? Why don't they use subscripts for "x", thereby keeping the word they so desperately want without confusing its possible meanings?
The answer I have come up with is this: they all believe that there is a unique, best sense to which they refer when they speak about "x", even if they don't know which it is. They agree that they have an intuitive grasp of something, and that something is "x", but they disagree about how to best refine it ("y1"? "y2"? "y3"?). Instead, I used to focus only on "y1", "y2" and "y3" and assess them according to whether they are self-consistent or not, simple or not, useful or not, etc. "x" had no clear definition, it barely meant anything to me, and therefore I decided I should banish it from my thought.
But I have come to the conclusion that it is useful to keep this loose idea of "x" in mind and to believe that there is something to the intuition, because only in contemplating this intuition do you seem to have access to knowledge that you have not been able to formalise; hence the intuition is a source of new knowledge. Therefore, philosophers are quite right to keep vague, loose and perhaps self-contradictory concepts of "x", because these are an important source from which they draw in order to create and refine approaches "y1", "y2" and "y3", hoping that one of them might get "x" right. (At this point, one might claim that I am simply saying that it is useful to have the illusion that the concept of "x" really means something, even though it actually means nothing, simply because having the illusion is a source of inspiration. But doesn't precisely the fact that it is a source of inspiration suggest that it is more than a simple illusion? There seems to be a sense in which a bad approach to "x" is still ABOUT "x".)
I would be grateful if I got your thoughts on this.
P.S. A more daring hypothesis is that when philosophers get "x" right in "y", the approach "y" becomes a scientific paradigm. This also suggests that for those "x" where little progress has been made in millennia, the debate is not necessarily misguided; rather, the intuition is pointing towards something very, very complicated, and no one has been able to give a formal account of the things it refers to.
Replies from: Richard_Kennaway, gedymin, Protagoras, ChristianKl, iarwain1, Lumifer, Douglas_Knight↑ comment by Richard_Kennaway · 2014-12-25T23:16:18.296Z · LW(p) · GW(p)
It might be useful to look at what happens in mathematics. What, for example, is a "number"? In antiquity, there were the whole numbers and fractions of everyday experience. You can count apples, and cut an apple in half. (BTW, I recently discovered that among the ancient Greeks, there was some dispute about whether 1 was a number. No, some said, 1 was the unit with which other things were measured. 2, 3, 4, and so on were numbers, but not 1.)
Then irrationals were discovered, and negative numbers, and the real line, and complex numbers, and octonions (also called Cayley numbers), and p-adic numbers, and perhaps there are even more things that mathematicians call numbers. And the ways that "numbers" behave have been generalised in other directions to define such things as fields, vector spaces, rings, and many more, the elements of which are generally not called numbers. But unlike philosophers, mathematicians do not dispute which of these is the "right" concept of "number". All of the concepts have their uses, and many of them are called "numbers", but "number" has never been given a formal definition, and does not need one.
For another example, consider "integration". The idea of dividing an arbitrary shape into pieces of known area and summing their areas goes back at least to Archimedes' "method of exhaustion". When real numbers and functions became better understood it was formalised as Riemann integration. That was later generalised to Lebesgue integration, and then to Haar measure. Stochastic processes brought in Itô integration and several other forms.
Again, no-one as far as I know has ever troubled with the question, "but what is integration, really?" There is a general, intuitive idea of "measuring the size of things", which has been given various precise formulations in various contexts. In some of those contexts it may make sense to speak of the "right" concept of integration, when there is one that subsumes all of the others and appears to be the most general possible (e.g. Lebesgue integration on Euclidean spaces), but in other contexts there may be multiple incomparable concepts, each with its own uses (e.g. Itô and Stratonovich integration for stochastic processes).
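As a rough sketch of that contrast, in standard notation (an illustration for a nonnegative measurable f, not part of the original comment): Riemann integration partitions the domain and sums sample values times interval widths, while Lebesgue integration partitions the range and measures the preimage of each slice.

```latex
% Riemann: partition the domain [a,b] into subintervals
\int_a^b f(x)\,dx \;=\; \lim_{\|P\| \to 0} \sum_{i=1}^{n} f(x_i^{*})\,(x_i - x_{i-1})

% Lebesgue: partition the range into slices [y_k, y_{k+1}) and measure preimages
\int_E f \, d\mu \;=\; \lim_{n \to \infty} \sum_{k=0}^{n} y_k \,\mu\!\left(\{ x \in E : y_k \le f(x) < y_{k+1} \}\right)
```

Every Riemann-integrable function on [a,b] is Lebesgue-integrable with the same value, which is the sense in which the later concept subsumes the earlier one.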
But in philosophy, there are no theorems by which to judge the usefulness of a precisely defined concept.
Replies from: Nikario↑ comment by gedymin · 2014-12-25T13:01:23.465Z · LW(p) · GW(p)
Scott Aaronson has formulated it in a similar way (quoted from here):
whenever it’s been possible to make definite progress on ancient philosophical problems, such progress has almost always involved a [kind of] “bait-and-switch.” In other words: one replaces an unanswerable philosophical riddle Q by a “merely” scientific or mathematical question Q′, which captures part of what people have wanted to know when they’ve asked Q. Then, with luck, one solves Q′.
Of course, even if Q′ is solved, centuries later philosophers might still be debating the exact relation between Q and Q′! And further exploration might lead to other scientific or mathematical questions — Q′′, Q′′′, and so on — which capture aspects of Q that Q′ left untouched. But from my perspective, this process of “breaking off” answerable parts of unanswerable riddles, then trying to answer those parts, is the closest thing to philosophical progress that there is.
…A good replacement question Q′ should satisfy two properties: (a) Q′ should capture some aspect of the original question Q — so that an answer to Q′ would be hard to ignore in any subsequent discussion of Q, [and] (b) Q′ should be precise enough that one can see what it would mean to make progress on Q′: what experiments one would need to do, what theorems one would need to prove, etc.
Replies from: Nikario
↑ comment by Nikario · 2014-12-26T15:48:56.063Z · LW(p) · GW(p)
Thank you for the reference. I am not sure if Aaronson and I would agree. After all, depending on the situation, a philosopher of the kind I am talking about could claim that whatever progress has been made by answering the question Q' also allows us to know the answer to the question Q (maybe because they are really the same question), or at least to get closer to it, instead of simply saying that Q does not have an answer.
I think Protagoras' example of the question about whales being fish or not would make a good example of the former case.
↑ comment by Protagoras · 2014-12-25T20:07:14.397Z · LW(p) · GW(p)
It is almost completely uncontroversial that meaning is not determined by the conscious intentions of individual speakers (the "Humpty Dumpty" theory is false). More sophisticated theories of meaning note that people want their words to mean the same as what other people mean by them (as otherwise they are useless for communication). So, bare minimum, knowing what a word means requires looking at a community of language users, not just one speaker. But there are more complications; people want to use their words to mean the same as what experts intend more than they want to use their words to mean the same as what the ignorant intend. Partly that may be just to make coordination easier, but probably an even bigger motive is that people want their words to pick out useful and important categories, and of course experts are more likely to have latched on to those. A relatively uncontroversial extension of this is that meaning needn't precisely match the intentions of any current language speaker or group of language speakers; if the intentions of speakers would point to one category, but there's a very similar, mostly overlapping, but much more useful and important category, the correct account of the meaning is probably that it refers to the more useful and important category, even if none of the speakers know enough to pick out that category. That's why words for "fish" in languages whose origins predate any detailed biological knowledge of whales nonetheless probably shouldn't be thought to have ever included whales in their reference.
So, people can use words without anybody knowing exactly what they mean. And figuring out what they mean can be a useful exercise, as it requires learning more about what you're dealing with; it isn't just a matter of making an arbitrary decision. All that being said, I admit to having some skepticism about some of the words my fellow philosophers use; I suspect in a number of cases there are no ideal, unambiguous meanings to be uncovered (indeed, there are probably cases where they don't mean anything at all, as the Logical Positivists sometimes argued).
↑ comment by ChristianKl · 2014-12-25T11:50:17.979Z · LW(p) · GW(p)
You seem to think that words can be defined, and that if you look at a sentence and know the grammatical rules and the definitions of those words, you can find out what the sentence means. That belief is wrong. If reasoning worked that way, we would have smart AI by now. Meaning depends on context.
I like the concept of phenomenological primitives. Getting people to integrate a new phenomenological primitive into their thinking is really hard. I even read someone argue that it's impossible in physics education to teach new primitives.
Teaching physics students that a metal ball thrown at the floor bounces back because of springiness, the ball contracting when it hits the floor and then expanding again, is hard. It takes a while until students stop reasoning that the floor somehow pushes the ball back and instead reason that the steel ball contracts.
In biology there is the concept of a pseudogene: basically a string of DNA that looks like a protein-coding gene but is not expressed into a protein.
At first that seems like a fine definition; on closer investigation, different biologists differ about what "looking like a gene" means. Different bioinformaticians each write their own algorithms to detect genes, and there are cases where algorithm A says that D is a pseudogene while algorithm B says that D isn't.
Of course, changing the training data on which an algorithm runs also changes the classification. A really deep definition of a particular concept of a pseudogene would probably mention all the training data and the specific machine learning algorithm used.
There are various complicated arguments for preferring one algorithm over another because the resulting classification is better. You can say it's okay that an algorithm misses some strings that are genes because they don't look the way genes are supposed to look, or you can insist that your algorithm detect all genes that exist as genes. As a result, the number of pseudogenes changes.
You could speak of pseudogene_A and pseudogene_B, but in many cases you don't need to think about those details and can abstract them away. It's okay if a few people figure out a decent definition of pseudogene that behaves as it's supposed to, and then others can use that notion.
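A toy sketch of how that disagreement looks (the features and thresholds below are invented for illustration; real gene finders use trained statistical models, as described above):

```python
# Toy illustration: two "gene detectors" tuned differently disagree
# about whether the same DNA string D is a pseudogene.
# The criteria below are invented for this sketch, not real biology.

def looks_like_gene_a(seq: str) -> bool:
    # Detector A: demands a start codon and a long reading frame.
    return seq.startswith("ATG") and len(seq) >= 300

def looks_like_gene_b(seq: str) -> bool:
    # Detector B: trained differently; accepts shorter reading frames.
    return seq.startswith("ATG") and len(seq) >= 150

def is_pseudogene(seq: str, looks_like_gene, expressed: bool) -> bool:
    # A pseudogene "looks like a gene" but is not expressed as protein.
    return looks_like_gene(seq) and not expressed

d = "ATG" + "GCA" * 60  # a 183-base candidate string, not expressed
print(is_pseudogene(d, looks_like_gene_a, expressed=False))  # False: not a pseudogene_A
print(is_pseudogene(d, looks_like_gene_b, expressed=False))  # True: a pseudogene_B
```

So the extension of "pseudogene" shifts with the detector, which is exactly the pseudogene_A / pseudogene_B situation above.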
In philosophy, the literature and how it handles various concepts could be thought of as training data for the general human mental classification algorithm. A full definition of a concept would have to specify what the concept does in various edge cases.
On LW we have our jargon problem. We can use an existing word for a concept or we can invent a new term for what we speak about. We have to decide whether the existing term is good enough for our purposes or whether we mean something slightly different that warrants a new term. That's not always an easy decision.
To repeat a cliche: "There are only two hard things in Computer Science: cache invalidation and naming things" Naming is also hard outside of computer science.
↑ comment by iarwain1 · 2014-12-25T00:59:20.440Z · LW(p) · GW(p)
I recently asked a question that I think is similar to what you're discussing. To recap, my question was about the philosophical debate over what "knowledge" really means. I asked why anyone cares: why not just define Knowledge Type A, Knowledge Type B, etc. and be done with it? If you tabooed the word "knowledge", would there be anything left to discuss?
Am I correct that that's basically what you're referring to? Do you have any thoughts specifically regarding my question?
Replies from: Viliam_Bur, ChristianKl, Nikario↑ comment by Viliam_Bur · 2014-12-29T11:06:55.662Z · LW(p) · GW(p)
Maybe those people are bad at "tabooing" their topics. Which may either mean the topic is very difficult to "taboo", or that they simply do not think this way and instead e.g. try to associate the topic with applause lights. In other words, either the "philosophical" topics are those where tabooing is difficult, or the "philosophers" are people who are bad at tabooing.
Since there are many different philosophers trying many different things, I would guess that it really is difficult to taboo those topics. (Which does not exclude the possibility that most philosophers are actually bad at tabooing; I just think it is unlikely that all of them are.)
On the other hand, maybe the philosophers who taboo the topic properly are simply ignored by the others. The problem is never solved because even when someone solves it, others do not accept the solution.
Also, even proper tabooing does not answer the question immediately. Even if you taboo "knowledge" properly, the explanation may require some knowledge of informatics or neuroscience which may not be available yet.
Replies from: alienist↑ comment by alienist · 2015-01-05T04:52:57.563Z · LW(p) · GW(p)
On the other hand, maybe the philosophers who taboo the topic properly are simply ignored by the others. The problem is never solved because even when someone solves it, others do not accept the solution.
And if they do it stops being called "philosophy". This happened most notably to natural philosophy.
↑ comment by ChristianKl · 2014-12-25T11:52:00.301Z · LW(p) · GW(p)
His problem is that he isn't clear on what "knowledge" means in academic philosophy, and he tabooed the word in his post. There's obviously something left to discuss.
↑ comment by Lumifer · 2014-12-24T16:44:57.001Z · LW(p) · GW(p)
I think the approach you describe is valid but dangerous.
It's valid because occasionally (and maybe even frequently) you want to think about something that you cannot properly express in words and so cannot define precisely and unambiguously. Some people (e.g. Heidegger) basically create a new language to deal with that problem, but more often you try to define that je ne sais quoi through, to use a geometric analogy, multiple projections. Imagine that you want to think about a 6-dimensional manifold. Human minds, alas, are not well suited to thinking in six dimensions, so you need to construct some projections of that manifold into a 3-dimensional space which humans can deal with. You, of course, can construct many different projections and you will feel that some of them are more useful for capturing the character of that 6-dimensional thing, and some not so much. But other people may and probably will disagree about which projections are useful and which are not.
It's also dangerous for obvious reasons, starting with the well-known tale of the blind men and the elephant...
↑ comment by Douglas_Knight · 2014-12-31T07:59:32.573Z · LW(p) · GW(p)
There are lots of examples where this struggle with definitions has been fruitful. In the early 20th century in the boundary between philosophy and mathematics there were debates about the meanings of "proof" and "computation." It is true that the successful resolution of these debates has largely turned the subject from philosophy into math, although that has little do with the organization of academic departments.
comment by fubarobfusco · 2014-12-25T22:12:25.550Z · LW(p) · GW(p)
Mindfulness meditation measurably reduces implicit bias. The mechanism is a bit unclear, though.
comment by polymathwannabe · 2014-12-22T12:39:49.394Z · LW(p) · GW(p)
Reposted from previous open thread because it was near to being buried under newer threads:
comment by skeptical_lurker · 2014-12-23T10:55:39.067Z · LW(p) · GW(p)
In previous discussions of optimal investing, the efficient market hypothesis has been repeatedly cited to say that you cannot easily predict the market, and that even if you could, you would be better off working for a bank, rather than risking your own money.
But suppose I had discovered an entirely novel technique for predicting the market, one sufficiently complex that the EMH does not apply. How would I get a job at a bank to leverage this?
I could show them my technique, but then (in the unlikely event they took me seriously) they could copy it. I could get them to sign a non-disclosure agreement, but what mechanism is there to enforce such an agreement, when the technique could be used behind closed doors?
I could track my predictions, registered with some impartial third party, but then if thousands of people did this (which they do) or one person did this with thousands of sockpuppets, then some of these people would show extremely good results by sheer luck.
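A minimal sketch of that multiple-comparisons worry (the call counts and predictor numbers here are made-up illustrative assumptions): with enough no-skill predictors, an impressive-looking track record is nearly guaranteed to appear somewhere.

```python
# How likely is a no-skill (coin-flip) predictor to compile an
# impressive track record, and how likely is it that at least one of
# many such predictors does? Numbers are illustrative assumptions.
from math import comb

def p_at_least(k: int, n: int, p: float = 0.5) -> float:
    """P(at least k correct out of n binary calls) with hit rate p."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

n_calls = 20            # predictions in one registered track record
k_correct = 16          # an "impressive" 80% hit rate
n_predictors = 10_000   # independent chancers, or one person's sockpuppets

p_one = p_at_least(k_correct, n_calls)       # one given predictor gets lucky
p_any = 1 - (1 - p_one) ** n_predictors      # at least one of them does
print(f"P(one no-skill predictor hits 16/20): {p_one:.5f}")   # ~0.006
print(f"P(some predictor among 10,000 does):  {p_any:.3f}")   # ~1.000
```

So a registry only helps if it also records how many track records were started, not just the lucky survivors.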
One reason this may be important is that 'superhuman finance AI' is strictly simpler than 'superhuman AGI' and it's important to work out how easy it is for AI to acquire massive resources by dominating the world's financial markets.
META: I considered posting this to the comments on the last optimal investing post, but since its several months old I assumed few would read it.
Replies from: Punoxysm, Lumifer↑ comment by Punoxysm · 2014-12-23T18:22:42.199Z · LW(p) · GW(p)
Well, you could invest your own money. Most strategies benefit from using small amounts of cash, since being able to take advantage of small-volume opportunities often trumps higher relative execution costs. Register your track record and start recruiting investors.
You can also trade with your own strategies at companies that let you use a mix of your money and theirs, and provide some resources. Typically, traders at these companies aren't executing pure software strategies, so "stealing" their methods doesn't have much point.
You could also go to people with established reputations in academic or quantitative finance and show them enough of your method to convince them you're legit, but not so much they can copy it, then have them lend their reputation.
A superhuman AGI could accumulate tremendous financial resources; to do so most effectively it would need access to as many feeds and exchanges as possible, so some kind of shell company would buy those for it. I'm not sure to what extent a finance-specialist AI that achieves superhuman performance is really easier to make than a superhuman AGI; and bear in mind that it could lose its edge quickly.
Replies from: skeptical_lurker↑ comment by skeptical_lurker · 2014-12-24T23:34:49.742Z · LW(p) · GW(p)
You can also trade with your own strategies at companies that let you use a mis of your money and theirs, and provide some resources.
I'm not sure I follow you, do you mean leverage or something else?
You could also go to people with established reputations in academic or quantitative finance and show them enough of your method to convince them you're legit, but not so much they can copy it, then have them lend their reputation.
Any information about a method narrows the hypothesis space, so I'm not sure this is possible. But discussion with academics, who presumably are not actually trading, is an interesting idea.
I'm not sure to what extent a finance-specialist AI that achieves superhuman performance is really easier to make than a superhuman AGI
It's not at all obvious whether it's easier or not. If AI can be broken down into subsystems, such as "planning", "hypothesis generation" and "prediction", then it is a matter of solving just the prediction subsystem, which is of course easier than solving all the other problems as well. OTOH, "hypothesis generation" could be an integral part of prediction, and the subsystems might not be able to operate independently.
↑ comment by Lumifer · 2014-12-23T18:19:04.833Z · LW(p) · GW(p)
even if you could, you would be better off working for a bank, rather than risking your own money.
That depends on the particulars. In any case, this is more the bailiwick of hedge funds rather than banks nowadays.
suppose I had discovered an entirely novel technique for predicting the market
You would have to demonstrate that it works and convince people with money to give some of that money to you to trade.
You can try to persuade people with just your technique (and backtesting results) under an NDA, but without a track record it will be hard -- or the deals offered to you won't be good. You also can trade your own money -- in whatever amounts you have -- for a while in a separate audited account. A good track record and a demonstrated willingness to risk your own money on your idea will help you persuade other people that the idea works.
Remember, you don't have to mathematically prove anything, all you have to do is convince people with money to give you some :-)
how easy it is for AI to acquire massive resources by dominating the world's financial markets.
For a fully developed AI it will be easy, but I am not sure why it would bother. And if you're thinking about trading "AIs" developed on Wall St. and such, these are very likely to be narrow tool-like AIs with very specific and limited goals.
Replies from: None↑ comment by [deleted] · 2014-12-28T02:22:04.423Z · LW(p) · GW(p)
And if you're thinking about trading "AIs" developed on Wall St. and such, these are very likely to be narrow tool-like AIs with very specific and limited goals.
Alexander Wissner-Gross disagrees with you. He believes "superhuman AI" will emerge from quantitative finance. His talk at Singularity Summit is below.
comment by Gondolinian · 2014-12-24T02:59:03.948Z · LW(p) · GW(p)
It seems that the link on the About page for the welcome thread is still pointing to the previous WT. I would appreciate it if someone with editing access to the About page could update the link. I also wonder if it would be possible to have the link point to the newest post with a certain tag, e.g. the "welcome" tag, thus making it point to the newest WT automatically.
Thanks!
Replies from: John_Maxwell_IV, Scott Garrabrant↑ comment by John_Maxwell (John_Maxwell_IV) · 2015-01-07T10:21:52.460Z · LW(p) · GW(p)
I changed the link.
Replies from: Gondolinian↑ comment by Gondolinian · 2015-01-07T13:42:51.841Z · LW(p) · GW(p)
Thanks!
↑ comment by Scott Garrabrant · 2014-12-24T04:02:10.967Z · LW(p) · GW(p)
If this is possible, I think it would also be nice to have a link I can bookmark which takes me straight to the most recent open thread's comments.
Replies from: philh↑ comment by philh · 2014-12-24T23:34:21.592Z · LW(p) · GW(p)
(source)
Replies from: Scott Garrabrant↑ comment by Scott Garrabrant · 2014-12-26T04:59:50.487Z · LW(p) · GW(p)
Thanks!
comment by Capla · 2014-12-24T01:50:49.703Z · LW(p) · GW(p)
I am naive and inexperienced in the ways of love, but it seems implausible that romantic love is often (usually?) bidirectional. Of all the people of the right sex that one is close to, why do people usually fall in love with someone who is likewise in love with them?
Replies from: ChristianKl, MrMind, asr, Capla, CBHacking↑ comment by ChristianKl · 2014-12-24T10:13:23.803Z · LW(p) · GW(p)
When it comes to dumb animals, it's because falling in love requires a courtship ritual with multiple steps. If either partner doesn't follow the steps of the ritual, the other animal doesn't feel the emotions needed to go to the next step.
If a male pig emits enough androsterone, a female pig goes directly into mating position.
When it comes to humans, on the other hand, it's possible to fall in love quite randomly without it being bidirectional. Humans also don't have a mating season and mate quite randomly.
At the same time, it's not so random that the old impulses which make bidirectional love more likely don't exist. Rapport between humans gets facilitated through mimicry, where people who like each other copy each other's movements. Breathing syncs.
There are also things that happen on a deeper level and that are more difficult to explain. On the Sunday at the end of the Leipzig Solstice I had a conversation on a very deep emotional level. It wasn't love, but still deep emotions. The woman I was talking to left the room because, by empathizing with me about an issue that's painful for me, she felt very strong pain in her heart.
Given my current understanding of emotional mimicry and emotions, I consider bidirectional love at first sight to be quite plausible, even if it doesn't happen to most people.
It also happens that people have sex before they are fully in love. Feeling pleasure during sex then gets associated with the sex partner and increases the feelings towards that partner.
Replies from: Capla↑ comment by Capla · 2014-12-24T16:42:07.897Z · LW(p) · GW(p)
Humans also don't have a mating season
Why not? When did we diverge from other mammals in this regard?
Replies from: ChristianKl, Lumifer↑ comment by ChristianKl · 2014-12-25T12:30:42.459Z · LW(p) · GW(p)
The female menstrual cycle is unique to primates. I'm not sure how far back this goes, and I have opened a question over at Biology Stack Exchange.
But that's not everything. A human can get horny without feeling love. A human can feel love after chatting on a dating website and telling himself stories in his mind about how awesome it will be to be with the other person.
While bonobos like having sex, I doubt that they would be capable of falling in love like that. The ability to tell yourself that kind of story seems to be something very human and evolutionarily recent.
Nature doesn't protect us from unilaterally falling in love by fantasizing about a person who isn't present.
↑ comment by Lumifer · 2014-12-24T18:04:55.029Z · LW(p) · GW(p)
Humans are not unique in that respect.
Replies from: ChristianKl↑ comment by ChristianKl · 2014-12-25T11:58:21.526Z · LW(p) · GW(p)
If you read that article on "Estrous cycle" you linked, it says "Humans have menstrual cycles instead of estrous cycles. [...] Unlike animals with estrous cycles, human females are sexually receptive throughout their cycles." Humans along with some other primates are unique in that they have menstrual cycles.
↑ comment by MrMind · 2014-12-24T09:17:22.688Z · LW(p) · GW(p)
it seems implausible that romantic love is often (usually?) bidirectional
I think you are correct on this one.
why do people usually fall in love with someone who is likewise in love with them?
I have a two-way answer. First: they don't. Unreciprocated love is quite a theme in Western culture, and I suspect in other cultures too, because as a situation it is both extremely common and highly dramatic. Second: there's also an active selection effect, where we tend to like people who like us and whom we are more familiar with. Regarding social circles and romantic interests, it's an exploration/exploitation tradeoff, but the important thing is that there's an exploitation choice in the first place.
↑ comment by asr · 2014-12-24T05:16:49.983Z · LW(p) · GW(p)
"Falling in love" isn't this sudden thing that just happens, it's a process and it's a process that is assisted if the other person is encouraging and feels likewise. Put another way, when the object of your affection is uninterested, that's often a turnoff, and so one then looks elsewhere.
Replies from: Capla↑ comment by Capla · 2014-12-24T16:38:22.420Z · LW(p) · GW(p)
when the object of your affection is uninterested, that's often a turnoff, and so one then looks elsewhere.
I'm probably not a typical case (which is probably why I am confused), but this has not been my experience.
I don't know if I've ever been "in love", given that "in love" is so sloppily and subjectively defined. I'm not clear on what's a crush and what's "in love." But suffice it to say that I have felt feelings for someone that someone who is less introspective (or philosophically careful) would likely label as being in love.
When I am in such a state, the feelings of the other towards me are fairly irrelevant to my feelings towards her. I want the best for the object of my love and I sort of "melt" inside when I see her smile, but I don't want her to love me back, necessarily. I want her to be happy. It's not as if I'm fishing for a relationship and, if that one isn't biting, will go find someone else to fall in love with.
However, I have worked hard to develop myself emotionally, to forgive always, and to love unconditionally.
Replies from: Viliam_Bur, hyporational↑ comment by Viliam_Bur · 2014-12-29T11:27:04.843Z · LW(p) · GW(p)
I don't know if I've ever been "in love", given that "in love" is so sloppily and subjectively defined. I'm not clear on what's a crush and what's "in love."
I guess there are multiple components of "love", and different people may use different subsets. For example:
- obsessively thinking about someone
- wanting to cuddle with someone
- wanting to have sex with someone
- admiring someone
- caring about someone's utility
- trusting that someone cares about your utility
- precommitting to cooperate in the Prisoner's Dilemma with someone
To avoid possible misunderstanding: the ordering here is random; it is not supposed to suggest some "lower/higher" or "less intense / more intense" classification, and I guess most of these items are independent of each other, although there can be correlations.
Replies from: Capla↑ comment by Capla · 2014-12-29T17:29:07.692Z · LW(p) · GW(p)
Personally,
obsessively thinking about someone
I know that one, I think, depending on what obsessive means. I've never been impaired by it. In fact I've often used thoughts of people who are better than me (or my perfect illusion of people who are in fact not so great) to motivate myself (e.g. "Y wouldn't give up now", "If she could do it, I can do it.") Note that I use comparisons to fictional role models in much the same way...which is fascinating. Maybe I'm only motivated by fictional models of humans.
wanting to cuddle with someone
I don't think so? I like to cuddle with my dog (who, now that I think about it, fits more of these [1, 2, 5, 6 to the limited extent that he can understand] than almost any human I've met). I think it's fair to say that I'm in love with my dog, or I was once in love with my dog, but have fallen out of love.
wanting to have sex with someone
My desire to fuck someone has so far been entirely unrelated to my romantic love/admiration for the person. Sometimes I've felt that the person is so beautiful that sex would be a debasement, but maybe I have (like most of us?) screwed-up [heh] associations with sex?
admiring someone
See above. Of course, I admire people I'm not in love with, but my admiration for those people goes way overboard.
caring about someone's utility
I care for all people and all creatures. I don't think I even prioritize the people I know.
trusting that someone cares about your utility
Nope. I don't think so. I may love a person, but that doesn't mean I trust him/her.
precommitting to cooperate in the Prisoner's Dilemma with someone
Explicitly? I don't think I've ever done that, or that I know anyone who has. Isn't this just number 6, again?
My takeaway from these is that the experience of being in love (at least for me) seems to consist primarily of having an exaggerated view of the other person. I asked elsewhere if romantic love might just be a special case of the halo effect. I wonder if perfect rationalists are immune to love.
Replies from: Viliam_Bur↑ comment by Viliam_Bur · 2014-12-30T09:46:44.143Z · LW(p) · GW(p)
Uhm, this probably wouldn't match my definition of "love". On the other hand, I know other people who define "love" exactly like this, although they would probably object against an explicit description. Let's move from definitions to anticipated experiences.
If the emotion you described "seems to consist primarily of having an exaggerated view of the other person", then it seems unsustainable in long term. If you spend a lot of time with someone, you will get more data about them, and the exaggerated view will be fixed. Also, this "special case of the halo effect" may be powered by dopamine, which (according to other people; I am not a biochemist) usually expires after 3-6 months. Unless during that time you also develop other feelings for the person, the foundation for the relationship will be gone. Then the same feeling may develop towards another person. From the inside this feels like "realizing that X wasn't your true love, because you have met Y, and now you understand that Y is your true love". Of course, in the next cycle it leads to "realization" that Y also wasn't the "true love". (It is possible that this effect is responsible for many divorces, so I would strongly advise against getting married in this situation.)
Shortly, what you described seems to me sufficient for "hooking up", and insufficient for a long-term relationship. There are many people in the same situation. It is possible to develop also stronger feelings.
Replies from: Capla↑ comment by Capla · 2014-12-30T19:41:46.484Z · LW(p) · GW(p)
First I want to comment that I am making a firm distinction between love (which I have for many people, including those not of my preferred sex, and sometimes non-humans or even plants, and which is a sort of sympathy with and recognition of the beauty of the other) and being "in love" (which seems to be a far more intense and consuming feeling towards one specific person at a time, I think). I love my friends. Being "in love" is something else. I think it is the case that one must love the person with whom one is in love (one is necessary but not sufficient for the other), but perhaps not.
Furthermore, being in love (if I can call my experience that) doesn't have anything much to do with being in a romantic relationship with a person.
If you spend a lot of time with someone, you will get more data about them, and the exaggerated view will be fixed
Not necessarily, or we couldn't fall into affective death spirals. There's confirmation bias to contend with.
then it seems unsustainable in long term.
Yes. That is probably the case in most situations.
From the inside this feels like "realizing that X wasn't your true love, because you have met Y, and now you understand that Y is your true love". Of course, in the next cycle it leads to "realization" that Y also wasn't the "true love". (It is possible that this effect is responsible for many divorces, so I would strongly advise against getting married in this situation.)
Did you see what I wrote here?
I have distinct meta-cognitive thoughts. For instance, I feel like I will love this person forever, because that is entailed in the feeling, but I am also aware that I have no real way of predicting my future mental states and that people who are in love frequently and wrongly predict the immortality of the feeling.
also..
None of this has any extremely obvious effects on my decision making: I wouldn't run off and get married, for instance, because of that voice of rational meta-cognition. However, it probably biases me in all sorts of ways that I can't track as readily.
Replies from: Viliam_Bur
↑ comment by Viliam_Bur · 2014-12-31T15:19:40.315Z · LW(p) · GW(p)
Okay then. If you distinguish between "love", "being in love" and "being in a romantic relationship", I'd say you already have better understanding than most people who put all of this into one big blob labeled "love".
Replies from: Capla↑ comment by hyporational · 2014-12-26T04:24:56.426Z · LW(p) · GW(p)
When I am in such a state, the feelings of the other towards me are fairly irrelevant to my feelings towards her.
I think people learn to regulate themselves when they realize how self-defeating this is. I know I did. The key to self-regulation is stopping the process in its infancy. This applies to other feelings too.
Replies from: Capla↑ comment by Capla · 2014-12-26T19:07:08.730Z · LW(p) · GW(p)
I think people learn to regulate themselves when they realize how self-defeating this is.
What is "this"?
Replies from: hyporational↑ comment by hyporational · 2014-12-27T03:06:52.966Z · LW(p) · GW(p)
I was thinking about removing the comment because it wasn't clear to me what you were referring to. I think having strong feelings for other people while ignoring how they feel about you is generally a waste of resources.
Replies from: Capla↑ comment by Capla · 2014-12-24T16:48:56.571Z · LW(p) · GW(p)
Ok. Reading the replies to this, I feel like my model of how dating works is fairly divergent from the norm.
Do people date (some people: I know that people date for all sorts of reasons) in the hopes of falling in love with each other? You pick someone who is reasonably attractive to you, then go through the motions of having a relationship (which may include eating together or having sex together, among other things) until you fall in love?
The behaviors precede the feelings?
That is not at all the model I am working from.
Replies from: Lumifer, None, drethelin, CBHacking↑ comment by Lumifer · 2014-12-24T17:05:22.404Z · LW(p) · GW(p)
It's probably useful to think of relationships as feedback loops. From that point of view it's not particularly important whether you started with behaviours or feelings -- what's important is whether that feedback loop got going.
Replies from: Capla↑ comment by Capla · 2014-12-24T18:57:39.120Z · LW(p) · GW(p)
How is it exactly that the behaviors produce the feelings? Is it "just" that flattery and attention make people feel good, which leads to "in love" feelings? What is it about dating that engenders romance? This is what seems foreign to me, but then again,
I am naive and inexperienced in the ways of love
Replies from: Lumifer, Viliam_Bur, alienist
↑ comment by Lumifer · 2014-12-24T19:59:28.706Z · LW(p) · GW(p)
Keep in mind that the notion of romantic love is fairly recent (it goes back to the medieval troubadours, I think) and the idea that romantic love is the proper basis for marriage among common people is very recent (the 19th century would be my off-the-top-of-my-head guess). People married without "feelings" for many centuries and guess what, it mostly worked.
Humans are adaptable.
Replies from: NancyLebovitz, JoshuaZ↑ comment by NancyLebovitz · 2014-12-25T12:49:17.126Z · LW(p) · GW(p)
I thought people had noticed romantic love well before the troubadours; it's just that people used to think romantic love was madness.
↑ comment by JoshuaZ · 2014-12-25T13:54:05.044Z · LW(p) · GW(p)
Is that really accurate? A number of the stories in Ovid's Amores and Metamorphoses sound pretty close to what we'd call "romantic love", and that's from around 20 BCE; there's no indication that anything there was shocking or surprising to Roman notions of love.
Replies from: Viliam_Bur↑ comment by Viliam_Bur · 2014-12-29T11:37:20.400Z · LW(p) · GW(p)
I would guess that in the past "romantic love" was a luxury that only wealthy people could afford (e.g. citizens of Ancient Rome) and often happened outside of marriage; most people married for economic reasons.
In other words "you can love someone" is old, but "you should marry the person you love" is new.
↑ comment by Viliam_Bur · 2014-12-29T11:54:37.480Z · LW(p) · GW(p)
Spending more time together means you (a) can notice more interesting things about the other person, and (b) have more common experience. Both of those can contribute to love.
There are things that you only notice after long interaction, for example how reliable the other person is, or how they behave in exceptional situations. (On the other hand, there is also a chance you will notice negative traits.) Having things in common increases the feeling of closeness.
Replies from: Capla↑ comment by Capla · 2014-12-29T17:06:15.342Z · LW(p) · GW(p)
Yes. I understand that. But it is just as true for my friends, about whom I am very selective, and with whom I grow very close. I have only occasionally developed quasi-romantic feelings (feelings that, given what I'm reading here, seem close enough to qualify as being "in love") for a friend. Why is that?
(Admittedly, both the people I've "fallen for" have been 1) of my preferred sex and 2) particularly awesome.)
People keep reminding me that it's not a dice roll, it's a process, yet from my perspective it seems pretty random. I've never tried to "fall in love", and it's a trope [fictionalized?] that you can't control it. That's why it's described as FALLING. It seems a lot like a dice roll!
But maybe I'm not giving enough credence to individuals' responsiveness to stimuli. Reviewing what I know about dating, it seems well designed for building a Pavlovian association between the other person and feelings of intimacy, attraction and pressure (for instance, candlelight is romantic because low-light conditions make the pupils dilate, just as they do when one is aroused; I wonder if seeing a movie is a popular date because it simulates a highly emotional experience that both people can undergo together, without the difficulties that accompany such high emotions in real life).
This is funny to me, because I have thought that dating, as it is usually practiced, is not a particularly good way to get to know someone well. But as is often the case, it's not that it doesn't work; it's that it does something different from what is advertised.
Replies from: Viliam_Bur↑ comment by Viliam_Bur · 2014-12-30T10:01:54.744Z · LW(p) · GW(p)
I would guess that low light also creates a small level of instinctive fear, which creates a desire to be together with someone (for protection).
↑ comment by drethelin · 2014-12-25T07:17:20.684Z · LW(p) · GW(p)
It's not going through the motions; you're just placing too much of an emphasis on love, and probably on love at first sight. When people meet someone who is attractive, they have certain emotions toward them. When they start to interact, they get more and more information about how well they get along, the kinds of things the other person enjoys, etc. This is a fun process if you're attracted to someone, and you can get more and more interested in them. This isn't "going through the motions", but neither is it love at first sight. It's the development of relatively minor feelings into a long-term, very emotional commitment, which we generally refer to as love. The reason romantic love is often bidirectional is that it's not random: it's the outcome of a process of mutual attraction and interaction.
TLDR: Love is not assigned by cosmic die roll, but an emotional outcome of human behaviors towards those around them.
Replies from: Capla↑ comment by Capla · 2014-12-26T02:03:39.747Z · LW(p) · GW(p)
When they start to interact, they get more and more information about how well they get along, the kinds of things the other person enjoys, etc. This is a fun process if you're attracted to someone, and you can get more and more interested in them.
This applies equally to getting to know someone in a non-romantic context, and in fact fairly well describes my excitement at meeting someone who I think might be a potential friend. Why is it sometimes feelings of love instead of friendship?
(Note that this is in the context of not really understanding the difference between a friend-relationship and a romantic-relationship.)
Replies from: hyporational, drethelin↑ comment by hyporational · 2014-12-26T05:17:58.873Z · LW(p) · GW(p)
Why is it sometimes feelings of love instead of friendship?
Could be just context and interpretation, which do make the psychological reality of the situation different.
↑ comment by CBHacking · 2014-12-25T10:23:03.783Z · LW(p) · GW(p)
Please either clearly define "love" or taboo the word and ask the question again. It's a vague term and I think your meaning of it differs somewhat from the rest of us. For one thing, I would not say it's a binary state of "in love with" or "not (yet) in love with" a person.
Many people date with the hopes of developing a strong, long-lasting romantic connection, but that doesn't mean that if such a connection fails to form then the entire effort was wasted. Being in a romantic relationship is (supposed to be) a pleasant experience, regardless of how far into it you are. People who are explicitly looking for a life partner would do well to only seek out potential dates/partners with the same goal in mind, but even then you can have a fun time for a few dates and then decide that it's just not going to work out, or spend a happy year together and discover that you never really stop looking for somebody else nonetheless.
As for "the behaviors precede the feelings?", I'm not sure how to answer that except that about "love" specifically, but in general the answer is "no". You engage in behaviors commensurate with your current feelings. People on a first date frequently have nothing between themselves but a shallow attraction (online dating changes this a bit, but that's another topic); they get dinner together / take a walk together / have a drink together because they are experiencing a mutual feeling of "I want to start getting to know that person, probably with an eye to developing a romantic relationship". If they have sex on the first date, it's because after some time together each one is attracted to and aroused by the other person, not (generally) because they are "in love". If they sincerely ask to set up another date, that's probably because they felt the first date was pleasant (however "far" it went) and would like to spend more time together in a potentially-romantic context. None of these behaviors - asking somebody out, going on a first date, going on successive dates, kissing, having sex, etc. - require being "in love" with the other person; merely they require that both people want to.
Of course, the forming of a romantic attachment is a different matter. Anything I say on that subject should be taken with approximately a tablespoon of salt; I am no expert in this area and I know any explanation I try and give on the subject will be inadequate. With that said, I'll take a shot at it: forming such attachments (one possible value of "falling in love") takes time and will often (but not always) grow out of spending a lot of time together romantically and deeply enjoying the experiences. You can have lots of fun together, over months or even years, without forming much of a connection; I am told that it's possible for people to spend one evening together and feel like they can't live without the other person. Neither scenario is either a prerequisite for nor precludes "the motions of having a relationship".
Replies from: Capla↑ comment by Capla · 2014-12-26T02:20:17.531Z · LW(p) · GW(p)
Please either clearly define "love" or taboo the word and ask the question again. It's a vague term and I think your meaning of it differs somewhat from the rest of us.
I don't know if being "in love" is a thing that actually exists, and if it does, what exactly it is, but in general people seem to regard whatever the term refers to as something great and profound, so I am interested in it. I'm trying to figure out what's behind the word. So maybe you can tell me?
Anyone here who has been "in love", can you tell us what the characteristics were? I'd like to know if it corresponds to an internal state that I have experienced. In particular, how does romantic love differ from friendship?
Replies from: hyporational, erratio↑ comment by hyporational · 2014-12-26T05:01:55.521Z · LW(p) · GW(p)
I don't know if being "in love" is a thing that actually exists
What do you mean? There's a word called love and there's a reality that people refer to with the word. Words are replaceable.
I'd like to know if it corresponds to an internal state that I have experienced.
The problem is the same hormonal process doesn't necessarily feel exactly the same in everyone's body. I think it would be more reliable to inspect your thoughts and your behavior towards the person and compare them to other people in love.
In particular, how does romantic love differ from friendship?
That's like asking how hot differs from cold, and perhaps equally inexplicable, if you're asking about being in love. If you had experienced it, you'd know. More reasonable people look mainly for friendship in their life partners and therefore might use the words love and friendship interchangeably. They probably wouldn't talk about being in love, however.
As a data point, I've been in love several times, also reciprocally. It hasn't made me happier on the whole, since the state makes me short-sighted and neglect more lasting sources of happiness, and therefore I'm not seeking it anymore.
Replies from: Capla↑ comment by Capla · 2014-12-26T19:13:37.573Z · LW(p) · GW(p)
I think it would be more reliable to inspect your thoughts and your behavior towards the person and compare them to other people in love.
Exactly. But I don't know what the thoughts of other people are and I have reason to think that my external behaviors will differ from others, even if we are both motivated by the same feelings.
Replies from: hyporational↑ comment by hyporational · 2014-12-27T03:28:46.197Z · LW(p) · GW(p)
When I'm in love, my thoughts are obsessed with the person and other thoughts are put aside. My thinking is distorted by baseless optimism. I fail to notice flaws in them I would notice with a sober mind, and when I do notice flaws I accept flaws I wouldn't normally accept. Since being in love feels so good, much of my thinking is dedicated to reinforcing my feelings through imagining situations with the person when they're not around and of course nothing in those situations ever goes wrong. This creates unrealistic expectations. I plan my life with them optimistically way further than I would plan my life in any other regard. I can't properly analyze my mental state while being in love, since analytical thinking would likely end it and that's the last thing I want.
Replies from: Capla, Capla↑ comment by Capla · 2014-12-27T05:12:45.704Z · LW(p) · GW(p)
Comparing...
When I'm in love, my thoughts are obsessed with the person and other thoughts are put aside.
I'm not sure about obsessed, but when I'm [the state possibly referenced by the phrase "in love", and which I will represent by "X"] I do think about the person a lot, significantly more than anyone else in my life, despite not seeing this person with high frequency.
My thinking is distorted by baseless optimism. I fail to notice flaws in them I would notice with a sober mind, and when I do notice flaws I accept flaws I wouldn't normally accept.
I quibble with "baseless." When I'm X I certainly express great admiration for the person, bordering on a perception of perfection, but the individual in question has always been someone who is legitimately exceptional by objective measures. However, it does seem to involve a very strong halo effect.
Since being in love feels so good
Check.
imagining situations with the person when they're not around and of course nothing in those situations ever goes wrong
I'm not sure what sort of imagining you're doing, but I can relate to imagined conversations during which the person in question is impressed to the point of astonishment by some virtue of mine (my restraint, or my altruism, or something).
This creates unrealistic expectations.
I don't think so, but then, I've never desired to have a romantic relationship with either of my objects of affection. I have desired to be close to them and spend time with them. I'm not really sure what "romantic" is.
I plan my life with them optimistically way further than I would plan my life in any other regard.
Nope. When I'm X, I'm not doing any planning.
I can't properly analyze my mental state while being in love, since analytical thinking would likely end it and that's the last thing I want.
No again. I have distinct meta-cognitive thoughts. For instance, I feel like I will love this person forever, because that is entailed in the feeling, but I am also aware that I have no real way of predicting my future mental states and that people who are in love frequently wrongly predict the immortality of the feeling. I laugh at myself and at how ridiculous I am. My ability to maintain a clear outside-view does nothing to squash the subjective feelings.
None of this has any extremely obvious effects on my decision making: I wouldn't run off and get married, for instance, because of that voice of rational meta-cognition. However, it probably biases me in all sorts of ways that I can't track as readily.
I should also note that I might sometimes feel a twinge of jealousy, but release it almost immediately.
So... have I been "in love"? It sounds like I've had most (?) of the symptoms. How is this any different from just having a "crush" on someone?
↑ comment by Capla · 2014-12-27T04:45:27.367Z · LW(p) · GW(p)
I was just rereading the sequences, and I wonder, is being "in love" just an application of the halo effect?
↑ comment by erratio · 2014-12-28T03:47:36.487Z · LW(p) · GW(p)
Congratulations, if you can't easily discern the difference between romantic and platonic love then you may be aromantic or demiromantic!
Unfortunately, as one of those myself, I can't shed much light on the difference despite currently being in a romantic relationship. But you might start off looking at those terms and various forums for asexual/aromantic types to get a better handle on where the applicable lines are.
↑ comment by CBHacking · 2014-12-24T08:37:06.276Z · LW(p) · GW(p)
Unrequited love is a thing that exists, and breakups where only one side feels the relationship should end are common. In fact, one reason that people decide to stop dating is because the other person is falling / has fallen in love with them, and they don't feel that strongly towards that person (or see themselves likely to do so soon) and therefore call it off to let both people go on to find other partners (EDIT: polyamory mitigates this problem to a degree but you will still have a problem if there is severely unequal attachment between partners). I have no statistics for the number of people who end up staying with the first person they said "I love you" to, or with the first person who said "I love you" to them, but I expect it (90%) to be under 50%, and wouldn't be surprised (50%) if it's 25% or lower, for either direction. People do not "usually fall in love with someone who is likewise in love with them".
You can quibble over definitions of what constitutes "love"; people, including those in relationships, do it constantly. If you simply take it as "capacity to be in a strong romantic relationship with a person for a period of years" then for most people there are a huge number of people they could fall in love with, and if they spend a lot of time with one of those people and mutually engage in displays of closeness, affection, desire, and pleasure in spending time together, many of those people will mutually fall in love.
Don't believe in the myth of "the one for me"; high standards are good, but if you spend a lot of time with somebody who you really like and who really likes you, you will probably find that the characteristics you're looking for in a love and the way you view the other person will become more aligned. That applies to both (or potentially "all", for cases of more than two people) parties. Definitely don't believe in "love at first sight"; you may well fall in love with somebody who you felt you loved from the moment you met them, but you're far more likely to fall in love with somebody who you got to know and discovered a mutual attraction and desire for each other's affection and presence in your/their life.
comment by gjm · 2014-12-23T13:04:37.469Z · LW(p) · GW(p)
Is it my imagination, or has there been a bit of a rash of account-deleting lately? I've noticed (within the last month or thereabouts) one formerly-prominent LW user explicitly announcing the intention to delete their account and then doing it, and another just deleting their account without -- so far as I saw -- any announcement. I'm not so observant that I would expect to have noticed every such deletion. So quite likely there are more.
Replies from: emr, Lumifer↑ comment by emr · 2014-12-26T20:43:08.644Z · LW(p) · GW(p)
I was never a prominent poster, but I recently switched accounts for idiosyncratic reasons. I noticed only one other recent deletion (the first case that you mention).
Replies from: gjm↑ comment by gjm · 2014-12-27T03:07:20.965Z · LW(p) · GW(p)
I suspect -- on the basis of similarity of name -- that yours may be the second account I had in mind. (Though if so, I'd classify you as a somewhat-prominent poster: certainly one whose account name I recognized.)
Given this and other responses and non-responses, I'm revising downward my estimate of how likely it is that there are a bunch of other recent deletions I haven't noticed. So, maybe not much of a rash after all.
Replies from: emr↑ comment by emr · 2015-01-01T04:28:35.446Z · LW(p) · GW(p)
Hmm. I didn't think this name was a strong clue. And I wouldn't want to be wrongly associated with someone else's old account, so if you care about it, you can PM me with the old name and I can verify it or not (the old account was essentially as anonymous as this name).
Replies from: gjm↑ comment by Lumifer · 2014-12-23T16:32:53.187Z · LW(p) · GW(p)
Not sure there is a common pattern. The former case had to do with certain privacy realizations, I think, and the latter case was a lack of fit between the person and the local culture.
Replies from: gjm↑ comment by gjm · 2014-12-23T17:37:39.328Z · LW(p) · GW(p)
I think you just provided me with more evidence that there are more cases than I explicitly know about, because the not-announced one I had in mind was someone whose fit with the local culture was (I think) just fine.
[EDITED to add: But clearly we have the same first case in mind.]
Replies from: CBHackingcomment by JoshuaZ · 2014-12-25T22:24:47.138Z · LW(p) · GW(p)
The Oatmeal has a highly positive review of his experience with the new Google self-driving car. While the Oatmeal is somewhat nerd focused, there's enough of an influence into the more mainstream culture that this sort of thing could have some impact. He makes the point quite effectively that the cars are highly conservative in their driving. The piece may be worth reading simply for the amusement value.
Replies from: fubarobfusco↑ comment by fubarobfusco · 2014-12-26T01:06:30.454Z · LW(p) · GW(p)
Self-driving cars offer a straightforward, practical way to talk about AI ethics with people who've never written a line of code.
For instance, folks will ask (and have asked) questions like, "If a self-driving car has to choose between saving its owner's life and saving the lives of five pedestrians, which should it choose?" With overtones of distrust (I don't want my car to betray me!), class anxiety (I don't want some rich fuck's car to choose to run my kids over to save him!), and anti-capitalism/anti-nerdism (No matter what those rich dorks at Google choose, it'll be wrong and they should be sued to death!).
And the answer that Google seems to have adopted is, "It should see, think, and drive well enough that it never gets into that situation."
Which is exactly the right answer!
Almost all of the benefit of programming machines to make moral decisions is going to amount to avoiding dilemmas — not deciding which horn to impale oneself on. Humans end up in dilemmas ("Do I hit the wall and kill myself, or hit the kids on the sidewalk and kill them?") when we fail to see the dilemma coming in time to avoid it. Machines with better senses and more predictive capacity don't have to have that problem.
Replies from: Jiro, ChristianKl↑ comment by Jiro · 2014-12-28T03:54:00.821Z · LW(p) · GW(p)
Folks will ask questions like "how do we balance the usefulness of energy against the danger to the environment from using energy". And the answer is "we should never get into a situation where we have to make that choice".
Of course, anyone who actually gave that answer to that question would be speaking nonsense. In a non-ideal world, sometimes you won't be able to maximize or minimize two things simultaneously. It may not be possible to never endanger either the passengers or pedestrians, just like it may not be possible to never give up using energy and never endanger the environment. It's exactly the wrong answer.
Replies from: fubarobfusco↑ comment by fubarobfusco · 2014-12-28T04:54:47.666Z · LW(p) · GW(p)
Sure, you want to make sure the behavior in a no-win situation isn't something horrible. It would be bad if the robot realized that it couldn't avoid a crash, had an integer overflow on its danger metric, and started minimizing safety instead of maximizing it. That's a thing to test for.
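To make that failure mode concrete, here is a toy sketch of the sign flip (it uses numpy's fixed-width integers purely for illustration; nothing here is meant to resemble real autonomous-vehicle code):

```python
import numpy as np

# A hypothetical 32-bit "danger metric" already sitting at its maximum value.
danger = np.int32(2**31 - 1)

# One more increment wraps around to the most negative representable value
# (numpy may print a RuntimeWarning here), so a controller that minimizes
# "danger" would now treat the catastrophic state as maximally attractive.
danger = danger + np.int32(1)
print(danger)  # -2147483648
```

The point of testing for this isn't that real systems use naive int32 counters; it's that any bounded metric needs its saturation behavior checked.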
But consider the level of traffic fatalities we have today.
How much could we reduce that level by making drivers who are better at making moral tradeoffs in an untenable, no-win, gotta-crash-somewhere situation ... and how much could we reduce it by making drivers who are better at avoiding untenable, no-win, gotta-crash-somewhere situations in the first place?
I suggest that the latter is a much larger win — a much larger reduction in fatalities — and therefore far more morally significant.
↑ comment by ChristianKl · 2014-12-28T14:13:18.774Z · LW(p) · GW(p)
And the answer that Google seems to have adopted is, "It should see, think, and drive well enough that it never gets into that situation."
I don't think designing a car with the idea that it will never get into accidents is a great idea. Even if the smart car itself makes no mistake, it can get into a crash and should behave optimally in the crash.
Even outside of smart cars there are design decisions that can increase the safety of the car owner at the expense of the passengers of a car you crash into.
Replies from: fubarobfusco↑ comment by fubarobfusco · 2014-12-28T19:58:59.232Z · LW(p) · GW(p)
I don't think designing a car with the idea that it will never get into accidents is a great idea.
I totally agree! You want to know what the limit cases are, even if they will almost never arise. (See my other response on this thread.)
But if you want to make a system that drives more morally — that is, one that causes less harm — almost all the gain is in making it a better predictor so as to avoid crash situations, not in solving philosophically-hard moral problems about crash situations.
Part of my point above is that humans can't even agree with one another what the right thing to do in certain moral crises is. That's why we have things like the Trolley Problem. But we can agree, if we look at the evidence, that what gets people into crash situations is itself avoidable — things like distracted, drunken, aggressive, or sleepy driving. And the gain of moving from human drivers to robot cars is not that robots offer perfect saintly solutions to crash situations — but that they get in fewer crash situations.
comment by Jackercrack · 2014-12-22T22:09:49.013Z · LW(p) · GW(p)
I, like many people, have a father. After a long time of not really caring about the whole thing, he's expressed an interest in philosophy this Christmas season. Now, as we know, a lot of philosophy is rather confused, and I don't see any big reasons for him to start thinking truth is irrelevant or other silly things. I don't think the man is considering reading anything particularly long or in-depth.
So, I'm asking for book recommendations for short-ish introductions to philosophy that don't get it all wrong. Solid, fundamental knowledge about how we know what we know, why we can know it, and so on. The whole Less Wrong thing, really. I think I'll also send him a copy of Epistemology 101 for Beginners.
All ideas are welcome even if it's not 100% the right book.
Replies from: closeness, Punoxysm, Interpolate↑ comment by closeness · 2014-12-23T11:44:00.361Z · LW(p) · GW(p)
Not a book but: http://sqapo.com/
Replies from: LizzardWizzard↑ comment by LizzardWizzard · 2014-12-23T13:08:19.516Z · LW(p) · GW(p)
Thank you, this is awesome!
Replies from: Viliam_Bur↑ comment by Viliam_Bur · 2014-12-29T11:12:35.981Z · LW(p) · GW(p)
And it can be printed, if someone insists on a book form.
↑ comment by Interpolate · 2014-12-23T03:15:20.117Z · LW(p) · GW(p)
I haven't read this book myself, but I've read other books in this series and would recommend them:
http://www.amazon.com/Philosophy-Short-Introduction-Edward-Craig/dp/0192854216/
I like the idea of directing him to the Less Wrong sequences as he would probably benefit more from that. It's available in pdf and other print-suitable forms here so you could print it out and put it in a fancy binder or something:
http://lesswrong.com/lw/37v/sequences_in_alternative_formats/
comment by beoShaffer · 2014-12-27T00:03:30.939Z · LW(p) · GW(p)
Outside view/counterfactual exercise. You have a cause, say global warming, which you think so important that even a small change to its odds of success trumps the direct moral impact of anything else you can do with your life. E.g. you believe that even an extra dollar of funding for alternative energy is more morally important than saving a human life (given that the person has a net 0 carbon footprint). However, you are open to the possibility that there is an even more important cause that trumps yours to a similar degree. You also know that there have historically been people who thought their causes were as important as you think yours is and who turned out to be horribly wrong, at least from your perspective, e.g. the Bolsheviks. How much do you focus on directly contributing to your cause vs. contributing to public goods, not defecting in prisoner's dilemmas, and other conventional do-gooding?
Replies from: ChristianKl↑ comment by ChristianKl · 2014-12-28T13:57:37.457Z · LW(p) · GW(p)
The general Effective Altruism idea is that you shouldn't go for a very tiny chance of influencing a major cause. Instead of focusing on causes, you should seek out opportunities that are effective at producing change.
comment by is4junk · 2014-12-23T18:36:49.950Z · LW(p) · GW(p)
Does anyone feel that cryonics is like a bad lottery? A ticket costs thousands of dollars and the chance to win is unknown. Even worse, if you do 'win' it is not clear what you win (your prize: here is a zombie that thinks it is you) or when you win it.
Replies from: Manfred, shminux, ChristianKl↑ comment by Shmi (shminux) · 2014-12-23T22:27:08.286Z · LW(p) · GW(p)
Not a lottery, more like an insurance policy (which it usually is, literally) without a clear description of benefits. On a related note, I'd take a zombie who thinks it is me over no traces of me any day.
Replies from: Capla↑ comment by Capla · 2014-12-26T02:25:16.268Z · LW(p) · GW(p)
Why?
Why is a zombie that thinks it's you preferable to a zombie that doesn't?
Replies from: shminux↑ comment by Shmi (shminux) · 2014-12-26T02:27:38.675Z · LW(p) · GW(p)
Because then it's not a zombie, it's me, as far as I'm concerned, in a zombie body. (I assume that most of my memories and my personality are preserved after reanimation.)
Replies from: Capla↑ comment by Capla · 2014-12-26T02:36:32.669Z · LW(p) · GW(p)
Oh! You mean a zombie-zombie, not a p-zombie.
Hah!
Replies from: shminux↑ comment by Shmi (shminux) · 2014-12-26T03:04:55.049Z · LW(p) · GW(p)
I don't know what would constitute a p-zombie, in any context. I was just using the terminology of the comment I replied to, presumably calling the hypothetical entity which inherits my identity a zombie.
↑ comment by ChristianKl · 2014-12-24T09:37:56.222Z · LW(p) · GW(p)
Instead of "unknown" pick a number. What the percentage that you believe it would work? Then go and calculate expected payoffs based on the price.
Replies from: is4junk↑ comment by is4junk · 2014-12-24T15:54:01.762Z · LW(p) · GW(p)
When I think about it I end up with a bad Drake equation for both the 'win' and the 'outcome payoff'. In the Drake equation you at least get to start off with the number of stars in the galaxy.
When you win is also interesting. Being revived 1 year after death should be worth more than 1m years after death.
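For illustration, here is a minimal sketch of what such a bad Drake equation might look like (every factor and every number below is a made-up placeholder, not an estimate anyone is defending):

```python
# Hypothetical conditional probabilities for a cryonics "Drake equation".
# All values are placeholders for illustration only.
factors = {
    "preserved without fatal damage": 0.5,
    "organization survives until revival is possible": 0.3,
    "revival technology is ever developed": 0.2,
    "revived patient is recognizably you": 0.5,
}

p_win = 1.0
for name, p in factors.items():
    p_win *= p

print(f"P(win) = {p_win:.3f}")  # 0.015 with these placeholders

cost = 80_000  # hypothetical lifetime cost in dollars
print(f"Cost per expected win = ${cost / p_win:,.0f}")  # ~$5.3 million
```

The trouble, as with the real Drake equation, is that several of the factors have no agreed-upon basis, so the product can be pushed almost anywhere.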
Replies from: jkaufman, Lumifer↑ comment by jefftk (jkaufman) · 2014-12-27T23:11:53.475Z · LW(p) · GW(p)
Previous discussion of Drake-style equations for cryonics: http://lesswrong.com/lw/fz9
Replies from: is4junk↑ comment by Lumifer · 2014-12-24T16:34:39.562Z · LW(p) · GW(p)
Being revived 1 year after death should be worth more than 1m years after death.
Why? If you assume progress, wouldn't you want to be revived into a more advanced society rather than the same old mess?
Replies from: Jiro↑ comment by Jiro · 2014-12-24T16:38:00.014Z · LW(p) · GW(p)
If you're revived 1 year after death the people you care about are probably still around, you probably have useful job skills, you may be able to recover some of your old property, etc.
Replies from: Lumifer↑ comment by Lumifer · 2014-12-24T16:59:27.443Z · LW(p) · GW(p)
I understand that. But what you are losing is the chance of being reborn, ahem, in a better place.
It's an interesting choice, driven, I assume, by risk aversion and desire for novelty. Probably different people will choose differently.
Replies from: drethelin, NancyLebovitz↑ comment by NancyLebovitz · 2014-12-24T17:56:49.839Z · LW(p) · GW(p)
How much use is a better place to you if you can't understand it? I'd rather live through the intervening years so I can grow into the better future.
comment by Interpolate · 2014-12-23T03:20:16.963Z · LW(p) · GW(p)
I would like to try NSI-189: http://en.wikipedia.org/wiki/NSI-189
People's experiences with this drug and suggestions for vendors would therefore be welcome.
Replies from: skeptical_lurker, hyporational↑ comment by skeptical_lurker · 2014-12-23T11:41:22.265Z · LW(p) · GW(p)
Stupid questions time:
NSI-189 has been shown to increase the hippocampal volume of adult mice by 20%
Given that there is only a certain amount of room inside the skull, how can this be true in adult mice? I can understand how it might increase density, or increase hippocampal volume when administered in adolescence and the skull has not finished growing, but unless there are holes in the brain I can't see how this could be true in adults.
I suppose maybe the amount of cerebrospinal fluid could decrease, increasing intelligence at the cost of decreasing ability to withstand blows to the head.
Replies from: hyporational, philh↑ comment by hyporational · 2014-12-23T16:16:35.389Z · LW(p) · GW(p)
The hippocampus is a relatively tiny structure in the human brain, and I would guess it's even proportionally smaller in the mouse brain. I doubt the corresponding decrease in cerebrospinal fluid volume would make any difference in function. There's already much more variation in cerebrospinal fluid volume in healthy humans than a 20% increase in hippocampal volume could account for.
↑ comment by philh · 2014-12-23T12:28:13.873Z · LW(p) · GW(p)
Most of the brain is not hippocampus. You could increase the volume of the hippocampus by taking volume away from the rest of the brain (e.g. making it more dense or taking neurons away from it).
Replies from: skeptical_lurker↑ comment by skeptical_lurker · 2014-12-23T16:06:31.175Z · LW(p) · GW(p)
To state the blatantly obvious, if there is a possibility that this drug works by taking neurons away from other parts of the brain, then its use is really dangerous.
BTW is this a guess, or do you have some reason to believe that this sort of neuroplasticity is possible?
Replies from: philh↑ comment by philh · 2014-12-23T19:15:44.331Z · LW(p) · GW(p)
Not even really a guess, just something that seemed vaguely plausible (I know almost nothing about neuroscience). "Making it more dense" seemed more likely to me, i.e. the hippocampus grows and puts pressure on the rest of the brain causing it to shrink without losing function. But it also seemed plausible to me that neurons regularly get repurposed between brain structures, so I mentioned that as well.
Replies from: skeptical_lurker↑ comment by skeptical_lurker · 2014-12-23T19:38:25.967Z · LW(p) · GW(p)
Fair enough. I know a little neuroscience, and while there is synaptic plasticity, AFAIK neurons cannot switch between entirely different regions of the brain. I agree that an increase in density is more plausible.
↑ comment by hyporational · 2014-12-23T16:44:10.297Z · LW(p) · GW(p)
I'd be more interested in behavioral changes in the mice. For some reason not all people with tiny hippocampuses or generally atrophied brains have problems with memory (or mood), and we still can't reliably diagnose progressive memory disorders, or many other neurological disorders for that matter, via brain scans alone.
comment by FiftyTwo · 2014-12-29T04:36:42.672Z · LW(p) · GW(p)
I find I often pick up mindsets and patterns of thought from reading fiction or first-person non-fiction. E.g. I'm a non-sociopath, but I noticed my thought patterns became more similar to the author's when reading "Confessions of a Sociopath".
I figure this may be a useful way to hack myself towards positive behaviours. Can anyone recommend fiction that would encourage high-productivity mindsets?
Replies from: polymathwannabe↑ comment by polymathwannabe · 2014-12-29T12:46:23.523Z · LW(p) · GW(p)
I've noticed that watching Herman's Head (you can find most of the episodes on YouTube) helped me model my mind as a dialogue between competing agents.
comment by ike · 2014-12-23T02:37:43.272Z · LW(p) · GW(p)
What should my prior be for an offer that looks too good to be true to be actually true? I was wondering after I saw a lot of arguing online over whether a certain company was a scam or not. This is a prior, so before factoring in things like media attention or base country or how loudly people praise and/or denounce it or anything else. Although a guess on how much each of those factors should affect the rate would be useful too.
Replies from: Punoxysm, NancyLebovitz, Lumifer↑ comment by Punoxysm · 2014-12-23T18:32:23.235Z · LW(p) · GW(p)
There's a large range between excellent company and scam company. Many companies are earnestly but poorly run, or not-scams-per-se but concealing financial issues. Others seem too-good-to-be-true but really are that good.
As a rule, companies make offers that are just good enough to get a yes. My prior would be that too-good-to-be-true always deserves extra scrutiny, and is probably somehow deceptive or high-risk if the terms don't make a guarantee (for instance, equity and deferred compensation in a job offer could never materialize). The other possibility is that they believe you are more valuable to them than other companies do. The question is why? (A final possibility is that you have a poor understanding of the job market, and the other companies are lowballing you).
Replies from: ike↑ comment by NancyLebovitz · 2014-12-23T14:50:15.229Z · LW(p) · GW(p)
Tentatively, how much too good to be true is it? Does it resemble past scams? Do the people making the offer get angry when they're asked about details?
Replies from: ike↑ comment by ike · 2014-12-23T14:54:14.693Z · LW(p) · GW(p)
Again, I'm looking for a prior. All those things come after. When I tried to do my own analysis, I got stuck on a prior, and realized I don't have a good idea where to put it.
I'll rephrase the question: what percentage of such offers (counting everything regardless of these other factors) are more or less true? After you have that, you can update up or down based on any other info.
Replies from: ChristianKl, NancyLebovitz↑ comment by ChristianKl · 2014-12-23T20:42:34.412Z · LW(p) · GW(p)
Different people consider different claims "too good to be true". To produce a specific number you would have to provide a more precise notion of "too good to be true".
Replies from: ike↑ comment by ike · 2014-12-25T14:14:55.838Z · LW(p) · GW(p)
How about taking the reference class of "things publicly accused of being a scam by non-official entities, but not by official ones (like any government agencies, the BBB, media, etc.)". Weigh different offers by publicity (more precisely, by the number of people who potentially thought about using whatever it was).
Is that well-defined enough?
Replies from: ChristianKl↑ comment by ChristianKl · 2014-12-25T15:32:59.984Z · LW(p) · GW(p)
If you search well enough, I think that for most products you can find a person somewhere on the internet who calls it a scam.
Replies from: ike↑ comment by ike · 2014-12-25T15:58:40.947Z · LW(p) · GW(p)
How about specifically an illegal scam? The way scam is used doesn't always imply illegal.
Do you have any ideas about how we can define this in a useful way?
Replies from: ChristianKl↑ comment by ChristianKl · 2014-12-25T16:20:39.634Z · LW(p) · GW(p)
I don't think it's useful to make decisions based on whether or not a random person on the internet calls something a scam or an illegal scam.
When it comes to a financial investment it makes sense to ask: "If they advertise those returns and those returns are real, why doesn't some smart bank simply invest money into the vehicle?" "The deal would still look good even if the company reduced the rates of return, so why don't they reduce the rates to make more profit for themselves?"
For a lot of other products: "What do trustworthy knowledgable people say about the product?" "Who can I ask?"
Replies from: ike↑ comment by ike · 2014-12-25T16:32:11.456Z · LW(p) · GW(p)
I'm not using this as a decision maker end-all, I'm using it to define the reference class to get a prior. All those questions are calculated afterwards. I'm trying to define the "too-good" category, not define something that would discriminate between scams and non-scams.
Replies from: ChristianKl↑ comment by ChristianKl · 2014-12-25T16:38:57.636Z · LW(p) · GW(p)
I think it's a poor reference class.
The core question isn't whether something is too good to be true, but whether it's reasonable that an opportunity like that exists without information being hidden from you.
↑ comment by NancyLebovitz · 2014-12-23T16:44:03.414Z · LW(p) · GW(p)
I may not know what you mean by a prior. Could you give me some examples?
Replies from: ike↑ comment by ike · 2014-12-23T19:26:42.292Z · LW(p) · GW(p)
Hm. What I mean is that when I try to weigh up the evidence, it seems pretty balanced (perhaps slightly weighted toward genuine), so the prior will determine it. If the prior was 10%, I would conclude that it was probably real, versus if the prior was 1%, I would conclude it was fake.
If you want me to explain what I mean by prior ...
Before doing any investigating, what is the probability that something I am likely to hear about, and that fits the intuitional category of "too good to be true", is more-or-less true? (I'm assuming an implicit weighting for popularity, which seems fair. OTOH it might be hard to estimate popularity for different people.)
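To make the dependence on the prior concrete, here is a minimal sketch of the update in odds form (the 2:1 likelihood ratio is a made-up stand-in for evidence "slightly weighted toward genuine"):

```python
def update(prior, likelihood_ratio):
    """Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio."""
    prior_odds = prior / (1 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

for prior in (0.10, 0.01):
    print(f"prior {prior:.0%} -> posterior {update(prior, 2.0):.0%}")
# prior 10% -> posterior 18%; prior 1% -> posterior 2%
```

With evidence this weak, the posterior tracks the prior almost one-for-one, which is exactly why the choice of reference class matters here.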
Replies from: jsteinhardt↑ comment by jsteinhardt · 2014-12-23T22:37:23.529Z · LW(p) · GW(p)
I think many people in this subthread are suggesting ways of interpreting the evidence that you may not have (or may have) thought of, or alternately, additional pieces of potential evidence that may not obviously be evidence. So it seems like the question you should really be asking is, "how do I assess this opportunity?" rather than "what should the prior be?"
↑ comment by Lumifer · 2014-12-23T03:02:00.332Z · LW(p) · GW(p)
What should my prior be for an offer that looks too good to be true to be actually true?
Recall the old poker wisdom: every game has a mark. If you are sitting in a poker game and you don't know who the mark is, chances are it's you.
Replies from: ike↑ comment by ike · 2014-12-23T03:18:47.318Z · LW(p) · GW(p)
Are you trying to say epsilon? I think it's higher than that, if only because only semi-believable offers are likely to be made in the first place.
Also, all those calculations are done after the prior. Perhaps I should have included how well I understand the business model in the list of factors?
Replies from: Lumifer↑ comment by Lumifer · 2014-12-23T03:52:09.680Z · LW(p) · GW(p)
Well, the general answer is, as usual, "it depends". If you are talking about your prior before any offer-specific information, I think it should be pretty low. Once you've heard the offer and, maybe, looked a bit into it, the prior isn't very relevant any more; presumably you have a bunch of information/evidence at this point which should weigh in more heavily than the prior.
Replies from: ike↑ comment by ike · 2014-12-23T03:59:28.652Z · LW(p) · GW(p)
If the evidence is not very strong either way, the prior is relevant. (Which seems to be the case in the one I'm trying to find out about.) It seemed to me that the two groups in the debate had different priors, and that's why they were arguing (each trying to shift the burden of proof, for example.)
comment by [deleted] · 2014-12-22T17:53:23.387Z · LW(p) · GW(p)
Learning programming languages:
I want to start learning programming languages for use in my occupation. What are some learning resources that would make this an efficient and worthwhile experience?
Replies from: emr, Gondolinian, somnicule, None, is4junk, Interpolate, None↑ comment by emr · 2014-12-22T19:36:34.500Z · LW(p) · GW(p)
Can you give some more background?
Stuff like: What occupation? Do you have a goal or application already in mind? Are there any programming-related things you can think of right now that seem fun, (writing a video game, a program to play chess, solving math puzzles, doing data analysis, etc)?
Replies from: None↑ comment by [deleted] · 2014-12-22T20:36:24.097Z · LW(p) · GW(p)
Certainly.
I work at the administrative level of libraries (both public and academic). Some of the jobs I seek require experience or knowledge of programming languages and the ability to work with servers and websites because the administrator must be responsible for the library's network and site.
I have some web building experience, but little experience with programming or server use, so I want to acquaint myself with the tools necessary to maintain a library's network and computers. For many of these positions, JavaScript, HTML, and Python make an appearance alongside library-specific systems like MARC records (for cataloguing).
Replies from: Lumifer↑ comment by Lumifer · 2014-12-22T20:59:19.023Z · LW(p) · GW(p)
There are three different sets of skills involved in what you are talking about. I don't know which one you want to focus on.
There is the system administrator (sysadmin) skill set related to "the ability to work with servers and websites" -- sysadmins are "responsible for the library's network and site" and they "maintain a library's network and computers". It mostly has to do with a deep understanding of Windows and/or Unixoid operating systems and doesn't involve that much programming per se -- and what there is, is mostly shell scripts and/or swiss-army-knife languages like Perl.
There is the web developer skill set related to websites and it involves, nowadays, a mix of HTML (which is NOT a programming language), Javascript, and a database backend (usually but not always SQL).
And finally there is the programmer (err... software engineer) skill set which varies a lot depending on the context, but is generally about the ability to architect, construct, and maintain complex software systems or subsystems. Different subfields tend to use different programming languages so there is no single recommendation to be made.
Replies from: None↑ comment by [deleted] · 2014-12-22T22:24:08.936Z · LW(p) · GW(p)
Thank you kindly for the break down.
Most of the positions I apply for would involve knowledge of sysadmin work and, rarely, web developer work. As you can likely tell, my position is not as the sysadmin or web developer, but rather as the manager under whom these positions work.
Replies from: None↑ comment by [deleted] · 2014-12-23T04:39:49.791Z · LW(p) · GW(p)
Yes, that's a little different. IT frequently doesn't entail a lot of coding by hand. There's a lot of copying and pasting code from scripts you come across, and then editing the code to get it to run properly on your system. An introductory course in computer science will help with that, but some of the more advanced stuff you'll encounter will simply never come up. What you really need is basic familiarity with a wide variety of different frameworks; how to browse through them to find what you're looking for. The hard part isn't knowing how to solve the problem; it's knowing where to look to find the problem. On the web development side, you will want to see what content management system your company is using and learn how that CMS functions. You can use builtwith.com to figure it out if you don't already know. For each language or framework, make sure you focus on just the basics before moving onto a new one. If you try to go in-depth with all of them you're never going to get through it all. When you have to do something in that language, just open up a cheat sheet for it, and use that to guide you.
Replies from: None↑ comment by Gondolinian · 2014-12-22T19:10:44.581Z · LW(p) · GW(p)
In addition to what is4junk suggested, you might want to check out Codecademy.
Replies from: None↑ comment by somnicule · 2014-12-22T20:08:06.445Z · LW(p) · GW(p)
Learn Python the Hard Way is a pretty solid resource, though I used it before Codecademy came out. Both are excellent practice-based resources for starting programming.
After that, just get python books and work through them. Tools like Flask and Django if you want to do web development, other stuff if you want to do other stuff. If you don't know if you
Stackoverflow is usually where google will take you when you look for answers to your questions, so you might as well bookmark it.
And if you don't have something in mind you want to make, but you want to keep practising, try doing some ProjectEuler problems.
Replies from: None↑ comment by [deleted] · 2014-12-22T19:19:23.380Z · LW(p) · GW(p)
What is your occupation and what are you looking to program?
Replies from: None↑ comment by [deleted] · 2014-12-22T20:38:05.432Z · LW(p) · GW(p)
My occupation is library administration. My specific interest is that many of the administrative positions I am looking at require some knowledge of programming in order to maintain the library's network. So, what I would be working with would be an organization-wide network (the organization being either small scale or multi-branch with each branch being small in scale).
↑ comment by is4junk · 2014-12-22T18:22:14.118Z · LW(p) · GW(p)
Coursera and edx have several free courses.
https://www.coursera.org/
https://www.edx.org/
↑ comment by Interpolate · 2014-12-23T03:18:23.755Z · LW(p) · GW(p)
Code Academy: http://www.codecademy.com/
Harvard's CS50 course: https://cs50.harvard.edu/
You can also take CS50 through edX, which grants certificates: https://www.edx.org/course/introduction-computer-science-harvardx-cs50x#.VJjeyf8AAA
Replies from: None↑ comment by [deleted] · 2014-12-26T01:00:50.519Z · LW(p) · GW(p)
If you're interested in learning web development, I've been going through a great resource called The Odin Project. It pulls together a bunch of different free resources to take you from knowing nothing to being hired.
Replies from: Nonecomment by ike · 2014-12-23T04:27:16.884Z · LW(p) · GW(p)
Does this mean something? http://phys.org/news/2014-12-quantum-physics-complicated.html
They found that 'wave-particle duality' is simply the quantum 'uncertainty principle' in disguise, reducing two mysteries to one.
That seems like the kind of thing that sounds cool to say but doesn't represent anything new, except that it's being widely reported (Google news search) as a spanking-new revolution in QM, which is evidence for it being at least slightly significant. Can anyone who understands what's going on (or knows someone that does) tell me whether this is anything significantly new?
Edit: looks like the HN people aren't too impressed either. https://news.ycombinator.com/item?id=8772422
Replies from: passive_fist, jsteinhardt, None↑ comment by passive_fist · 2014-12-23T08:46:49.103Z · LW(p) · GW(p)
It's just an example of lousy reporting. But first, some background. Whether a 'particle' or 'wave' approximation is more accurate depends on energy density. When the density of energy is relatively low compared to the energy of the photons (such as gamma rays coming off from a sample of radioactive material), the particle approximation is far more appropriate. When the energy density is high relative to the energy of the photons (like in a microwave oven), the wave approximation fits better. This is what was traditionally meant by 'wave-particle duality'.
This idea is indeed connected with the fact that the more localized a wavefunction is, the more spread out its spectrum of momenta is (uncertainty principle). This is widely known and is nothing new. What they've done in this paper - which despite the lazy reporting is actually a thought-provoking bit of work - is consider 'wavefunction collapse', which is just the process of entanglement of the 'observer' wavefunction with the 'experiment' wavefunction. They've essentially shown that the amount of information that can flow from the 'experiment' to the 'observer' when the wavefunctions become entangled has an entropic bound. This idea has been thrown about for years; here they claim to have finally found a satisfying proof.
↑ comment by jsteinhardt · 2014-12-23T06:16:29.015Z · LW(p) · GW(p)
I'm not a physicist but I think it's been well-known for a while that wave-particle duality arises from the uncertainty principle.
↑ comment by [deleted] · 2014-12-23T13:02:46.536Z · LW(p) · GW(p)
Imagine you build a computer model, and it has an ontology. Ontology 1.
This computer model simulates the behavior of particles. These particles and their interactions form another computer. A computer inside a computer simulation. The internal computer has its own ontology. Ontology 2. The internal computer performs measurements of its world.
Basically, the idea is that Ontology 2 is what the computer model knows about itself. And that the uncertainty principle applies to our measurements, which in the model are represented by Ontology 2.
That means Ontology 1 literally does not have to follow the laws of physics that we observe, merely produce observations that are consistent with the laws of physics we observe.
Replies from: ike↑ comment by ike · 2014-12-23T14:30:10.574Z · LW(p) · GW(p)
Um... that just sounds like the simulation hypothesis, and doesn't seem relevant. (Or is that what the paper means, and all the news articles have crazy authors?)
Although you could probably get that written up and have everyone report it as revolutionary.
Replies from: None, None↑ comment by [deleted] · 2014-12-23T15:29:36.702Z · LW(p) · GW(p)
The article says "The connection between uncertainty and wave-particle duality comes out very naturally when you consider them as questions about what information you can gain about a system. Our result highlights the power of thinking about physics from the perspective of information".
Ok, with that in mind, and the primary/secondary ontology thing in mind too.
Now, we have an observer in a model (the computer inside the computer). And that observer makes measurements by no special rules, just the basic interactions of the physics engine simulating the particles in ontology 1.
Let's ask the observer to measure a particle's position and momentum with total certainty.
The observer makes the first measurement, and records it, and this alters the state of the particle. When the observer goes to make a second measurement, the state of the particle is obviously different after the first measurement than before.
The measurement gets stored in Ontology 2.
Ontology 2 is a different kind of information than Ontology 1. The uncertainty principle only applies to Ontology 2.
It's whatever emergent information the model comes to know about itself by containing a neural network that performs measurements.
Replies from: TheAncientGeek↑ comment by TheAncientGeek · 2014-12-23T20:33:25.285Z · LW(p) · GW(p)
The observer makes the first measurement, and records it, and this alters the state of the particle.
The disturbance interpretation is kind of deprecated nowadays.
Replies from: None↑ comment by [deleted] · 2014-12-25T21:21:19.130Z · LW(p) · GW(p)
Well, you could look at Heisenberg's uncertainty principle, σ_x · σ_p ≥ ħ/2, and try to interpret what it means.
But I'm suggesting something else entirely.
Set that equation off to the side. And then building a model of measurement.
We don't have a model of measurement actually taking place, by Everett's standards. Computers have a little way to go to reach a model of that size. But we will, and in the meantime, any abstract thinker should be able to grok this.
In our models, we have relationships between measurable quantities. In Everett's model, the measurable quantities are emergent features, existing in an inner ontology defined by the neural network of an observer that exists as a physical process in the primary ontology.
The idea here is, every measurement the observer makes will be consistent with the uncertainty principle. The uncertainty principle returns to the model as a pattern in the measurements made.
Replies from: TheAncientGeek↑ comment by TheAncientGeek · 2014-12-25T21:54:34.112Z · LW(p) · GW(p)
Well, you could look at Heisenberg's uncertainty principle, σ_x · σ_p ≥ ħ/2, and try to interpret what it means.
I have, and so have other people.
We don't have a model of measurement actually taking place, by Everett's standards.
We don't have a model of what measurement? Quantum measurement?
The idea here is, every measurement the observer makes will be consistent with the uncertainty principle. The uncertainty principle returns to the model as a pattern in the measurements made.
You're hinting that the UP will emerge from any notion of measurement, without any other assumptions. I am not sure you can pull that off though.
In any case, you need to be a lot more precise.
Replies from: None↑ comment by [deleted] · 2014-12-25T21:56:47.320Z · LW(p) · GW(p)
We don't have a model of what measurement? Quantum measurement?
Measurement according to Everett's PhD Thesis, page 9:
Summary here:
https://github.com/MazeHatter/Everett-s-Relative-State-Formulation
Replies from: TheAncientGeek↑ comment by TheAncientGeek · 2014-12-25T22:10:01.600Z · LW(p) · GW(p)
Measurement according to Everett's PhD Thesis,
That doesn't answer the question "what is being explained", it answers the question "how it is being explained".
Summary here:
https://github.com/MazeHatter/Everett-s-Relative-State-Formulation
Not actually a summary, since you introduce elements not present in the original.
Replies from: None↑ comment by [deleted] · 2014-12-25T22:15:00.449Z · LW(p) · GW(p)
That doesn't answer the question "what is being explained", it answers the question "how is being explained".
I don't follow.
It seems to me, in the Copenhagen interpretation, measurement was a collapse event. Everett is saying, you know, we can probably just model the observer as a physical process.
Measurement, to Everett, would be a physical process which can be modeled. A measurement can be said to have objectively happened when the observer creates a record of the measurement.
Not actually a summary, since you introduce elements not present in the original.
Ok, that might be a valid point. What specific elements are not in the original?
Replies from: TheAncientGeek↑ comment by TheAncientGeek · 2014-12-26T11:30:16.778Z · LW(p) · GW(p)
It seems to me, in the Copenhagen interpretation, measurement was a collapse event.
Yes, but not a non-physical event. That would be Consciousness Causes Collapse.
Everett is saying, you know, we can probably just model the observer as physical process.
Yes, but you're reading that as a classical physical process, and then guessing that disturbance must be the classical mechanism by which the appearance of quantum weirdness arises.
What specific elements are not in the original?
Disturbance
comment by Capla · 2014-12-28T18:12:22.241Z · LW(p) · GW(p)
Can anyone tell me who wrote this?
Replies from: radical_negative_one, shminux↑ comment by radical_negative_one · 2014-12-28T19:09:47.913Z · LW(p) · GW(p)
Glancing at the comments, I see one of them addressed to "nyan", so I'm guessing it's Nyan Sandwich, who left when More Right was formed.
Replies from: Capla↑ comment by Capla · 2014-12-28T19:15:29.105Z · LW(p) · GW(p)
I thought you might be joking, but lo and behold, there is a "More Right." What are they, a neo-reactionary offshoot of Less Wrong?
Replies from: knb↑ comment by knb · 2014-12-28T22:46:52.212Z · LW(p) · GW(p)
They are definitely neo-reactionaries, but I wouldn't say they are an offshoot of Less Wrong. Anissimov was never a regular commenter here, IIRC.
Replies from: TheAncientGeek↑ comment by TheAncientGeek · 2014-12-29T00:58:07.727Z · LW(p) · GW(p)
He was an employee of MIRI though.
↑ comment by Shmi (shminux) · 2014-12-29T06:30:54.941Z · LW(p) · GW(p)
Yes, it was Nyan_Sandwich, before he turned to the Dark Side.
comment by ChristianKl · 2014-12-25T17:33:26.531Z · LW(p) · GW(p)
A week after the solstice I'm thinking about the next one.
Is there any song that's good to sing that represents the idea of updating a belief or at least learning something new?
Replies from: fubarobfusco, polymathwannabe↑ comment by fubarobfusco · 2014-12-25T20:16:36.584Z · LW(p) · GW(p)
Rework a few lyrics on "I'm a Believer"?
↑ comment by polymathwannabe · 2014-12-26T04:05:29.640Z · LW(p) · GW(p)
As always, everything can be said with Alanis Morissette.
comment by polymathwannabe · 2014-12-23T13:21:20.107Z · LW(p) · GW(p)
Webcomic:
Utilitarianism vs. virtue ethics.
Replies from: ilzolende↑ comment by ilzolende · 2014-12-23T19:19:14.711Z · LW(p) · GW(p)
Same webcomic, on the same theme: Kant's Birthday
comment by [deleted] · 2014-12-23T13:04:01.761Z · LW(p) · GW(p)
Are all interpretations of QM equally wrong?
Is there not one that is less wrong?
Is there not one that is truer to the themes of rationalism?
What is the difference between rationalism and empiricism?
Replies from: passive_fist, Tenoke↑ comment by passive_fist · 2014-12-25T01:25:40.094Z · LW(p) · GW(p)
Interpretations of quantum mechanics are not really something that a lot of physicists worry about and, in my humble opinion, they aren't that interesting anyway. The idea of 'interpretations' is mostly a relic from the days of Bohr and Einstein. Their approach to physics was quite different from today. Virtually everyone agrees that the act of measurement is nothing more than the entanglement of the 'experiment' system with the 'observer' system, and that as far as physics is concerned, that's the only thing that matters. You can call it 'wavefunction collapse' or 'choosing a world' or whatever you want, you're talking about the same idea. In fact physicists often use these various metaphors interchangeably.
There are a few slightly more interesting concepts that are related to interpretations, such as if hidden variable theories are plausible (they'd have to be non-local and super-deterministic to be plausible) or if Bohmian pilot wave mechanics is an adequate mechanism for reality (it's not yet known whether it can incorporate special relativity, and that's really really important for modern physics since the standard model is intimately tied with special relativity).
I guess you could say that the many-worlds interpretation is the closest to a 'consistent' interpretation. But there's a danger here. Just because it's a consistent interpretation doesn't mean that there must actually be an infinite number of 'worlds'. At the end of the day, it could turn out that quantum mechanics is just the result of some unknown, deeper theory, and that there is just a single world.
Replies from: Douglas_Knight, None↑ comment by Douglas_Knight · 2014-12-31T01:03:10.414Z · LW(p) · GW(p)
Virtually everyone agrees
Have you actually talked to physicists about this? I have and that is the opposite of my conclusion.
↑ comment by [deleted] · 2014-12-25T21:03:23.430Z · LW(p) · GW(p)
Virtually everyone agrees that the act of measurement is nothing more than the entanglement of the 'experiment' system with the 'observer' system, and that as far as physics is concerned, that's the only thing that matters.
As I pointed out, Hugh Everett does not agree.
He describes observation as being performed by an automatically functioning machine with sensory gear and a memory.
The observer has to create a record in memory for a measurement to occur.
It's on page 9 of his thesis.
Replies from: passive_fist↑ comment by passive_fist · 2014-12-25T22:47:19.416Z · LW(p) · GW(p)
Quantum theory was barely understood when Everett wrote his thesis. In particular, quantum mechanics as operator theory on Hilbert spaces was only starting to become understood, and Bell's theorem had not yet been proven. An article written 50 years ago has little bearing on what physicists think today.
But anyway, the distinction of 'creating a memory' does not apply when you consider the observer and experiment together as a single quantum system. All quantum systems are reversible and follow unitary transformation laws. This means that no information can ever be lost or created within a quantum system.
Replies from: None↑ comment by [deleted] · 2014-12-25T22:51:19.398Z · LW(p) · GW(p)
But anyway, the distinction of 'creating a memory' does not apply when you consider the observer and experiment together as a single quantum system.
Why not?
Let's say I am looking at a clock.
I'm a physical thing, interacting with another physical thing. You can consider me+clock to be a single physical system.
I still record the measurement in my memory. I still remember looking at the clock. That doesn't magically go away.
According to Everett, his idea is to "deduce the subjective appearance of phenomena" by looking at the contents of my memory.
In other words, Everett's model does not make a prediction until a measurement record is created. He then suggests these measurement records are consistent with our empirical observations, and also equivalent to the predictions derived from a collapse.
Replies from: passive_fist↑ comment by passive_fist · 2014-12-26T03:26:07.334Z · LW(p) · GW(p)
Yes, but what if a memory is created and then destroyed?
Replies from: None↑ comment by Tenoke · 2014-12-23T13:31:27.914Z · LW(p) · GW(p)
You haven't seen that one big sequence that mostly argues for one of the QM interpretations, have you?
Replies from: None↑ comment by [deleted] · 2014-12-23T13:36:58.908Z · LW(p) · GW(p)
http://lesswrong.com/lw/r8/and_the_winner_is_manyworlds/
Ugh. DeWitt strikes again.
Replies from: gjm↑ comment by gjm · 2014-12-23T18:26:36.041Z · LW(p) · GW(p)
If you have some actual reason for thinking that the Everett interpretation is bad, why don't you give it here rather than just saying you don't like it?
[EDITED to add: Ah, I see that what you dislike isn't the Everett interpretation but de Witt's formulation of it. Fair enough, but the same remark applies mutatis mutandis.]
Replies from: None↑ comment by [deleted] · 2014-12-23T18:36:55.065Z · LW(p) · GW(p)
Indeed, I should.
Here is one reason: you say "Everett Interpretation" and "DeWitt's Formulation", but you've got that crossed.
Everett provided the Relative State Formulation. That's what he called his paper, which is a bold and provocative attempt to mathematically model two ontologies, absolute and relative state, rather than the single ontology that is traditional in mathematics.
DeWitt provided the Many Worlds Interpretation, that was the name of their paper.
While the MWI lends itself to some interesting things, far and away, Everett's models lend themselves to the stuff we actually work on today, computer vision and computer hearing. He left theoretical physics to work on machine learning. Because that's how reality worked to him.
The type of thing he writes about on Page 9 is coming within our grasp, technologically, for the first time.
This idea seems to be anticipated by Leibniz, and Plato, and Kant, and a great many other traditions wherein our objective reality is a relative reality based on an absolute reality.
So Everett's formulation gives us new mathematical territory to explore, which we have been exploring independently as machine learning. And the result is a model that contains an observer that produces a measurement record. Probably not far off.
comment by Mollie · 2014-12-23T16:16:44.464Z · LW(p) · GW(p)
I'm in the middle of a rationality crisis. I wish I had somebody to talk to, but I'm not close enough to any rationalists to ask for a personal chat when I keep thinking, "They have more important things to do!" and none of my close friends are rationalists.
Replies from: satt, drethelin, dthunt↑ comment by satt · 2014-12-23T18:38:52.658Z · LW(p) · GW(p)
I know you retracted that comment, but if you're still looking for someone to talk to, you could make a post in the Discussion section. If that seems a bit too public or off-topic or whatever, Tumblr might be worth a shot; there are quite a few LWish people there too, and commiseration about personal crises isn't exactly rare on Tumblr.
comment by advancedatheist · 2014-12-22T03:58:31.025Z · LW(p) · GW(p)
Did Whileon Chay run a scam to fund his wife's cryopreservation in 2009?
BTW, I'd like to know the ethnicity of the guy because I don't recognize the provenance of that sort of name, and it may have a nonobvious pronunciation:
References:
Department of Justice: Manhattan U.S. Attorney Announces Charges Against Manager Of Commodities Pool For Defrauding Investors Of More Than $5 Million
http://www.justice.gov/usao/nys/pressreleases/December14/ChayWhileonIndictmentPR.php
And a whole lot of news websites have picked up on this story. Just one example:
Authorities: Man used investor funds to cryogenically freeze wife
http://www.cnn.com/2014/12/20/justice/new-york-fraud-charges/
I suspect this describes the wife's cryopreservation:
http://www.alcor.org/Library/pdfs/casereportA2435.pdf
Now, Alcor acted in good faith by putting this woman into cryopreservation and accepting her husband's money for it. I suspect the O Administration won't make a big deal out of it because Chay's case involves a relatively small amount of money as financial malfeasance goes, and it lacks a racial angle to exploit.
Replies from: pragmatist↑ comment by pragmatist · 2014-12-22T05:14:28.763Z · LW(p) · GW(p)
I suspect this describes the wife's cryopreservation:
I doubt it. The subject of that document was 46 when she died. Chay's wife was 52, according to news reports.
I suspect the O Administration won't make a big deal out of it because Chay's case involves a relatively small amount of money as financial malfeasance goes, and it lacks a racial angle to exploit.
I'm not as opposed to political discussion on this site as many are, but I do think the original point of EY's "Politics is the Mindkiller" post is worth keeping in mind. Inserting this kind of mind-killing aside in an otherwise non-political comment is needlessly inflammatory and distracting. I don't want to see this sort of thing on LW.
Replies from: bramflakes↑ comment by bramflakes · 2014-12-22T10:32:04.770Z · LW(p) · GW(p)
I'm not as opposed to political discussion on this site as many are, but I do think the original point of EY's "Politics is the Mindkiller" post is worth keeping in mind. Inserting this kind of mind-killing aside in an otherwise non-political comment is needlessly inflammatory and distracting. I don't want to see this sort of thing on LW.
Having seen many of his recent posts I believe he's doing it on purpose.