Open Thread, Jul. 20 - Jul. 26, 2015

post by MrMind · 2015-07-20T06:55:04.257Z · LW · GW · Legacy · 206 comments

If it's worth saying, but not worth its own post (even in Discussion), then it goes here.


Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)

3. Open Threads should be posted in Discussion, and not Main.

4. Open Threads should start on Monday, and end on Sunday.

206 comments

Comments sorted by top scores.

comment by [deleted] · 2015-07-21T06:04:25.314Z · LW(p) · GW(p)

Would a series of posts on astrobiology and the Fermi paradox be appreciated? Each would consist of a link to an external post on a personal blog I have just established to contain my musings on the subject and related matters.

Replies from: philh, jacob_cannell, ZankerH, MrMind, JoshuaZ, Username, Stingray
comment by philh · 2015-07-21T09:56:46.371Z · LW(p) · GW(p)

Your comments here are consistently interesting, and I'd like to subscribe to your RSS feed.

comment by jacob_cannell · 2015-07-21T21:21:00.517Z · LW(p) · GW(p)

Yes.

Why not dual post both here and on your blog?

Replies from: None
comment by [deleted] · 2015-07-21T22:22:04.333Z · LW(p) · GW(p)

A hidden question of mine is actually how to present them here - copypaste into this space to do a dual post, or merely post links and brief summaries.

It also appears that my bloggery will on occasion drift a bit away from what is likely the most interesting to this audience (the Fermi paradox, 'where are they', and the likely shape of the future of humanity) into things like basic origin of life theories, geochemistry, what I think SETI should actually be doing compared to what they are doing now, and one or two case studies of one-off radio signals that have never been confirmed. There is definitely a cohesive multiple-part initial burst incoming, which I will probably link to in its entirety, but this leaves me wondering how much to link to/reproduce.

Replies from: jacob_cannell, ZeitPolizei, gwern
comment by jacob_cannell · 2015-07-21T23:13:36.040Z · LW(p) · GW(p)

things like basic origin of life theories, geochemistry, what I think SETI should actually be doing compared to what they are doing now, and one or two case studies of one-off radio signals that have never been confirmed

These topics haven't been discussed here much and may actually be more interesting for that reason, whereas general Fermi paradox and future civ models have come up recently.

comment by ZeitPolizei · 2015-07-22T22:27:14.696Z · LW(p) · GW(p)

I don't know how it is for others, but personally, I am much more likely to read a full text if it's posted here directly, than if there's just a link.

comment by gwern · 2015-07-22T18:50:19.204Z · LW(p) · GW(p)

A hidden question of mine is actually how to present them here - copypaste into this space to do a dual post, or merely post links and brief summaries.

You could do what I do: copypaste, and then six months later after all the discussion is done, delete the LW copy and replace it with a link & summary. Best of both worlds, IMO.

comment by ZankerH · 2015-07-21T07:11:03.611Z · LW(p) · GW(p)

I'd be interested in your take on the topic.

comment by MrMind · 2015-07-21T06:51:02.743Z · LW(p) · GW(p)

I surely would appreciate it.

comment by JoshuaZ · 2015-07-23T23:18:47.469Z · LW(p) · GW(p)

Yes, absolutely would be of interest.

comment by Username · 2015-07-21T12:08:50.373Z · LW(p) · GW(p)

Has anyone in the history of LW ever said that they don't want new interesting posts?

comment by Stingray · 2015-07-21T15:35:41.826Z · LW(p) · GW(p)

Keenly waiting for your posts!

comment by tetronian2 · 2015-07-21T00:57:27.109Z · LW(p) · GW(p)

Is anyone interested in another iterated prisoner's dilemma tournament? It has been nearly a year since the last one. Suggestions are also welcome.

Replies from: solipsist, tetronian2
comment by solipsist · 2015-07-23T01:24:50.362Z · LW(p) · GW(p)

In addition to current posters, these tournaments generate external interest. I, and more importantly So8res, signed up for an account at LessWrong for one of these contests.

Replies from: tetronian2
comment by tetronian2 · 2015-07-23T02:08:34.127Z · LW(p) · GW(p)

Wow, I was not aware of that. I saw that the last one got some minor attention on Hacker News and Reddit, but I didn't think about the outreach angle. This actually gives me a lot of motivation to work on this year's tournament.

Replies from: solipsist
comment by solipsist · 2015-07-23T04:15:08.316Z · LW(p) · GW(p)

Oops! I misremembered. So8res' second post was for that tournament, but his first was two weeks earlier. Shouldn't have put words in his mouth, sorry!

comment by tetronian2 · 2015-07-26T16:33:38.656Z · LW(p) · GW(p)

So, to follow up on this, I'm going to announce the 2015 tournament in early August. Everything will be the same except for the following:

  • Random-length rounds rather than fixed length
  • Single elimination instead of round-robin
  • More tooling (QuickCheck-based test suite to make it easier to test bots, and some other things)

Edit: I am also debating whether to make the number of available simulations per round fixed rather than relying on a timer.

I also played around with a version in which bots could view each other's abstract syntax tree (represented as a GADT), but I figured that writing bots in Haskell was already enough of a trivial inconvenience for people without involving a special DSL, so I dropped that line of experimentation.
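
For readers who haven't followed the earlier tournaments: entries are bots written in Haskell. Below is a minimal sketch of the shape of a bot; the names here (Move, Bot, playMatch) are hypothetical, not the actual tournament API, and the sketch omits the tournament's signature feature of letting bots simulate their opponents.

    module BotSketch where

    data Move = Cooperate | Defect deriving (Show, Eq)

    -- A bot sees the history of both players' moves (own move first,
    -- most recent round first) and picks its next move.
    newtype Bot = Bot { runBot :: [(Move, Move)] -> Move }

    -- Tit-for-tat: cooperate first, then copy the opponent's last move.
    titForTat :: Bot
    titForTat = Bot $ \history -> case history of
      []                -> Cooperate
      ((_, theirs) : _) -> theirs

    -- Play two bots against each other for a fixed number of rounds.
    -- A random-length variant would draw the round count from a
    -- distribution instead, so bots can't exploit a known final round.
    playMatch :: Int -> Bot -> Bot -> [(Move, Move)]
    playMatch n a b = go n []
      where
        swap (x, y) = (y, x)
        go 0 history = reverse history
        go k history =
          let moveA = runBot a history
              moveB = runBot b (map swap history)
          in  go (k - 1) ((moveA, moveB) : history)

The random-length point in the first bullet above matters because, with a fixed and known round count, backward induction makes defecting on the last round (and, by induction, on every round) the equilibrium play.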

comment by Toggle · 2015-07-23T22:22:09.846Z · LW(p) · GW(p)

Just an amusing anecdote:

I do work in exoplanet and solar system habitability (mostly Mars) at a university, in a lab group with four other professional researchers and a bunch of students. The five of us met for lunch today, and it came out that three of the five had independently read HPMoR to its conclusion. After commenting that Ibyqrzbeg'f Iblntre cyndhr gevpx was a pretty good idea, our PI mentioned that some of the students at Caltech used a variant of this on the Curiosity rover: they etched graffiti into hidden corners of the machine ('under cover of calibrations'), so that now their names have an expected lifespan of at least a few million years against Martian erosion. It's a funny story, and also pretty neat to see just how far Eliezer's pernicious influence goes in some circles.

comment by MondSemmel · 2015-07-20T18:23:11.700Z · LW(p) · GW(p)

I just listened to a podcast by Sam Harris called "Leaving the Church: A Conversation with Megan Phelps-Roper". It's a phenomenal depiction of the perspective of someone who was born in, but then left, the fundamentalist Westboro Baptist Church.

Most interesting is Megan's clear perspective on what it was like before she left, and many LWers will recognize concepts like there being no evidence that could have possibly convinced her that her worldview had been wrong, etc. Basically, many things EY warns of in the sequences, like motivated cognition, are things she went through, and she's great at articulating them.

comment by gwern · 2015-07-26T15:26:00.475Z · LW(p) · GW(p)

So the head of BGI, famous for extremely ambitious & expensive genetics projects which are a Chinese national flagship, is stepping down to work on AI because genetics is just too boring these days: http://www.nature.com/news/visionary-leader-of-china-s-genomics-powerhouse-steps-down-1.18059

I haven't been following estimates lately, but how much do people think it would cost in GPUs to approximate a human brain at this point given all the GPU performance leaps lately? I note that deep learning researchers seem to be training networks with up to 10b parameters using a 4 GPU setup costing, IIRC, <$10k, and given the memory improvements NVIDIA & AMD are working on, we can expect continued hardware improvements for at least another year or two.

(Schmidhuber's group is also now training networks with 100 layers using their new 'highway network' design; I have to wonder if that has anything to do with Schmidhuber's new NNAISENSE startup, beyond just DeepMind envy... EDIT: probably not, if it was founded in September 2014 and the first highway network paper was posted to arXiv in May 2015 - unless Schmidhuber et al. set it up to clear the way for commercializing their next innovation, and highway networks is it.)

Replies from: Wei_Dai, ESRogs
comment by Wei Dai (Wei_Dai) · 2015-07-27T23:30:00.688Z · LW(p) · GW(p)

I haven't been following estimates lately, but how much do people think it would cost in GPUs to approximate a human brain at this point given all the GPU performance leaps lately?

I had some recent discussions with Jacob Cannell about this, where he estimated that (with the right software, which we don't yet have) you could build a human-level AGI with about 1000 modern GPUs. The amortized cost plus electricity (or the rate if you rent from Amazon AWS) is roughly $0.1 per hour per GPU, so the total would be around $100 per hour.

Schmidhuber's new NNAISENSE startup

FLI just gave Bas Steunebrink $196,650 to work on something called "experience-based AI" or EXPAI, and Bas is one of the co-founders of NNAISENSE. This EXPAI sounds like a traditional hand-coded AI, not ANN based. Possibly they set up the startup without any specific plans, but just in case they wanted to commercialize something?

comment by ESRogs · 2015-08-20T05:05:10.513Z · LW(p) · GW(p)

From a very uninformed perspective, this looks like an area of science where China is leading the way. Can anyone more informed comment on whether that is accurate, and whether there are other areas in which China leads?

comment by James_Miller · 2015-07-21T03:41:07.004Z · LW(p) · GW(p)

There have been a lot of data breaches recently. Is this because of incompetence, or is it really difficult to maintain a secure database? If I'm going to let at least 100 people have access to a database, and intelligent hackers really want to get access for themselves, do I have much of a chance of stopping them? Restated: have the Chinese and Russians probably hacked into most every database they really want?

Replies from: solipsist, Lumifer
comment by solipsist · 2015-07-23T04:00:34.687Z · LW(p) · GW(p)

I am not close to an expert in security, but my reading of those who are is that yes, the NSA et al. can get into any system they want to, even if it is air gapped.

Dilettanting:

  1. It is really, really hard to produce code without bugs. (I don't know a good analogy for writing code without bugs -- writing laws without any loopholes, where all conceivable case law had to be thought of in advance?)
  2. The market doesn't support secure software. The expensive part isn't writing the software -- it's inspecting for defects meticulously until you become confident enough that the defects which remain are sufficiently rare. If a firm were to go through the expense of producing highly secure software, how could they credibly demonstrate to customers the absence of bugs? It's a market for lemons.
  3. Computer systems comprise hundreds of software components and are only as secure as the weakest one. The marginal return from securing any individual software component falls sharply -- there isn't much reason to make any component of the system too much more secure than the average component. The security of most consumer components is very weak. So unless there's an entire secret ecosystem of secured software out there, "secure" systems are using a stack with insecure, consumer, components.
  4. Security in the real world is helped enormously by the fact that criminals must move physically near their target with their unique human bodies. Criminals thus put themselves at great risk when committing crimes, both of leaking personally identifying information (their face, their fingerprints) and of being physically apprehended. On the internet, nobody knows you're a dog, and if your victim recognizes your thievery in progress, you just disconnect. It is thus easier for a hacker to make multiple incursion attempts and hone his craft.
  5. Edward Snowden was, like, just some guy. He wasn't trained by the KGB. He didn't have spying advisors to guide him. Yet he stole who-knows-how-many thousands of top-secret documents in what is claimed to be (but I doubt was) the biggest security breach in US history. But Snowden was trying to get it in the news. He stole thousands of secret documents, and then yelled through a megaphone "hey everyone I just stole thousands of secret documents". Most thieves do not work that way.
  6. Intelligence organizations have budgets larger than, for example, the gross box office receipts of the entire movie industry. You can buy a lot for that kind of money.
Replies from: Zubon, hg00, James_Miller, kpreid
comment by Zubon · 2015-07-23T21:22:27.887Z · LW(p) · GW(p)

Additional note to #3: humans are often the weakest part of your security. If I want to get into a system, all I need to do is convince someone to give me a password, share their access, etc. That also means your system is not only as insecure as your most insecure piece of hardware/software, but also as insecure as your most insecure user (with relevant privileges). One person who can be convinced that I am from their IT department, and I am in.

Additional note to #4: but if I am willing to forego those benefits in favor of the ones I just mentioned, the human element of security becomes even weaker. If I am holding food in my hands and walking towards the door around start time, someone will hold the door for me. Great, I am in. Drop it off, look like I belong for a minute, find a cubicle with passwords on a sticky note. 5 minutes and I now have logins.

The stronger your technological security, the weaker the human element tends to become. Tell people to use a 12-character pseudorandom password with an upper case, a lower case, a number, and a special character, never re-use, change every 90 days, and use a different password for every system? No one remembers that, and your chance of the password stickynote rises towards 100%.

Assume all the technological problems were solved, and you still have insecure systems so long as anyone can use them.

comment by hg00 · 2015-07-26T02:10:42.701Z · LW(p) · GW(p)

Great info... but even air-gapped stuff? Really?

Replies from: solipsist
comment by solipsist · 2015-07-26T04:11:21.619Z · LW(p) · GW(p)

My understanding is that a Snowden-leaked 2008 NSA internal catalog contains airgap-hopping exploits by the dozen, and that the existence of successful attacks on air-gapped networks (like Stuxnet) is documented and not controversial.

This understanding comes in large measure from a casual reading of Bruce Schneier's blog. I am not a security expert and my "you don't understand what you're talking about" reflexes are firing.

But moving to areas where I know more: I think, e.g., that if I tried writing a program to take as input the sounds of someone typing and output the letters they typed, I'd have a decent chance of success.

comment by James_Miller · 2015-07-23T12:47:46.980Z · LW(p) · GW(p)

Thanks! As an economist I love your third reason.

comment by kpreid · 2015-09-01T01:30:10.536Z · LW(p) · GW(p)

Computer systems comprise hundreds of software components and are only as secure as the weakest one.

This is not a fundamental fact about computation. Rather it arises from operating system architectures (isolation per "user") that made some sense back when people mostly ran programs they wrote or could reasonably trust, on data they supplied, but don't fit today's world of networked computers.

If interactions between components are limited to the interfaces those components deliberately expose to each other, then the attacker's problem is no longer to find one broken component and win, but to find a path of exploitability through the graph of components that reaches the valuable one.

This limiting can, with proper design, be done in a way which does not require the tedious design and maintenance of allow/deny policies as some approaches (firewalls, SELinux, etc.) do.
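
As a toy illustration of kpreid's point (my sketch, with made-up names - not kpreid's code): a component is handed only the narrow interfaces it needs, so compromising it yields only those capabilities rather than ambient authority over the whole system.

    -- The only authority this capability grants: appending a line to one log.
    newtype LogCap = LogCap { appendLine :: String -> IO () }

    -- A component receives its capabilities as explicit arguments; it is
    -- given no handle to arbitrary files, sockets, or other components.
    untrustedComponent :: LogCap -> IO ()
    untrustedComponent logCap =
      -- Even if this component is buggy or malicious, what it can do to
      -- the rest of the system is bounded by what LogCap permits.
      appendLine logCap "component ran"

    main :: IO ()
    main = untrustedComponent (LogCap (\s -> appendFile "app.log" (s ++ "\n")))

(Haskell's type system alone doesn't enforce this - the component could still call appendFile directly from IO - so treat it purely as an illustration of the discipline; real object-capability systems have the language or OS deny ambient authority.)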

comment by Lumifer · 2015-07-21T03:45:59.609Z · LW(p) · GW(p)

Is this because of incompetence, or is it really difficult to maintain a secure database?

Both.

have the Chinese and Russians probably hacked into most every database they really want?

I wonder why you exclude the Americans from the list of the attackers :-/

The answer is no, I don't think so, because while maintaining a secure database is hard, it's not impossible, especially if the said database is not connected to the 'net in any way.

comment by Thomas · 2015-07-20T10:08:52.491Z · LW(p) · GW(p)

I see, as many others may, that we are currently living in an NN (neural network) renaissance. They are not as good as one may wish them to be; in fact sometimes they seem quite funny.

Still, after some unexpected advances from the last year onward, they look quite unstoppable to me. Further advances are plausible, and their application to playing the game of Go, for example, could bring us some very interesting advances and achievements. Even some big surprise is possible here.

Does anybody else share my view?

Replies from: Houshalter, jacob_cannell
comment by Houshalter · 2015-07-20T11:08:16.827Z · LW(p) · GW(p)

You are not alone. I think NNs are definitely the best approach to AI, and recent progress is quite promising. They have had a lot of success on a number of different AI tasks, from machine vision to translation to video game playing. They are extremely general purpose.

Here's a recent quote from Schmidhuber (who I personally believe is most likely to create AGI).

Schmidhuber and Hassabis found sequential decision making as a next important research topic. Schmidhuber’s example of Capuchin monkeys was both inspiring and fun (not only because he mistakenly pronounced it as a cappuccino monkey.) In order to pick a fruit at the top of a tree, Capuchin monkey plans a sequence of sub-goals (e.g., walk to the tree, climb the tree, grab the fruit, …) effortlessly. Schmidhuber believes that we will have machines with animal-level intelligence (like a Capuchin smartphone?) in 10 years.

Schmidhuber’s answer was the most unique one here. He believes that the code for truly working AI agents will be so simple and short that eventually high school students will play around with it. In other words, there won’t be any worry of industries monopolizing AI and its research. Nothing to worry at all!

Replies from: Thomas, MrMind
comment by Thomas · 2015-07-20T15:37:34.724Z · LW(p) · GW(p)

Meanwhile I also saw what Schmidhuber has to say, and it is very interesting. He is talking about the second NN renaissance, which is happening now.

I wouldn't be too surprised if a dirty general AI were achieved this way. Not that it's very likely yet, but it's possible. And it could be quite nasty as well. Perhaps it's not only the most promising avenue, but also the most dangerous one.

Replies from: jacob_cannell
comment by jacob_cannell · 2015-07-21T21:24:17.025Z · LW(p) · GW(p)

And it could be quite nasty as well. Perhaps it's not only the most promising avenue, but also the most dangerous one.

Why do you believe this? Do you think that brain inspired ANN based AI is intrinsically more 'nasty' or dangerous than human brains? Why?

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2015-07-21T23:20:15.448Z · LW(p) · GW(p)

Other agents are dangerous to me to the extent that (1) they don't share my values/goals, and (2) they are powerful enough that in pursuing their own goals, they have little need to take game theoretic consideration of my values. ANN based AI will be similar to other humans in (1), and regarding (2) they are likely to be more powerful than humans since they'll be running on faster, more capable hardware than human brains, and probably have better algorithms as well.

Schmidhuber's best case scenario for superintelligence is that they take no interest in humanity, colonize space and leave us to survive on Earth. What's your best case scenario? Does it seem not much worse to you than the best case scenario for FAI (i.e., if humanity could coordinate to solve the cosmic tragedy of the commons problem and wait until we know how to safely build an AGI that shares some compromise, e.g., weighted average, of all human values)?

Replies from: jacob_cannell
comment by jacob_cannell · 2015-07-22T01:46:28.315Z · LW(p) · GW(p)

Other agents are dangerous to me to the extent that (1) they don't share my values/goals, and (2) they are powerful enough that in pursuing their own goals, they have little need to take game theoretic consideration of my values. ANN based AI will be similar to other humans in (1), and regarding (2) they are likely to be more powerful than humans since they'll be running on faster, more capable hardware than human brains, and probably have better algorithms as well.

Your points 1 and 2 are true, but only in degrees. Humans vary significantly in terms of altruism (1) and power (2). Hitler - from what I've read - is a good example of a powerful, non-altruistic human. Martin Luther King and Gandhi are examples of highly altruistic humans (the first patterned directly after Jesus, the second after Jesus and Buddha). Now, it could be the case that these two were more selfish than they appear at first, because they were motivated by reward in the afterlife. Well, perhaps to a degree, but that line of argument mostly fails as a complete explanation (and even if true, could also potentially become a strategy).

Finally, brain inspired ANNs != human brains. We can take inspiration from the best examples of human capabilities and qualities while avoiding the worst, and then extrapolate to superhuman dimensions.

Altruism can be formalized by group decision/utility functions, where the agent's utility function implements some approximation of the ideal aggregate of some vector of N individual utility functions (a la mechanism design, and Clarke-tax-style policies in particular). A minimal sketch of the simplest such aggregate is below.
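
To make that concrete, here is a toy weighted-sum aggregate (my sketch, assuming the simplest possible form; the mechanism-design literature, Clarke taxes included, uses much richer constructions):

    type Outcome = Int                 -- stand-in for whatever outcomes are
    type Utility = Outcome -> Double

    -- The agent's utility over outcomes is a weighted combination of N
    -- individual utility functions, one per person being aggregated.
    aggregate :: [(Double, Utility)] -> Utility
    aggregate weighted o = sum [w * u o | (w, u) <- weighted]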

What's your best case scenario?

We explore AGI mind space and eventually create millions and then billions of super-wise/smart/benevolent AIs. This leads to a new political system - perhaps based on fast cryptoprotocols and new approximations of ideal group decision policies from mechanism design. Operating systems as we know them are replaced with AIs which eventually become something like mental twins, friends, trusted advisers, and political representatives. The main long term objective of the new AI governance is universal resurrection - implemented perhaps in 100 years or so by turning the moon into a large computing facility. Well before that, existing humans begin uploading into the metaverse.

The average person alive today becomes a basically immortal sim but possesses only upper human intelligence. Those who invested wisely and get in at the right time become entire civilizations unto themselves (gods) - billions or trillions of times more powerful. The power/wealth gap grows without bound. It's like Jesus said: "To him who has is given more, and from him who has nothing is taken all."

However, allocating all of future wealth based on however much wealth someone had on the eve of the singularity is probably sub-optimal. The best case would probably also involve some sort of social welfare allocation policy, where the AIs spend a bunch of time evaluating and judging humans to determine a share of some huge wealth allocation. All the dead people who are recreated as sims will need wealth/resources, so decisions need to be made concerning how much wealth each person gets in the afterlife. There are very strong arguments for the need for wealth/money as an intrinsic component of any practical distributed group decision mechanism.

Perhaps the strongest argument against UFAI likelihood is sim-anthropic: the benevolent posthuman civs/gods (re)create far more historical observers than the UFAIs, as part of universal resurrection. Of course, this still depends on us doing everything in our power to create FAI.

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2015-07-23T00:09:48.365Z · LW(p) · GW(p)

Thanks for the clear explanation of your views. What do you see as the main obstacles to achieving this?

Martin Luther King and Gandhi are examples of highly altruistic humans

I'm really worried that mere altruism isn't enough. If the other agent is more powerful, any subtle differences in values or philosophical views between myself and the other agent could be disastrous, as they optimize the universe according to their values/views which may turn out to be highly suboptimal for me. Consider the difference between average and total utilitarianism, or different views on whether we should assume the universe must be computable, what prior/measure to put on the multiverse, or how to deal with anthropics, e.g. simulation argument.

But I don't want them to blindly accept my current values/views either, since they may be wrong. Humans seem to have some sort of general problem solving / error correcting algorithm which we call "doing philosophy", and maybe we can teach that to ANN-based AI more easily than we could program it by hand, so in that sense maybe ANN-based AI actually could be less "nasty" than other approaches.

To me, achieving a near optimal outcome is difficult but not impossible, given enough time, but I don't see how to get the time. The current leaders in ANN-based AI don't seem to appreciate the magnitude of the threat, or the difficulty of solving the problem. (Besides Schmidhuber, who apparently does see the threat but is ok with it? Now that Bostrom's book has been out for a year, and presumably most people who are ever going to read it have already read it, I'm not sure what's going to change their minds.) Perhaps ANN-based AI could be considered more "nasty" in this sense because it seems easier to be complacent about it, thinking that when the time comes, we'll just teach them our values, whereas trying to design a de novo AGI brings up a bunch of issues like exactly what utility function to give it, or what decision theory or prior, that perhaps make it easier to see the larger problem.

(The other main obstacle I see is the strong economic and psychological incentives to achieve AGI ASAP, but that's the same whether we're talking about ANN-based AI or other kinds of AI.)

Replies from: jacob_cannell
comment by jacob_cannell · 2015-07-23T06:54:48.165Z · LW(p) · GW(p)

Thanks for the clear explanation of your views. What do you see as the main obstacles to achieving this?

My optimistic scenario above assumes not only that we solve the technical problems but also that the current political infrastructure doesn't get in the way - and in fact just allows itself to be dissolved.

In reality, of course, I don't think it will be that simple.

There are technical problems like value learning, and then there are socio-political problems. AGI is likely to cause systemic unemployment and thus a large recession which will force politics to get involved. The ideal scenario may be a shift to increased progressive/corporate tax combined with UBI or something equivalent. In the worst cases we have full scale depression and political instability.

Related to that will be the legal decisions concerning rights for AGI (or lack thereof). AGI rights seem natural, but they will also be difficult to enforce. AGI will be hard to define, and a poor definition can easily lead to strange perverse incentives.

Then there are the folks who don't believe in machine consciousness, or uploading, and basically will view all this as a terrible disaster. It's probably good that we've raised AI risk awareness amongst academics and elites, but AI may now have mainstream branding issues.

One question/concern I have been monitoring for a while now is the response from conservative Christianity. It's not looking good. Google "Singularity image of the beast" to get an idea.

In terms of risk, it could be that most of the risk lies in an individual or group using AGI to takeover the world, not from failure of value learning itself. Many corporations are essentially dictatorships or nearly so - there is no reason for a selfish CEO to encode anyone else's values into the AGI they create. Human risk rather than technical.

I'm really worried that mere altruism isn't enough. If the other agent is more powerful, any subtle differences in values or philosophical views between myself and the other agent could be disastrous, as they optimize the universe according to their values/views which may turn out to be highly suboptimal for me.

You already live in a world filled with a huge sea of agents which have values different than your own. We create new generations of agents all the time, and eventually infuse them with power and responsibility. We don't need to achieve 'perfect' value alignment (and that is probably incoherent regardless). We need only to align value distributions.

That being said, I do believe that the AGI we create will be far more aligned with our values than our children are.

The real fear is perhaps that of being left behind. The only solution to that really is to use AGI to accelerate the development of uploading.

The current leaders in ANN-based AI don't seem to appreciate the magnitude of the threat, or the difficulty of solving the problem.

From what I see, they have a wide spectrum of opinions. Schmidhuber is also unusual in that - for whatever reasons - he's pretty open about his views on the long term future, whereas many other researchers are more reserved.

Also, most of the top ANN researchers do not see a clear near term path to AGI - or else they would be implementing it already. They are focused on extending out from current solutions. Value learning comes later, in terms of natural engineering dependencies.

thinking that when the time comes, we'll just teach them our values,

Well yes - in the ANN approach that is the most likely solution. And actually it's the most likely solution regardless, because designing a human-complexity utility/value function by hand is just not workable.

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2015-07-23T08:12:51.475Z · LW(p) · GW(p)

One question/concern I have been monitoring for a while now is the response from conservative Christianity. It's not looking good. Google "Singularity image of the beast" to get an idea.

What kind of problems do you think this will lead to, down the line?

You already live in a world filled with a huge sea of agents which have values different than your own. We create new generations of agents all the time, and eventually infuse them with power and responsibility.

This is true, but:

  1. I'm not comparing ANN-based AGI to the status quo, but to a future with some sort of near-optimal FAI.
  2. The new agents we currently create aren't much more powerful than ourselves, and cannot take over the universe and foreclose the possibility of a better outcome.
  3. Humans or humanity as a whole seem capable of making moral and philosophical progress, and this capability is likely to persist in future generations. I'm not sure the same will be true of ANN-based AGIs.

That being said, I do believe that the AGI we create will be far more aligned with our values than our children are.

I look forward to your post explaining this, but again my fear is that since to a large extent I don't know what my own values are (especially when it comes to post-Singularity problems like how to reorganize the universe on a large scale, i.e., whether we should run it according to Eliezer's Fun Theory, or convert it to hedonium, or what sort of hedonium exactly, or to spend most of the resources available to me on some sort of attempt to break out of any potential simulations we might be in, or run simulations of my own), straightforward approaches at value learning won't work when it comes to people like me, and there won't be time to work out how to teach the AGI to solve these and other philosophical problems.

The real fear is perhaps that of being left behind. The only solution to that really is to use AGI to accelerate the development of uploading.

Because we care about preserving our personal identities whereas many AGIs probably won't, AGIs will be faced with fewer constraints when it comes to improving themselves or designing new generations of AGIs, and along with a time advantage that is likely quite large in subjective time, this probably means that AGIs will always have a large advantage in intelligence until they reach the maximum feasible level in this universe and human uploads slowly catch up. Are you not worried that during this time, the AGIs will take over the universe and reorganize it according to their imperfect understanding of our values, which will look disastrous when we become superintelligences ourselves and figure out what we really want?

Replies from: jacob_cannell
comment by jacob_cannell · 2015-07-23T23:14:55.239Z · LW(p) · GW(p)

One question/concern I have been monitoring for a while now is the response from conservative Christianity. It's not looking good. Google "Singularity image of the beast" to get an idea.

What kind of problems do you think this will lead to, down the line?

Hopefully none - but the conservative protestant faction seems to have considerable political power in the US, which could lead to policy blunders. Due to that one stupid book (Revelation), the xian biblical worldview is almost programmed to lash out at any future system which offers actual immortality. The controversy over stem cells and cloning is perhaps just the beginning.

On the other hand, out of all religions, liberal xtianity is perhaps closest to transhumanism, and could be its greatest ally.

As an example, consider this quote:

It is a serious thing to live in a society of possible gods and goddesses, to remember that the dullest and most uninteresting person you talk to may one day be a creature which, if you saw it now, you would be strongly tempted to worship.

This sounds like something a transhumanist might say, but it's actually from C.S. Lewis:

The command Be ye perfect is not idealistic gas. Nor is it a command to do the impossible. He is going to make us into creatures that can obey that command. He said (in the Bible) that we were "gods" and He is going to make good His words. If we let Him—for we can prevent Him, if we choose—He will make the feeblest and filthiest of us into a god or goddess, dazzling, radiant, immortal creature, pulsating all through with such energy and joy and wisdom and love as we cannot now imagine, a bright stainless mirror which reflects back to God perfectly (though, of course, on a smaller scale) His own boundless power and delight and goodness. The process will be long and in parts very painful; but that is what we are in for. Nothing less. He meant what He said.

Divinization or apotheosis is one of the main belief currents underlying xtianity, emphasized to varying degrees across sub-variations and across time.

..

[We already create lots of new agents with different beliefs ...]

This is true, but:

  1. I'm not comparing ANN-based AGI to the status quo, but to a future with some sort of near-optimal FAI.

The practical real world FAI that we can create is going to be a civilization that evolves from what we have now - a complex system of agents and hierarchies of agents. ANN-based AGI is a new component, but there is more to a civilization than just the brain hardware.

  2. The new agents we currently create aren't much more powerful than ourselves, and cannot take over the universe and foreclose the possibility of a better outcome.

Humanity today is enormously more powerful than our ancestors from, say, a few thousand years ago. AGI just continues the exponential time-acceleration trend; it doesn't necessarily change the trend.

From the perspective of humanity of a thousand years ago, friendliness mainly boils down to a single factor: will the future posthuman civ resurrect them into a heaven sim?

  3. Humans or humanity as a whole seem capable of making moral and philosophical progress, and this capability is likely to persist in future generations. I'm not sure the same will be true of ANN-based AGIs.

Why not?

One of the main implications of the brain being a ULM is that friendliness is not just a hardware issue. There is a hardware component in terms of the value learning subsystem, but once you solve that, it is mostly a software issue. It's a culture/worldview/education issue. The memetic software of humanity is the same software that we will instill into AGI.

That being said, I do believe that the AGI we create will be far more aligned with our values than our children are.

I look forward to your post explaining this, but again my fear is that since to a large extent I don't know what my own values are (especially when it comes to post-Singularity problems like how to reorganize the universe on a large scale . .

I don't see how that is a problem. You may not know yourself completely, but you have some estimate or distribution over your values. As long as you continue to exist into the future, and as long as you have a significant share in the future decision structure (ie wealth or voting rights), then that should suffice - you will have time to figure out your long term values.

Are you not worried that during this time, the AGIs will take over the universe and reorganize it according to their imperfect understanding of our values, which will look disastrous when we become superintelligences ourselves and figure out what we really want?

This is a potential worry, but it can probably be prevented.

The brain is reasonably efficient in terms of intelligence per unit energy. Brains evolved from the bottom up, and biological cells are near optimal nanocomputers (near optimal in terms of both storage density in DNA, and near optimal in terms of energy cost per irreversible bit op in DNA copying and protein computations). The energetic cost of computation in brains and modern computers alike is dominated by wire energy dissipation in terms of bits/J/mm. Moore's law is approaching its end, which will result in hardware that is on par with or a little better than the brain. With huge investments into software cleverness, we can close the gap and achieve AGI. In 5 years or so, let's say that 1 AGI runs amortized on 1 GPU (neuromorphics doesn't change this picture dramatically). That means an AGI will only require 100 watts of energy and say $1,000/year. That is about a 100x productivity increase, but in a pinch humans can survive on only $10,000 a year.

Today the foundry industry produces about 10 million mid-high end GPUs per year. There are about 100 million human births per year, and around 4 million per year in the US. Of course if we consider only humans with IQ > 135, then there are only 1 million high IQ humans born per year. This puts some constraints on the likely transition time, and it is likely measured in years.
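
(Making the implied arithmetic explicit, under the one-AGI-per-GPU assumption above: 10 million GPUs per year against roughly 1 million IQ > 135 births per year means the AGI 'population' could grow about an order of magnitude faster than the high-IQ human cohort, which is why the transition plausibly takes years rather than decades.)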

We don't need to instill values so perfectly that we can rely on our AGI to solve all of our problems until the end of time - we just need AGI to be similar enough to us that it can function as at least a replacement for future human generations and fulfill the game theoretic pact across time of FAI/god/resurrection.

Replies from: gjm, Wei_Dai
comment by gjm · 2015-07-24T11:29:28.439Z · LW(p) · GW(p)

liberal xtianity is perhaps closest to transhumanism, and could be its greatest ally

There's some truth in the first half of that, but I'm not so sure about the second. Expecting that God will at some point transform us into something beyond present-day humanity is a very different thing from planning to make that transformation ourselves. That whole "playing God" accusation probably gets worse, rather than better, if you're actually expecting God to do the thing in question on his own terms and his own schedule.

For a far-from-perfect analogy, consider the interaction between creationism and climate change. You might say: Those who fear that human activity might lead to disastrous changes in the climate, including serious harm to humanity, should find their greatest allies in those who believe that in the past God brought about a disastrous change in the earth's climate and wrought serious harm to humanity. But, no, of course it doesn't actually work that way; what actually happens is that creationists say "human activity can't harm the climate much; God promised no more worldwide floods" or "the alleged human influence on climate is on a long timescale, and God will be wrapping everything up soon anyway".

Replies from: jacob_cannell
comment by jacob_cannell · 2015-07-24T14:00:59.732Z · LW(p) · GW(p)

Expecting that God will at some point transform us into something beyond present-day humanity is a very different thing from planning to make that transformation ourselves.

Not necessarily. There is this whole idea that we are god or some aspect of god - as Jesus famously said, "Is it not written in your law, I said, Ye are gods?" There is also the interesting concept in early xtianity that christ became a sort of distributed mind - that the church is literally the risen christ. Teilhard de Chardin gave a modern spin on that old idea. See also the assimilation saying. Paul thought something similar when he said things like "It is no longer I who live, but Christ who lives in me". So there is this strong tradition that Christ is something that can inhabit people. In that tradition (which really is the most authentic) god builds the kingdom through humans. Equating the 'kingdom' with a positive singularity is a no-brainer.

Yes the literalist faction will always wait for some external event, and to them Christ is a singular physical being, but that isn't the high IQ faction of xtianity.

For a far-from-perfect analogy, consider the interaction between creationism and climate change.

Creationists are biblical literalists - any hope for an ally is in the more sophisticated liberal variants.

comment by Wei Dai (Wei_Dai) · 2015-07-24T01:26:32.976Z · LW(p) · GW(p)

Why not?

Different configurations of artificial neurons (e.g., RNNs vs CNNs) are better at learning different things. If you build an AGI and don't test whether it can learn to do philosophy, it may not be able to learn to do philosophy very well. In the rush to build AGIs in order to reap the economic benefits, people probably won't have time to test for this.

The memetic software of humanity is the same software that we will instill into AGI.

I'm guessing that AGIs will have a very different distribution of capabilities from humans (e.g., they'll have much more working memory, and be able to do complex calculations instantaneously and with very low error, but bad at certain things that we neglect to optimize for when building them) so they'll probably develop a different set of memetic software that's more optimal for them.

As long as you continue to exist into the future, and as long as you have a significant share in the future decision structure (ie wealth or voting rights), then that should suffice - you will have time to figure out your long term values.

I guess that could potentially work while AGIs are maxed out at human level or slightly beyond and costing $1000/year, but I'm not very optimistic that any social structure we come up with could preserve our share of the universe as the AGIs improve themselves and become more powerful. For example, if an AGI or a group of AGIs figures out a way to colonize the universe using resources under their sole control, why would they give the rest of us a share?

Today the foundry industry produces about 10 million mid-high end GPUs per year.

Surely there are lots of foundries (Intel's for example) that could be retooled to build GPUs if it became profitable to do so?

This puts some constraints on the likely transition time, and it is likely measured in years.

The hope is that we use this time to develop the necessary social structures to prevent AGIs from taking over the universe (without giving us a significant share of it)?

Replies from: jacob_cannell
comment by jacob_cannell · 2015-07-24T05:24:59.404Z · LW(p) · GW(p)

If you build an AGI and don't test whether it can learn to do philosophy, it may not be able to learn to do philosophy very well.

AGI to me is synonymous with a universal learning machine, and in particular with a ULM that learns at human capability. Philosophy is highly unlikely to require any specialized structures - because humans do philosophy with the same general cortical circuitry that's used for everything else.

In the rush to build AGIs in order to reap the economic benefits, people probably won't have time to test for this.

This is a potential problem, but the solution comes naturally if you - do the unthinkable for LWers - and think of AGI as persons/citizens. States invest heavily into educating new citizens beyond just economic productivity, as new people have rights and control privileges, so it's important to ensure a certain level of value alignment with the state/society at large.

In particular - and this is key - we do not allow religions or corporations to raise people with arbitrary values.

I'm not very optimistic that any social structure we come up with could preserve our share of the universe as the AGIs improve themselves and become more powerful.

Yeah - but we only need to manage the transition until human uploading. Uploading has enormous economic value - it is the killer derived app for AGI tech, and brain inspired AGI in particular. It seems far now mainly because AGI still seems far, but given AGI, change will happen quickly: first there will be a large wealth transfer to those who developed AGI and/or predicted it, and consequently uploading will become up-prioritized.

Surely there are lots of foundries (Intel's for example) that could be retooled to build GPUs if it became profitable to do so?

Yeah - it could be pumped up to 10x current output fairly easily, and perhaps even 100x given a few years.

The hope is that we use this time to develop the necessary social structures to prevent AGIs from taking over the universe (without giving us a significant share of it)?

I expect that individual companies will develop their own training/educational protocols. Government will need some significant prodding to get involved quickly, otherwise they will move very slowly. So the first corps or groups to develop AGI could have a great deal of influence.

One variable of interest - which I am uncertain of - is the timetable involved in forcing a key decision through the court system. For example - say company X creates AGI. Somebody then sues them on behalf of their AGIs for child neglect or rights violation or whatever - how long does it take the court to decide if and what types of software could be considered citizens? The difference between 1 year and say 10 could be quite significant.

At the moment it looks like the most straightforward route to having high leverage over the future is to be involved in the creation of AGI.

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2015-07-26T01:54:09.765Z · LW(p) · GW(p)

AGI to me is synonymous with a universal learning machine, and in particular with a ULM that learns at human capability. Philosophy is highly unlikely to require any specialized structures - because humans do philosophy with the same general cortical circuitry that's used for everything else.

I also have some hope that philosophy ability essentially comes "for free" with general intelligence, but I'm not sure I want to bet the future of the universe on it. Also, an AGI may be capable of learning to do philosophy but not motivated to do it, or not motivated to follow the implications of its own philosophical reasoning. A lot of humans, for example, don't seem to have much interest in philosophy, but instead pursue things like maximizing wealth and status.

This is a potential problem, but the solution comes naturally if you - do the unthinkable for LWers - and think of AGI as persons/citizens.

Do you have detailed ideas of how that would work? For example if in 2030, we can make a copy of an AGI for $1000 (cost of a GPU) and that cost keeps decreasing, do we give each of them an equal vote? How do we enforce AGI rights and responsibilities if eventually anyone could buy a GPU card, download some open source software and make a new AGI?

Yeah - but we only need to manage the transition until human uploading. Uploading has enormous economic value - it is the killer derived app for AGI tech, and brain inspired AGI in particular.

I argued in a previous comment that it's unlikely that uploads will be able to match AGIs in intelligence until AGIs reach the maximum feasible level allowed by physics and uploads catch up, but I don't think you responded to that argument. If I'm correct in this, it doesn't seem like the development of uploading tech will make any difference. Why do you think it's a crucial threshold?

how long does it take the court to decide if and what types of software could be considered citizens? The difference between 1 year and say 10 could be quite significant.

Even 10 years seem too optimistic to me. I think a better bet, if we want to take this approach, would be to convince governments to pass laws ahead of time, or prepare them to pass the necessary laws quickly once we get AGIs. But again, what laws would you want these to be, in detail?

comment by MrMind · 2015-07-21T06:54:44.828Z · LW(p) · GW(p)

Nothing to worry at all!

Yep, nothing at all!

comment by jacob_cannell · 2015-07-21T21:27:59.273Z · LW(p) · GW(p)

Oh yes.

A month ago I touched on this topic in "The Brain as a Universal Learning Machine". I intend to soon write a post or two specifically focusing on near term predictions for the future of DL AI leading to AGI. My main counterintuitive point is that the brain is actually not that powerful at all at the circuit level.

Replies from: Thomas
comment by Thomas · 2015-07-21T21:39:15.744Z · LW(p) · GW(p)

My main counterintuitive point is that the brain is actually not that powerful at all at the circuit level.

Quite possible, even quite likely. I think that nature is trying to tell us this, by just how bad we humans are at arithmetic, for example.

Replies from: Houshalter
comment by Houshalter · 2015-07-22T00:16:13.707Z · LW(p) · GW(p)

It's not the algorithms, it's the circuitry itself that is inefficient. Signals propagate slowly through the brain. They require chemical reactions. Neurons are actually fairly big. You could fill the same space with many smaller transistors.

comment by Richard_Kennaway · 2015-07-21T10:40:40.978Z · LW(p) · GW(p)

Here comes the future, unevenly distributed. For crime-fighting purposes, Kuwait intends to record the genome of all of its citizens.

Replies from: None
comment by [deleted] · 2015-07-21T15:02:03.236Z · LW(p) · GW(p)

Random analysis! From the fact that they anticipate using $400 million to record and track about 4 million people, you can tell they are talking about using microarrays to log SNP profiles (like 23andme) or microsatellite repeat lengths or some otherwise cheap and easy marker-based approach, rather than de novo sequencing. De novo sequencing of that many people would be much more human DNA sequence data than has ever been produced in the history of the world, would clog up the current world complement of high-throughput sequencers for a long time, would be no more useful for legal purposes, and would probably cost $40 billion+ (probably more to develop infrastructure).
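
(Back-of-the-envelope from the numbers above: $400 million / 4 million people = $100 per person, which is marker/array territory, whereas the $40 billion sequencing figure implicitly prices de novo sequencing at about $10,000 per genome, since 4 million x $10,000 = $40 billion.)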

Replies from: Douglas_Knight
comment by Douglas_Knight · 2015-07-22T06:00:18.133Z · LW(p) · GW(p)

Iceland has managed to guess the complete sequence for all of its residents from SNPs by getting complete sequences of 3%. (Not that crime-fighting would use anything more than SNPs.)

Replies from: Lumifer
comment by Lumifer · 2015-07-22T15:47:16.317Z · LW(p) · GW(p)

Iceland has managed to guess the complete sequence for all of its residents from SNPs

Does not compute.

You can "guess" some statistical averages for the whole population, but you cannot "guess" the complete sequence for any particular individual.

Replies from: gwern
comment by gwern · 2015-07-22T18:40:29.954Z · LW(p) · GW(p)

but you cannot "guess" the complete sequence for any particular individual.

Of course you can. If you have a giant complete pedigree for most or all of the population and you have SNPs or whole-genomes for a small fraction of the members, and especially if it's a highly homogenous population, then you can impute full genomes with varying but still-far-better-than-whole-population-base-rate accuracy for any particular entry (person) in the family tree. They're all highly correlated. This is no odder than noting that you can infer a lot about a parent's genome from one or two children's genomes despite never seeing the parent's genome. Your first cousin's genome says a lot about your genome, and even more if one can put it into a family tree and also has one of your grandparents' genomes. And if you have all the family trees and samples from most of them...

(This will not work too well for Kuwait since while the citizens may be highly inbred, they do not have the same genealogical records, and citizens are, IIRC, outnumbered by resident foreigners who are drawn from all over the world and especially poor countries. But it does work for Iceland.)

Replies from: Douglas_Knight, Lumifer
comment by Douglas_Knight · 2015-07-22T19:31:16.493Z · LW(p) · GW(p)

All the coverage says that they used pedigrees, but I'd think that they could be reconstructed from SNPs, rather more accurately.

Replies from: gwern
comment by gwern · 2015-07-22T19:35:30.697Z · LW(p) · GW(p)

Throwing away data is rarely helpful.

comment by Lumifer · 2015-07-22T18:53:47.904Z · LW(p) · GW(p)

you can impute full genomes with varying but still-better-than-whole-population-base-rate accuracy for any particular entry in the family tree.

True. But when the OP says "guess the complete sequence" I assume a much higher accuracy than just somewhat better than the base rate.

You can produce an estimate for the full sequence just on the basis of knowing that the subject is human (with some low accuracy), you can produce a better estimate if you know the subject's race, you can produce an even better one if you know the specific ethnic background, etc. It's still a statistical estimate and as such is quite different from actually sequencing the DNA of a specific individual.

Replies from: gwern
comment by gwern · 2015-07-22T19:04:21.990Z · LW(p) · GW(p)

I assume a much higher accuracy than just somewhat better than the base rate.

How much higher would that be and how do you know the Icelandic imputations do not meet your standards?

It's still a statistical estimate and as such is quite different from actually sequencing the DNA of a specific individual.

An 'actual' sequence is itself a 'statistical estimate', since even with 30x coverage there will still be a lot of errors... (It's statistics all the way down, is what I'm saying.) For many purposes, the imputation can be good enough. DNA databases have already shown their utility in tracking down criminals who are not sampled in them but whose relatives are. From a Kuwaiti perspective, your quibbles are uninteresting.

Replies from: Lumifer
comment by Lumifer · 2015-07-22T19:29:23.284Z · LW(p) · GW(p)

From a Kuwaiti perspective, your quibbles are uninteresting.

You don't look like a Kuwaiti :-P And, of course, interestingness is in the eye of the beholder...

comment by Vaniver · 2015-07-21T17:29:00.381Z · LW(p) · GW(p)

DSCOVR is finally at L1 and transmitting back photos. I'm using that one as my new desktop background.

I remember being excited about this more than a decade ago; it's somewhat horrifying to realize that it took longer than New Horizons to reach its destination, though it was traveling through politics, rather than space.

(The non-spectacle value of this mission is at least twofold: the other side of it does solar measurements and replaces earlier CME early warning systems, and this side of it gives us a single temperature and albedo measurement for the Earth, helping with a handful of problems in climate measurement, and thus helping with climate modeling.)

Replies from: None, Gunnar_Zarncke
comment by [deleted] · 2015-07-22T01:34:50.225Z · LW(p) · GW(p)

You can see the smoke from the record-breaking recent Canadian and Alaskan wildfires in the photos. Those clouds drifted all the way over here to North Carolina shortly after those pictures were taken.

comment by Gunnar_Zarncke · 2015-07-22T20:17:06.615Z · LW(p) · GW(p)

I'd really like to see the photos taken in the 7 other wavelength bands, especially near infrared, and compare them to the RGB picture. One should be able to see clouds and oceans in the IR too.

comment by [deleted] · 2015-07-21T06:13:50.733Z · LW(p) · GW(p)

This question is inspired by the surprisingly complicated Wikipedia page on correlation and dependence. Can you explain distance correlation and Brownian covariance, as well as the 'randomized dependence coefficient', in layman's terms, and their applications, particularly for rationalists? How about the 'correlation ratio', 'polychoric correlation' and 'coefficient of determination'?

Replies from: MrMind, MrMind
comment by MrMind · 2015-07-21T08:56:40.933Z · LW(p) · GW(p)

All your links are belong to wrongness. Please delete the 'www' before en. in en.wikipedia.

Replies from: tim
comment by tim · 2015-07-23T04:13:08.423Z · LW(p) · GW(p)

Clarity, you have a large number of comments with incorrect Wikipedia links. Your "introspective illusion" comment directly above this one does it correctly. You clearly are capable of generating functional links to Wikipedia pages.

Please take a few minutes to make your recent comments less frustrating to read. It is frankly astounding that so many people have given you this feedback and you are still posting these broken links.

Replies from: Elo
comment by Elo · 2015-07-24T00:22:37.999Z · LW(p) · GW(p)

This post would need to be in response to his post (not a reply at a lower level) or he would not get a notification about it.

comment by MrMind · 2015-07-21T09:23:03.113Z · LW(p) · GW(p)

A first broad attempt.

The stage is set up in this way: you observe two sets of data, which your model indicates as coming from two distinct sources. The question is: are the two sets related in any way? If so, how much? The 'measure' of this relatedness is usually called correlation.

From an objective Bayesian point of view, it doesn't make much sense to talk about correlation between two random variables (it makes no sense to talk about random variables either, but that's another story), because correlation is always model dependent, and probabilities are epistemic. Two agents observing the same phenomenon, having different information about it, may very well come to totally opposite conclusions.

From a frequentist point of view, though, the correlation between variables expresses an objective quantity, and all the measures you mention are attempts at finding out how much correlation there is, making more or less explicit assumptions about your model.

If you think that the two sources are linearly related, then the Pearson coefficient will tell you how much the data supports the model.
If you think the two variables come from a continuous normal distribution, but you can only observe their integer values, you use polychoric correlation. And so on...
Depending on the assumptions you make, there are different measures of how correlated the data are.
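To make the contrast concrete, here is a minimal sketch in Python (numpy assumed; the distance correlation function is my own hand-rolled version of Szekely's definition, not a library call): Pearson's r only sees linear dependence, while distance correlation also detects nonlinear dependence.

    import numpy as np

    def distance_correlation(x, y):
        # Sample distance correlation (Szekely et al. 2007): zero in the
        # population iff x and y are independent, unlike Pearson's r.
        x, y = np.asarray(x, float), np.asarray(y, float)

        def doubly_centered(v):
            d = np.abs(v[:, None] - v[None, :])  # pairwise distances
            return d - d.mean(axis=0) - d.mean(axis=1)[:, None] + d.mean()

        A, B = doubly_centered(x), doubly_centered(y)
        dcov2 = (A * B).mean()
        denom = np.sqrt((A * A).mean() * (B * B).mean())
        return np.sqrt(dcov2 / denom) if denom > 0 else 0.0

    rng = np.random.default_rng(0)
    x = rng.uniform(-1, 1, 1000)
    y = x ** 2                           # fully dependent, but not linearly
    print(np.corrcoef(x, y)[0, 1])       # Pearson r: near 0, misses it
    print(distance_correlation(x, y))    # distance correlation: clearly > 0

The design difference is exactly the model assumption: Pearson bakes in linearity, distance correlation only assumes you care about dependence of any shape.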

comment by [deleted] · 2015-07-21T10:09:41.524Z · LW(p) · GW(p)

Is EY's Cognitive Trope Therapy for real or a parody?

It sounds parodic yet comes across as weirdly workable. There is a voice in my head telling me I should not respect myself until I become more of a classical tough-guy type, full of courage and strength. However, it does not sound like my father did. It sounds a lot like a teenage bully, actually. My father sounded a lot more like "show yourself respect by expecting a bit more courage or endurance from yourself." Hm. Carl Jung would have a field day with it.

Replies from: fubarobfusco, philh, ChristianKl, DanielLC, Richard_Kennaway
comment by fubarobfusco · 2015-07-21T15:38:15.811Z · LW(p) · GW(p)

It sounds parodic yet comes across as weirdly workable.

Two quotes come to mind (emphasis added) —


He therefore said: "Let me declare this Work under this title: ‘The obtaining of the Knowledge and Conversation of the Holy Guardian Angel’", because the theory implied in these words is so patently absurd that only simpletons would waste much time in analysing it. It would be accepted as a convention, and no one would incur the grave danger of building a philosophical system upon it.

[...] The mind is the great enemy; so, by invoking enthusiastically a person whom we know not to exist, we are rebuking that mind.

— Aleister Crowley, Magick in Theory and Practice


ROSE: I have no comprehensible path. There's nothing to overcome, no lesson to learn, no cathartic light at the end of this preposterous tunnel.
ROSE: Not for me, at least!
ROSE: I seriously have the DUMBEST arc anyone could conceivably imagine.
DAVE: rose we dont have fuckin "arcs" we are just human beings

Homestuck

Replies from: None
comment by [deleted] · 2015-07-21T15:49:03.542Z · LW(p) · GW(p)

I am not sure about Crowley's point -- the mind being the great enemy, as in the mind making all sorts of excuses and rationalizations? That is almost trivially true; however, I think using other parts of the mind to defeat those parts may work better than shutting the whole thing down, because then what else can we work with?

It is similar to taking acid. Why do some, but only some, people have really deep satori experiences from acid? Acid is just a hallucinogen; it is not supposed to do much. But sometimes the hallucinations overload and shut down big parts of the mind, and then we pay attention to the rest, and this can lead to the kinds of ego-loss, one-with-everything insights. But isn't it really a brute-force way? It's like wearing a blindfold for months to improve our hearing.

comment by philh · 2015-07-21T22:04:56.978Z · LW(p) · GW(p)

Is EY's Cognitive Trope Therapy for real or a parody?

One might ask the same question of HPMOR.

He's being serious, but not solemn.

comment by ChristianKl · 2015-07-21T11:15:44.506Z · LW(p) · GW(p)

It's for real.

If you want to dig deeper into the idea of seeing your life as a story, read the Hero's Journey by Joseph Campbell and associated literature.

Replies from: None
comment by [deleted] · 2015-07-21T11:33:25.608Z · LW(p) · GW(p)

But that one is about how myths and legends around the world seem to follow the same pattern. And then we saw Tolkien and George Lucas following it consciously with LOTR and Star Wars, and then Harry Potter, The Matrix etc. were modelled on those works. Campbell did figure out an ancient pattern for truly immersive entertainment, that much is for sure.

But did Campbell really come up with the idea that Average Guy could also use myths about legendary heroes to reflect upon and improve his own rather petty life? I don't think in the past people were taking self-help advice from Heracles and Achilles or in the modern world from Neo and Luke Skywalker... it must have been obvious that you, as Mr. Average Guy, are not made from the same mold as them; besides, they are fiction anyway, right?

Replies from: Richard_Kennaway, ChristianKl
comment by Richard_Kennaway · 2015-07-21T12:31:12.471Z · LW(p) · GW(p)

I don't think in the past people were taking self-help advice from Heracles and Achilles or in the modern world from Neo and Luke Skywalker

I don't know how the ancient Greeks related to their legends (although I'm sure that historians of the period do, and it would be worth knowing what they say), but The Matrix and Star Wars are certainly used in that way. Just google "red pill", or "Do or do not. There is no try." And these things aren't just made up by the storytellers. The ideas have long histories.

Literature is full of such practical morality. That is one of its primary functions, from children's fairy tales ("The Ugly Duckling", "The Little Red Hen", "Stone Soup") to high literature (e.g. Dostoevsky, Dickens, "1984"). Peter Watts ("Blindsight") isn't just writing an entertaining story, he's presenting ideas about the nature of mind and consciousness. Golden age sensuwunda SF is saying "we can and will make the world and ourselves vastly better", and has indeed been an inspiration to some of those who went out and did that.

Whenever you think you're just being entertained, look again.

comment by ChristianKl · 2015-07-21T12:05:24.673Z · LW(p) · GW(p)

But did Campbell really come up with the idea that Average Guy could also use myths about legendary heroes to reflect upon and improve his own rather petty life?

I'm not sure to what extent Campbell personally advocated the Hero's Journey to be used by "Mr. Average Guy", but various NLP folks I know refer to the Hero's Journey in that regard. Stephen Gilligan and Robert Dilts wrote http://www.amazon.com/The-Heros-Journey-Voyage-Discovery/dp/1845902866 Of course, then the average guy stops being the average guy. In Eliezer's words, he starts taking heroic responsibility.

comment by DanielLC · 2015-07-21T21:00:29.948Z · LW(p) · GW(p)

What I always feel like a character should do in that situation (technology permitting) is to turn on a tape recorder, fight the villain, and listen to what they have to say afterwards. And then try to figure out how to fix the problems the villain is pointing out instead of just feeling bad about themselves.

I guess that sort of works for this. You could write down what the voice in your head is saying, and then read it when you're not feeling terrible about yourself. And discuss it with other people and see what they think.

The problem with just trusting someone else is that unless you are already on your deathbed, and sometimes not even then, there is nothing you can say where their response will be "killing yourself would probably be a good idea". There is no correlation between their response and the truth, so asking them is worthless.

comment by Richard_Kennaway · 2015-07-21T10:36:45.661Z · LW(p) · GW(p)

Is EY's Cognitive Trope Therapy for real or a parody?

I think it's completely serious, and a good idea. And "se non è vero, è ben trovato" ("even if it is not true, it is well conceived"). I'm never without my Cudgel of Modus Tollens.

comment by Error · 2015-07-21T02:48:26.413Z · LW(p) · GW(p)

One of our cats (really, my cat) escaped a few days ago after a cat carrier accident. In between working to find her and having emotional breakdowns, I find myself wanting to know what the actual odds of recovering her are. I can find statistics for "the percentage of pets at a shelter for whom original owners were found", but not "the percentage of lost pets that eventually make it back to their owners by any means." Can anyone do better? I don't like fighting unknown odds.

Additionally, if anyone has experience-based advice for locating lost pets -- specifically an overly anxious indoor cat that escaped outdoors -- it would be helpful. We have fliers up around the neighborhood, cat traps in the woods where we believe she's hiding, and trail cameras set up to try and confirm her location. Foot searches are difficult because of the heat and terrain (I came back with heat exhaustion the first day). I guess what I'm specifically looking for from LW is "here is something you should do that you're overlooking because of bias X/trying to try/similar."

Replies from: None, jam_brand, None
comment by [deleted] · 2015-07-21T04:12:59.403Z · LW(p) · GW(p)

In my one experience with such a situation, we found our cat (also female, but an outdoor cat) a few days later in a nearby tree. I've seen evidence that other cats also may stay in a single tree for days when scared, notably when a neighbor's indoor cat escaped and was found days later stuck up a tree. Climbing down is more difficult than climbing up, so inexperienced cats getting stuck in trees is somewhat common. My best advice is to check all the nearby trees very thoroughly.

Also, food-related sounds may encourage her to approach, if there are any she is accustomed to, such as food rattling in a dish or tapping on a can of cat food with a fork.

comment by jam_brand · 2015-07-25T01:04:49.167Z · LW(p) · GW(p)

Here are some links I compiled on this topic recently when my cousin lost her cat. Best of luck!

TIPS

http://www.missingpetpartnership.org/recovery-tips/lost-cat-shelter-tip-sheet/
http://www.missingpetpartnership.org/recovery-tips/lost-cat-behavior/
http://www.catsinthebag.org/

(CONSULTING) DETECTIVES

http://www.missingpetpartnership.org/lost-pet-help/find-a-pet-detective/pet-detective-directory/
http://www.getmycat.com/pet-detective-database/ (not all consult via phone & email, but it seems many do, e.g. http://www.catprofiler.com/services.html)

eBOOKS

The following book apparently has an epilogue regarding finding missing pets: http://smile.amazon.com/Pet-Tracker-Amazing-Rachel-Detective-ebook/dp/B00UNPGD9Y/ (there's also an older, dead-tree edition called The Lost Pet Chronicles - Adventures of a K-9 Cop Turned Pet Detective)

http://smile.amazon.com/Three-Retrievers-Guide-Finding-Your/dp/1489577874/
http://www.sherlockbones.com/
http://www.lostcatfinder.com/lost_cat_finder/search_tips.html

FORUM: https://groups.yahoo.com/neo/groups/MissingCatAssistance/info

Replies from: Error
comment by Error · 2015-07-25T16:58:27.568Z · LW(p) · GW(p)

Looks like we chased the same set of links... I have most of those open in tabs right now. Thank you, though. We're still searching. Supposedly, frightened indoor cats can spend 10-12 days in hiding before hunger drives them out. We're at day eight now. It feels about five times as long as that.

Did your cousin's cat make it home?

Replies from: jam_brand
comment by jam_brand · 2015-07-27T03:52:34.503Z · LW(p) · GW(p)

She did, yes. It took 9 days and predictably she lost some weight, but she's otherwise ok. Anyway, I hope you can report similarly good news yourself soon.

Replies from: Error
comment by Error · 2015-07-27T13:51:36.219Z · LW(p) · GW(p)

I hope so too. We're up to day 11 now. -_-

How did they get the cat back?

Replies from: jam_brand
comment by jam_brand · 2015-07-28T07:54:48.226Z · LW(p) · GW(p)

On the last night while searching at the end of the road she lives on, my cousin noticed some movement by a mostly empty lot and when she approached she saw Lily (the cat) run into some weeds there. I wish I could say there was "one weird trick" that definitely helped, but it was actually more like a flurry of facebooking -- as much for getting emotional support as for finding leads -- and being vigilant enough to be in a position to get lucky.

comment by [deleted] · 2015-07-21T19:30:16.977Z · LW(p) · GW(p)

I recommend that you contact local shelters and search their lost & found sections. Craigslist also has a good lost & found section.

Useful info here, even if you don't live in Boston: http://www.mspca.org/adoption/boston/lost-and-found/lost.html

Replies from: jam_brand
comment by jam_brand · 2015-07-25T01:04:31.348Z · LW(p) · GW(p)

In addition to talking to animal shelters, checking in with local veterinarians could be useful as well.

comment by [deleted] · 2015-07-21T06:19:04.921Z · LW(p) · GW(p)

ds

comment by fubarobfusco · 2015-07-24T14:36:20.049Z · LW(p) · GW(p)

If you think you have come up with a solid, evidence-based reason that you personally should be furious, self-hating, or miserable, bear in mind that these conditions may make you unusually prone to confirmation bias.

Replies from: chaosmage
comment by chaosmage · 2015-07-24T22:24:59.156Z · LW(p) · GW(p)

Doesn't every strong emotion take up cognitive capacity that is then unavailable for critical thought? Why do you single out fury, self-hate and being miserable?

Replies from: fubarobfusco
comment by fubarobfusco · 2015-07-25T02:13:08.736Z · LW(p) · GW(p)

It's not just a matter of cognitive capacity being occupied; it's a matter of some emotional tendencies being self-limiting while others are self-reinforcing. Miserable people seem to often look for reasons to be miserable; angry people often do obnoxious things to others, which puts the angry person in situations that provoke further anger.

comment by NancyLebovitz · 2015-07-20T14:19:47.454Z · LW(p) · GW(p)

Tim Ferriss interviews Josh Waitzkin

The whole thing is interesting, but there's a section which might be especially interesting to rationalists, about observing sunk cost fallacies in one's own strategies -- having an idea that looks good and getting so attached to it that one fails to notice the idea is no longer as good as it looked at the beginning.

Unfortunately, I can't find the section quickly -- I hope someone else does and posts the time stamp.

Replies from: None
comment by [deleted] · 2015-07-21T08:22:34.766Z · LW(p) · GW(p)

What I was wondering lately is whether the sunk cost fallacy and commitment devices are two sides of the same coin. Sometimes people need to abandon dysfunctional projects no matter how much they have invested; on the other hand, motivating yourself not to abandon a good habit is hard, and one way to do that is to sunk-cost-trip yourself. Commitment devices like chains.cc, habit diaries and so on work more or less that way.

This sounds a lot like that kind of second-order rationality that, according to EY, does not exist: these commitment devices work by focusing on an irrational argument ("don't break the chain now, look at what a good record you have so far") instead of a rational one ("it makes no sense to abandon this good habit now"), because our brain is wired to accept the irrational one far more easily...

comment by Stefan_Schubert · 2015-07-23T22:34:14.407Z · LW(p) · GW(p)

Does anyone know if there is any data on the political views of the Effective Altruist community? Can't find it in the EA survey.

comment by [deleted] · 2015-07-20T08:12:54.211Z · LW(p) · GW(p)

There is an interesting startup that is about turning cities into villages by getting neighbors to help each other. You need to verify your address via a scanned document, a neighbor, or a code on a postcard they send you. I think the primary reason they find that verification important is that people are allowed to see the full name, picture and address of people in their own neighborhood, and they probably don't want to share that with people who are not actually neighbors. This seems to be the key selling point of this startup, and how it differs from any basic neighborhood-based Facebook group: you really get to see each other's face, name and address, and people outside your hood really don't get to see them, so you can be fairly comfortable about sharing them. Besides, you can choose a few categories of how you can help others, e.g. babysitting, petsitting etc., and what kind of common activities you would be interested in.

Here is the bad news: the startup is currently only available in German and only in the city of Vienna, probably due to the postcard thing. They managed to find investors, so it is likely they will have an English version and extend it all over the world, in which case they will probably change the name as well; currently the name is fragnebenan.com. But I have no idea when this will happen.

Anyway, I was thinking primarily that Rationalists in Berlin may take an interest in this and help them extend fragnebenan.com to Berlin?

Replies from: chaosmage, ChristianKl
comment by chaosmage · 2015-07-20T10:40:13.271Z · LW(p) · GW(p)

This seems quite absurd. Why would I give my data to an obscure startup (who'll probably sell it sooner or later) and hope people in my neighborhood make the same choice, when I can probably have way better results simply inviting my neighbors for a BBQ?

Replies from: None, None
comment by [deleted] · 2015-07-20T22:19:59.702Z · LW(p) · GW(p)

How many barbeques have you actually thrown?

Of the barbeques you have thrown, how many of those have led to mutually beneficial arrangements?

Of those that have led to mutually beneficial arrangements, how many per BBQ?

Now, how much time have you put into arranging those BBQs vs. the value gotten from those BBQs?

I don't know about your answer, but for me (substituting dinner parties for BBQs) the answers respectively are probably about 10, 3, less than one, and WAYYY TOO MUCH (if these types of arrangements were my only justification for throwing dinner parties).

Now contrast this with how much time I've spent going through the free stuff offered on Craigslist vs. the value I've gotten from it. The effort/value ratio is probably the inverse. I think a startup that takes the "free services/free stuff" part of Craigslist but solves the unique problems of that segment (similar to what Airbnb has done for housing) could offer significant value.

Replies from: chaosmage
comment by chaosmage · 2015-07-21T11:52:16.577Z · LW(p) · GW(p)

I didn't do mere BBQs but threw full-on parties with the neighbors (whom I didn't know at all) and other friends. Later, two shared apartments in the same house jointly held a huge party that spanned the house and included many of the neighbors. Many good friendships came out of that, and a couple of us moved in together later.

The BBQ idea is just a low-threshold variant of that which doesn't require copious amounts of alcohol.

For free stuff, we just have a place in the staircase where people drop things that are still good but not needed by their previous owner (mostly books). This works with zero explicit coordination.

Replies from: Emily, None
comment by Emily · 2015-07-22T11:18:19.228Z · LW(p) · GW(p)

For free stuff, we just have a place in the staircase where people drop things that are still good but not needed by their previous owner (mostly books). This works with zero explicit coordination.

I'm kind of amazed/impressed that this works, based on my experience of communal spaces. Don't people ever leave junk that they can't be bothered to get rid of? Does anyone adopt responsibility for getting rid of items that have been there a long time and clearly no one wants?

comment by [deleted] · 2015-07-21T19:58:05.184Z · LW(p) · GW(p)

The bigger the party, the bigger the investment; this does not scale the same way a website does. The same goes for putting out free stuff on the steps.

comment by [deleted] · 2015-07-20T10:58:00.563Z · LW(p) · GW(p)

A BBQ would not be allowed on my third-floor apartment's balcony, as it would stink up the place and be dangerous as well, and I have no idea where I could store the equipment when not in use, as we have little spare space; and my neighbors would be very creeped out if I would just ring on their door and invite them. We have lived in the same apartment since 2012 and never even talked to the neighbors or had a chat. People tend to be very indifferent to each other in this apartment complex, and I have no better experience with former ones either. These guys are trying to make a site that acts as an icebreaker: if you really need dog-sitting one day, you can try to ask there, and if someone helps you out, then you have a form of connection, and maybe you will have a chat afterwards, and maybe greet each other and stop for a chat the next time you see each other.

The very idea is that the world is urbanizing; due to jobs and all that, people who like the more communal village lifestyle are forced into cities, where they suffer from the general indifference and impersonality, so these guys try to change that and make cities more village-like or suburbia-like. They try to counteract the negative psychological effects of urbanization with a "let's open our doors to each other" theme.

As for selling data, they have the same data as my utility company: they can link a name with an address. Anyone who walks up to our house will see the name on the door anyway. And a photo, OK, that is more. But overall this is not secret data, nor very sensitive.

Replies from: chaosmage, ChristianKl
comment by chaosmage · 2015-07-20T13:39:06.407Z · LW(p) · GW(p)

So don't have the BBQ on your balcony, but down in the yard. And don't invite people by knocking, but via old-fashioned, nice and friendly handwritten paper letters, or a nice and friendly written note on the inside of the building's door. Bring a grill, a little food and drink, and invite people to contribute their own. I don't see how this could be easier. In the worst case only two or three people will come, but that'll be more than this site is likely to achieve.

I trust my utility company way more than I trust a random startup. Even Facebook, who this obviously competes with, doesn't ask for scanned identification documents just to access basic functionality.

And you didn't address the issue of this site only connecting you with other people who happen to also use it. This alone makes this project unable to compete with simple Facebook neighborhood groups.

But let's assume they're super trustworthy and there are people in my neighborhood who use this site. It still looks a lot like an "if you have a hammer, everything looks like a nail" situation. Whatever it is, throw a website and an app at it. Even if a little post-it on the inside of the apartment building's door would do way more for way less.

Replies from: None, ChristianKl
comment by [deleted] · 2015-07-20T16:26:15.802Z · LW(p) · GW(p)

We have hundreds of people in this complex. I suspect at least 50% are more extroverted than me; at uni etc. the ratio was more like 90%. If they did not do the BBQ thing I think I would not have much chance with it...

On the trust point: Facebook does not give you your neighbors' locations, nor a way to check whether someone claiming to be in your neighborhood is genuine.

I sort of agree to the extent that showing the address to everybody in the hood is perhaps too much; people would tell each other when they needed to. But verifying is IMHO a good idea because it efficiently keeps spammers out. Perhaps sharing the address with everybody in the hood is a way to enforce politeness.

As for the last issue, I actually have a way to test it: when I was looking for a babysitter, I put up an ad with a maximally cute baby photo in all 12 of our stairways. I got two applicants, out of 12 stairways times 6 levels times, dunno, like 6 flats each. I will put up ads advertising this site one of these days, and then, if we get like 50 people there, try again. But those 2 applicants were disappointingly few to me. Of course, it could be that the number will be even lower on the site as well.

Replies from: Gunnar_Zarncke, None
comment by Gunnar_Zarncke · 2015-07-21T07:14:22.100Z · LW(p) · GW(p)

If they did not do the BBQ thing I think I would not have much chance with it...

Bystander effect? The more people there are who could throw a party, the less likely it is that any particular one does. Be the exception.

Replies from: None
comment by [deleted] · 2015-07-21T08:14:16.050Z · LW(p) · GW(p)

I thought that related to stuff like accidents or other emergencies. I did a quick Google search and could not find anything that did not relate to people being in trouble and needing help. But I do see it can play a role: a certain kind of waiting for each other to start...

comment by [deleted] · 2015-07-20T19:15:22.162Z · LW(p) · GW(p)

If people don't care when it's a poster on a stairwell, why are they going to start caring when it's a message on a website?

I think "website for local area stuff" has a problem where people think they'd use it far more than they actually would. People don't care about that sort of thing as much as they think they should, and this sort of thing is the digital equivalent of a home exercise machine that people buy, use once and then leave to moulder.

comment by ChristianKl · 2015-07-20T15:29:09.009Z · LW(p) · GW(p)

I trust my utility company way more than I trust a random startup. Even Facebook, who this obviously competes with, doesn't ask for scanned identification documents just to access basic functionality.

But eBay does ask for address verification with postcards. Banks ask for verification of addresses.

Even if a little post-it on the inside of the apartment building's door would do way more for way less.

I don't think post-its on the apartment building's door are an efficient way to communicate. If I could reach all the people in my apartment building digitally, I do think that would be great. The problem is rather that it's unlikely that other people in my apartment building would sign up for such a service.

When I take in packages for neighbors, I would sometimes appreciate a digital way to contact the neighbor.

To effectively implement it in Berlin I think there are three choices:
1) Go to big landlords like degewo. Sell them on the idea that having communities in their buildings is an added benefit. Then let them use the website to communicate information that's currently communicated via notices posted in the building.
2) Cooperate with government programs for neighborhood building in the Soziale Stadt category.
3) Focus on vibrant areas in Friedrichshain and Kreuzberg with a lot of young people who are eager to adopt new technology. Encourage new people who sign up to host get-togethers in their houses.

Of those, 1) is likely the best strategy. It shouldn't cost degewo much money; having a digital channel to their renters might even save them money. Degewo runs ads, so they care about having an image of being different from other landlords.

comment by ChristianKl · 2015-07-20T11:12:30.487Z · LW(p) · GW(p)

and my neighbors would be very creeped out if I would just ring on their door and invite them

How do you know?

Replies from: None
comment by [deleted] · 2015-07-20T12:50:01.466Z · LW(p) · GW(p)

It's like not trying to pick up a girl who has not given you any indicator of interest, like a long look or a smile. Perhaps over-cautious, but it avoids a lot of embarrassment.

comment by ChristianKl · 2015-07-20T10:37:49.847Z · LW(p) · GW(p)

Anyway, I was thinking primarily that Rationalists in Berlin may take an interest in this and help them extend fragnebenan.com to Berlin?

Why do you consider that to be a high leverage action?

Replies from: None
comment by [deleted] · 2015-07-20T11:00:56.946Z · LW(p) · GW(p)

I don't fully understand what high leverage means here; I just think it is cool and helps people to help each other, and extending it to another 3.5M people would be rather neat. I think they want to do it anyway; it could be easier if they have local contacts who have learned some methods of efficiency here and tend to like startups.

Replies from: ChristianKl
comment by ChristianKl · 2015-07-20T11:27:39.432Z · LW(p) · GW(p)

"Help them expand" suggests that you propose to spend time on energy on promoting it.

It seems to me like the website only accepts people from Austria anyway.

Replies from: None
comment by [deleted] · 2015-07-20T12:47:51.339Z · LW(p) · GW(p)

No, I meant helping them with programming or other stuff, such as the postcard mechanism, to be able to offer it elsewhere. Sorry if I did not detail it; I thought it was obvious: if you like the idea, consider joining them as later co-founders (the whole thing only dates from Nov 2014), as part owners, investors, investing mostly sweat equity, that sort of stuff, the usual startup story.

Or maybe that is not so usual, I have no idea, but I was just thinking: if someone calls them and says "I will help you expand your customer base by 150% if you give me 10%" or some other arrangement, is this fairly common for startups?

Replies from: ChristianKl
comment by ChristianKl · 2015-07-20T13:30:12.064Z · LW(p) · GW(p)

Actually helping them with programming and stuff like that is investing time and energy. I do focus my programming time on things I consider high leverage.

You actually live in Vienna, and their programming team is in Vienna, not Berlin. You frequently say that you don't feel that your job has any meaning. You can program.

If they just managed to find investors, they are likely not looking to raise more money at the moment. Even if they were looking for capital, there's nothing specific about the LW Berlin group when it comes to providing angel funding for an Austrian company. In that case it also makes sense to argue why that investment is better than various other possible investments.

Replies from: None
comment by [deleted] · 2015-07-20T16:31:04.897Z · LW(p) · GW(p)

All good points. Also you think there is not much location advantage in extending the service e.g. negotiating a low postcard price with the German Post and so on?

I will not leave a safe job for a startup (I would have considered that before we had a child, now it would be irresponsible) but I do consider contributing in the evenings, this is seriously something I could believe in.

Replies from: ChristianKl
comment by ChristianKl · 2015-07-20T17:07:34.102Z · LW(p) · GW(p)

Also you think there is not much location advantage in extending the service e.g. negotiating a low postcard price with the German Post and so on?

If there are meetings you buy a plane ticket. Vienna isn't that far from Germany.

When it comes to negotiating, the idea is to hire a good salesperson. Most of the people at our meetup are coders who aren't highly skilled salespeople. If I were hiring for that role, I wouldn't pick a person from our LW group.

I will not leave a safe job for a startup (I would have considered that before we had a child, now it would be irresponsible)

Today there's nothing like a real safe job. All companies lay off people from time to time. Working at a job that you like is very useful. It's beneficial for the child to be around a dad who likes his job instead of a dad who hates his job.

Replies from: None, Lumifer
comment by [deleted] · 2015-07-21T07:43:30.927Z · LW(p) · GW(p)

Today there's nothing like a real safe job. All companies lay off people from time to time.

  1. The difference between a job that already pays a fixed salary vs. a startup that may pay dividends or something in the future, if it does not fold, is fairly big.

  2. This is more about expertise than the job itself. Do you know any SAP consultants? They can always find a job. I am not exactly that, but I am in a similar industry. They cannot be outsourced to India because they need local knowledge like accounting rules; and such software is so huge, and the space of potential problems, industry practices and whatnot, plus domain experience, is so big, that in these types of industry experience never hits a point of diminishing marginal returns. People who have done it for 30 years are more valuable than people who have done it for 15.

Abandoning that kind of investment to become yet another dreamy startup Ruby on Rails type of guy? They are a dime a dozen, and young hotshots with 5 years of experience (because there is just not that much to learn) outdo the older ones. It is up or out: you hit it big and then become a Paul Graham and retire from programming into investorship or similar, or you are sooner or later forced out. In that type of world there is no real equivalent of the 50-year-old SAP logistics consultant who is seen as something of a doyen because he has dealt with every kind of crap that can happen in a project at a logistics company.

So it sounds really dangerous to abandon that kind of investment for a new start in something different.

But diversifying, using free time to contribute to a project, could be smart: hedging bets. If the main industry (desktop business software based on domain knowledge and business process experience) somehow collapses, it makes it easier to join a different one (cool, hot, modern web-based stuff). That makes sense: getting a foot in the door of a different industry in one's free time.

Working at a job that you like is very useful. It's beneficial for the child to be around a dad who likes his job instead of a dad who hates his job.

Yes, if not for the risks.

comment by Lumifer · 2015-07-20T17:13:08.747Z · LW(p) · GW(p)

Today there's nothing like a real safe job.

This is an excellent example of the Fallacy of Gray, don't you think? :-)

Replies from: ChristianKl
comment by ChristianKl · 2015-07-20T18:49:51.096Z · LW(p) · GW(p)

That depends on how you think DeVliegendeHollander models the situation in his mind. Modeling people in situations like this isn't trivial. Given the priors I have about him, there's learned helplessness that provides a bias towards simply staying in the status quo.

In general, most decently skilled developers don't stay unemployed for long periods of time if they are in a startup that fails.

If you read his post closely, he says that he doesn't even consider it; the act of considering it would be irresponsible. I don't know enough to say that taking that job would be the right choice for him, but I think he would profit from actually deeply considering it.

Replies from: None, Lumifer
comment by [deleted] · 2015-07-21T07:51:41.630Z · LW(p) · GW(p)

My experience is primarily not in hands-on coding, which in my business software world tends to be really primitive (read data, verify it, sum it up, write it somewhere; it is essentially primitive scripting). I don't think I have seen an algorithm since school as complex as a quicksort, which is first-year exam material, as it is simply not done. In fact, we constantly try to make frameworks where no coding is needed, just configuration, and employ non-programmer domain expert consultants as implementation specialists, but it always fails, because people don't properly understand that once your configuration gets so advanced that it loops over a set of data and makes if-then decisions, then it is coding again: just usually in a poor coding framework. Example

Anyway, it is more about being a general troubleshooter. It is sort of difficult to explain (but actually this is the aspect that is likable about it, which kind of balances the less likable aspects) that I lack a job description. A manager wants a certain kind of information in a regular report. To provide it, there needs to be software (bought or developed or both, i.e. customized), users trained, processes defined, and a bunch of potential other things, and nobody really tells you how to do it, nobody really tells you what to do; there is just the need to achieve a result, an informational output, somehow, with any combination of technology, people and process. This is the good part of it, how open-ended it is, and clearly far more than coding; the coding part is usually primitive.

The bad part is coding the same bloody report for the 100th time, only slightly different... or answering the same stupid support call for the 100th time because people keep making the same mistakes or forget the previous call. Of course, both could be improved by reusable frameworks (often not supported by the primitive technologies used), knowledge bases, or writing user manuals, but that unfortunately does not depend on one guy; the obstacles to that tend to be organizational, usually short-sightedness.

Replies from: ChristianKl
comment by ChristianKl · 2015-07-21T11:17:16.586Z · LW(p) · GW(p)

Okay, then I likely underrated the skill difference between what you are currently doing and the work that exists in a startup like that.

Replies from: None
comment by [deleted] · 2015-07-21T11:38:13.690Z · LW(p) · GW(p)

BTW, do you have any clue where to go on with this kind of skillset if I ever want to change things, or what could be a good Plan B to get a foot in the door of? There are some professions that are really isolated and have little overlap with anything else, such as doctors and lawyers, and I have the impression all this business information management is like that, too. Outsiders know next to nothing about it, and insiders tend to not know much about anything else, professionally at least. Have you ever known a successful SAP, Oracle, NAV, CRM, Baan or whatever consultant who is now good at doing something else? I know one guy who held out for only three years and, I sh.t you not, threw it all away and became an undocumented (illegal) snowboarding trainer in the US Rockies :) But that is probably not the typical trajectory, especially not after a dozen years.

Replies from: Lumifer
comment by Lumifer · 2015-07-21T17:24:28.174Z · LW(p) · GW(p)

where to go on with this kind of skillset

You might want to think about moving into management.

Replies from: None
comment by [deleted] · 2015-07-22T04:45:52.502Z · LW(p) · GW(p)

Wouldn't that mean focusing less on the reliable parts of the thing (software, process) and far more on the people? I would have to motivate people and suchlike, and basically simulate someone who is an extrovert, likes to talk, and has that type of normal personality?

Replies from: Lumifer
comment by Lumifer · 2015-07-22T05:02:17.513Z · LW(p) · GW(p)

That very much depends on the particulars of a managing job and on the company's culture. Your skills as you described them aren't really about programming -- they are about making shit happen. Management is basically about that, except that the higher you go in ranks, the less you do yourself and the more you have other people do for you. It is perfectly possible to be an effective manager without being a pep-rally style extrovert.

comment by Lumifer · 2015-07-20T18:59:44.217Z · LW(p) · GW(p)

That depends on how you think DeVliegendeHollander models the situation in his mind.

No, I do not think that your fallacy depends on what DVH thinks.

there's learned helplessness

You're confusing risk aversion and learned helplessness.

Replies from: Richard_Kennaway, ChristianKl
comment by Richard_Kennaway · 2015-07-21T08:34:22.623Z · LW(p) · GW(p)

You're confusing risk aversion and learned helplessness.

Another English irregular verb.

"I can see that this won't work. You are risk-averse. He exhibits learned helplessness."

comment by ChristianKl · 2015-07-21T22:02:51.405Z · LW(p) · GW(p)

No, I do not think that your fallacy depends on what DVH thinks.

If I'm saying something to have an effect on another person, then the quality of my reasoning process depends on whether my model of the other person is correct.

It's like debugging a phobia at a LW meetup. People complain that the language isn't logical, but in the end the phobia is gone. The fact that the language superficially pattern-matches to fallacies is beside the point as long as it has the desired consequences.

You're confusing risk aversion and learned helplessness.

No, I'm talking to a person who at least self-labels as schizoid and about whom I have more information beyond that.

If I thought the issue were risk aversion and I wanted to convince him, I would appeal to the value of courage. Risk aversion doesn't make people refuse to consider an option, or see the very act of considering it as irresponsible.


What result did I achieve here? I got someone who hates his job to think about whether to learn a different skillset to switch to a more enjoyable job, and to ask for advice about what he could do. He shows more agency about his situation.

Replies from: Lumifer
comment by Lumifer · 2015-07-21T23:42:14.450Z · LW(p) · GW(p)

If I'm saying something to have an effect on another person, then the quality of my reasoning process depends on whether my model of the other person is correct.

LOL. Let me reformulate that: "If I'm trying to manipulate another person, I can lie, and that's 'beside the point as long as it has the desired consequences'". Right? X-)

Replies from: ChristianKl
comment by ChristianKl · 2015-07-22T10:57:16.129Z · LW(p) · GW(p)

Saying "There's no real safe job" is in no lie. It true on it's surface. If my mental model of DVH is correct it leads to an update in a direction that more in line with reality and saying things to move other people to a more accurate way of seeing the world isn't lying.

Replies from: Lumifer
comment by Lumifer · 2015-07-22T15:15:23.506Z · LW(p) · GW(p)

Ahem. So you are saying that if you believe that your lie is justified, it's no lie.

saying things to move other people to a more accurate way of seeing the world isn't lying.

Let's try that on an example. Say, Alice is dating Bob, but you think that Bob is a dirtbag and not good for Alice. You want to move Alice "to a more accurate way of seeing the world" and so you invent a story about how Bob has a hobby of kicking kittens and is an active poster on revenge porn forums. You're saying that this would not be lying because it will move Alice to a more accurate way of seeing Bob. Well...

Replies from: ChristianKl
comment by ChristianKl · 2015-07-22T15:33:23.826Z · LW(p) · GW(p)

No. There are two factors:
1) It's true. There are really no 100% safe jobs.
2) The likely update by the audience is in the direction of a more accurate belief.

Getting Alice to believe that Bob is an active poster on revenge porn forums by saying so likely doesn't fulfill either criterion 1) or criterion 2).

Replies from: Good_Burning_Plastic
comment by Good_Burning_Plastic · 2015-07-23T08:44:23.539Z · LW(p) · GW(p)

1) It's true. There are really no 100% safe jobs.

There is really no 100% safe anything, but I don't think that when DVH said "I will not leave a safe job for a startup" by "safe" he meant "100% safe".

Replies from: ChristianKl
comment by ChristianKl · 2015-07-23T09:08:29.285Z · LW(p) · GW(p)

There is really no 100% safe anything

That doesn't prevent the statement from being true. The fact that there's no 100% safe anything doesn't turn the statement into a lie, while the example that Lumifer provided is clear lying.

meant

I didn't focus on what he "meant" but on my idea of what I believed his mental model to be.

I don't think DVH's mental models have become inaccurate in any way as a result of my communication. He didn't pick up the belief "startups are as safe as my current job". I didn't intend to get him to pick up that belief either. I don't believe that statement myself.

My statement thus fulfills the two criteria:
1) It's true on its surface.
2) It didn't lead to inaccurate beliefs in the person I'm talking with.

Statements that fulfill both of those criteria aren't lies.

Replies from: Jiro, Good_Burning_Plastic
comment by Jiro · 2015-07-23T15:27:47.145Z · LW(p) · GW(p)

Statements that fulfill both of those criteria aren't lies.

That would mean that if you say something that is literally true but intended to mislead, and someone figures that out, it's not a lie.

Replies from: ChristianKl
comment by ChristianKl · 2015-07-23T15:39:05.782Z · LW(p) · GW(p)

I have no problem with including intentions as a third criterion, but in general "check that your intentions aren't to mislead" is very similar to "check that you reach an outcome where the audience isn't misled", so I don't list it separately.

comment by Good_Burning_Plastic · 2015-07-23T10:05:24.056Z · LW(p) · GW(p)

That doesn't prevent the statement from being true.

It doesn't (though it does mostly prevent it from being useful), but the statement you made upthread was not that one. It was "Today there's nothing like a real safe job", in which context "safe" would normally be taken to mean something like "reasonably safe", not "exactly 100% safe".

1) It's true on its surface.

What do you mean by "on its surface"? What matters is whether it's true in its most likely reasonable interpretation in its context.

Meh. Enough with the wordplay; let's get quantitative. What do you think P(DVH will lose his current job before he wants to | he doesn't leave it for a startup) is? What do you think he thinks it is?

Replies from: ChristianKl
comment by ChristianKl · 2015-07-23T10:33:49.892Z · LW(p) · GW(p)

in which context "safe" would normally be taken to mean something like "reasonably safe", not "exactly 100% safe".

I didn't just say "safe" I added the qualifier "real" to it. I also started the sentence with "today" with makes it more like a general platitude. I specifically didn't say your job isn't safe but made the general statement that no job is really safe.

It happens to be a general platitude commonly repeated in popular culture.

What do you think he thinks it is?

I think he didn't have a probability estimate for that in his mind at the time I was writing those lines. When you assume he had such a thing you miss the point of the exercise.

comment by mrexpresso · 2015-07-23T21:28:01.026Z · LW(p) · GW(p)

Does anyone know of a program that calculates Bayesian probabilities?

Replies from: gwern, Username
comment by gwern · 2015-07-24T03:00:52.554Z · LW(p) · GW(p)

This is far too general a question; there are many programs for calculating many things with 'Bayesian' in their name.

Replies from: mrexpresso
comment by mrexpresso · 2015-07-24T12:55:12.753Z · LW(p) · GW(p)

can you give me an example?

Replies from: Vaniver
comment by Vaniver · 2015-07-24T13:23:33.618Z · LW(p) · GW(p)

One example is BUGS, which uses Gibbs sampling to do Bayesian inference in complicated statistical models.

Tell us what you have, and what you'd like to turn it into.
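If the goal is just "apply Bayes' theorem to a discrete set of hypotheses", no special software is needed; here is a minimal sketch in plain Python (the example numbers are my own):

    def bayes_update(prior, likelihood):
        # prior: {hypothesis: P(h)}; likelihood: {hypothesis: P(data | h)}
        # Returns the posterior {hypothesis: P(h | data)}.
        unnorm = {h: prior[h] * likelihood[h] for h in prior}
        z = sum(unnorm.values())
        return {h: p / z for h, p in unnorm.items()}

    # The classic example: a test with 99% sensitivity and 95% specificity
    # for a condition with a 1% base rate.
    post = bayes_update(prior={"sick": 0.01, "healthy": 0.99},
                        likelihood={"sick": 0.99, "healthy": 0.05})
    print(post["sick"])  # ~0.167: a positive result is weaker than it looks

Tools like BUGS earn their keep only when the model has continuous parameters and many variables, so the posterior has no closed form and must be sampled.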

Replies from: mrexpresso
comment by mrexpresso · 2015-07-24T13:33:13.453Z · LW(p) · GW(p)

thanks for the info!

Are there any lists (Wikipedia-style lists) of all the programs that calculate Bayesian probabilities?

Replies from: Risto_Saarelma
comment by Risto_Saarelma · 2015-07-25T07:36:18.411Z · LW(p) · GW(p)

Turns out there is. Probably not all of the programs though.

Are you trying to do something specific or are you just curious about learning about Bayesian statistics? The software on that list probably won't be that useful unless you already know a bit about statistics theory and have a specific problem you want to solve.

Replies from: mrexpresso
comment by mrexpresso · 2015-07-25T18:37:07.762Z · LW(p) · GW(p)

thanks!

comment by Username · 2015-08-07T12:38:38.503Z · LW(p) · GW(p)

R

comment by pianoforte611 · 2015-07-24T21:29:23.213Z · LW(p) · GW(p)

Have any snake oil salesmen been right?

I usually immediately disregard anyone who has the following cluster of beliefs:

1: The relevant experts are wrong.
2: I have no relevant expertise in this area.
3: My product/idea/invention is amazing in a world-changing way.
4: I could prove it if only the man didn't keep me down.

Characteristic 2 is somewhat optional, but I'm not sure about it. Examples of snake oil ideas include energy healing, salt water as car fuel and people who believe in a flat earth. Ignoring 2, Ludwig Boltzmann is not an example (he did not believe that proof of atoms was being suppressed).

I think this does a good job of screening out probably dumb ideas, but are there any false positives?

Replies from: knb
comment by knb · 2015-07-25T00:33:18.236Z · LW(p) · GW(p)

Have any snake oil salesmen been right?

No, by definition. Snake oil is defined as "does not work."

But there are examples of denigrated alternative treatments that actually worked to some extent: acupuncture, meditation, aromatherapy etc. Low-carb diets were denigrated for a long time but they've been shown to work at least as well as other diets. Fecal transplants have a long, weird history as an alternative therapy, including things like Bedouins eating camel feces to combat certain infections. The FDA was for a long time very restrictive and skeptical about fecal transplants in spite of lots of positive evidence of their efficacy in certain infections.

1: The relevant experts are wrong.
2: I have no relevant expertise in this area.
3: My product/idea/invention is amazing in a world-changing way.
4: I could prove it if only the man didn't keep me down.

A pretty good heuristic, but it's worthwhile to have some open-minded people who investigate these things.

Replies from: pianoforte611
comment by pianoforte611 · 2015-07-25T01:11:03.148Z · LW(p) · GW(p)

Thanks for the examples:

acupuncture, meditation, aromatherapy etc. Low-carb diets were denigrated for a long time but they've been shown to work at least as well as other diets

None of these seem to fulfill 3. They seem to fall into the category of somewhat decent with lots of exaggerated claims and enthusiastic followers.

Fecal transplants are a great example, although Wikipedia says that most historical fecal therapies were consumed orally, and I don't know if those work (I doubt it). Also, it doesn't really fulfill 2: it was doctors who first pioneered it when it was a weird fringe treatment. And thinking something is weird/extreme and fringe is different from thinking it's a crackpot idea. But still a good example.

comment by [deleted] · 2015-07-23T23:48:53.640Z · LW(p) · GW(p)

The healthcare startup scene surprises me.

Why doesn't the free home doctor service put free (bulk-billed) medical clinics out of business?

Why did MetaMed go out of business?

Replies from: jam_brand, gjm
comment by gjm · 2015-07-24T11:22:13.633Z · LW(p) · GW(p)

MetaMed's service was expensive. I would guess they didn't find enough takers.

comment by DataPacRat · 2015-07-23T19:42:30.719Z · LW(p) · GW(p)

Coincidence or Correlation?

A couple of months ago, I postponed an overnight camping trip due to a gut feeling. I still haven't taken that particular trip, having focused on other activities.

Today, my local newspaper is reporting that a body was found in that park this morning. My natural human instinct is to think "That could have been me!"... but, of course, instincts are less trustworthy than other forms of thinking.

What are the odds that I'm a low-probability-branch Everett Immortality survivor? Do you think I should pay measurably more attention to such gut feelings in the future? What lessons, if any, are able to be drawn from these circumstances?

Replies from: Elo, Lumifer, None
comment by Elo · 2015-07-24T00:16:37.631Z · LW(p) · GW(p)

This sounds like a case of confirmation bias, in that if your "gut feeling" had never been confirmed as something, you probably wouldn't remember having had it. You could have been waiting every day for the rest of your life and still not have gotten the gut-success feeling.

That doesn't help you recalibrate, but I wouldn't listen to the gut any more or less in the future.

comment by Lumifer · 2015-07-23T19:47:46.372Z · LW(p) · GW(p)

What are the odds that I'm a low-probability-branch Everett Immortality survivor?

Since you are posting, you know you are an Everett branch survivor. Whether that branch is low-probability is, of course, impossible to tell.

Do you think I should pay measurably more attention to such gut feelings in the future?

That depends on gut feelings, but I see no reason to update based on this particular incident.

What lessons, if any, are able to be drawn from these circumstances?

That you should not read the crime / police blotter sections of newspapers.

Replies from: DataPacRat
comment by DataPacRat · 2015-07-23T21:17:00.639Z · LW(p) · GW(p)

Whether that branch is low-probability is, of course, impossible to tell.

Hm... how sure should anyone be of that impossibility? For example, if the number of Everett branches isn't infinite, but merely, say, 10^120, then wouldn't it be hypothetically possible for a worldline to have relatively few other worldlines similar enough to interact with it on the quantum level, with macroscopically observable effects?

I see no reason to update based on this particular incident.

Fair enough.

you should not read the crime / police blotter sections of newspapers.

I don't; the local region has a small enough population that the main newspaper has only a single section to cover all local stories. Unsubscribing from the RSS feed with local crime stories would also unsubscribe me from local politics, events, fluff, and so forth.

Replies from: Lumifer
comment by Lumifer · 2015-07-23T21:25:34.438Z · LW(p) · GW(p)

merely, say, 10^120

The greatness of LW.

merely 10^120 :-D

I don't

Clearly, you do. I wasn't suggesting wearing blinders not to notice them, I suggested not reading them.

comment by [deleted] · 2015-07-26T06:49:06.629Z · LW(p) · GW(p)

But the penultimate question is kinda answerable, isn't it? Have a Gut Feeling Journal and see for yourself whether GF works. It should be useful for calibration, at least, and also fun.

Also, DO pay more attention to crime reports and integrate them into your planning. I would have said seek out such reports, were your newspapers more diversified.

comment by Lumifer · 2015-07-23T15:08:03.226Z · LW(p) · GW(p)

The clusterfuck in medical science with some well-intentioned attempts to do it better, not actually well, but somewhat better.

Edited to add: A follow-up on the deworming wars (which might be of interest to EAs, as I think deworming was considered to be a very effective intervention) in this blog -- and read the discussion in the comments.

Replies from: Good_Burning_Plastic
comment by Good_Burning_Plastic · 2015-07-23T17:38:43.164Z · LW(p) · GW(p)

From that article:

0.745 was incorrectly truncated to 0.74 rather than 0.75

well...

comment by SilentCal · 2015-07-22T18:59:19.922Z · LW(p) · GW(p)

As far as I can tell, utility functions are not standard in financial planning. I think this is dumb (that is, the neglect is dumb; utility functions are smart). Am I right? Sure, you don't know the correct utility function, but see the case for made-up numbers. My guess is to use log of wealth with extra loss-aversion penalties, where 'wealth' is something between 'net worth' and 'disposable savings'.

I had reason to think about this recently from observing a debate over a certain mean/volatility tradeoff. The participants didn't seem to realize that the right decision depends on the size of the stakes. Now you certainly could realize this intuitively, but an expected-utility calculation would guarantee that you'd pick up on it. Moreover, I tried running the problem with made-up numbers, and it became clear that any financially healthy person in that situation should take the riskier, higher-mean approach -- the opposite of the consensus conclusion.
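
For concreteness, a minimal sketch with invented numbers (log utility, no loss-aversion penalty): at low wealth the sure option wins; at higher wealth the riskier, higher-mean option does.

```python
import math

def expected_log_utility(wealth, gamble):
    """Expected log-wealth utility of a gamble given as (probability, payoff) pairs."""
    return sum(p * math.log(wealth + payoff) for p, payoff in gamble)

# Invented choice: a sure $1,000 vs. a 50% shot at $3,000 (higher mean, riskier).
safe  = [(1.0, 1_000)]
risky = [(0.5, 3_000), (0.5, 0)]

for wealth in (500, 5_000, 50_000):
    better = "risky" if expected_log_utility(wealth, risky) > expected_log_utility(wealth, safe) else "safe"
    print(f"wealth ${wealth:>6,}: prefer {better}")
```

Same gamble, opposite answers, purely from the stakes-to-wealth ratio.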

Replies from: Lumifer
comment by Lumifer · 2015-07-22T19:18:58.589Z · LW(p) · GW(p)

I think this is dumb (that is, the neglect is dumb; utility functions are smart). Am I right?

Not in the first approximation, because utility is (hopefully) a monotonic function, so you would end up in the same spot regardless of whether you're maximizing utility or maximizing wealth.

The participants didn't seem to realize that the right decision depends on the size of the stakes.

Well, the first thing the decision depends on is risk aversion, and there is no single right one-size-fits-all risk aversion parameter (or function).

But yes, you are correct in that the size of the bet (say, as % of your total wealth) influences the risk-reward trade-off, though I suspect it's usually rolled into the risk aversion.

Note that the market prices risks on the bet-is-a-tiny-percentage-of-total-wealth basis.

Replies from: SilentCal
comment by SilentCal · 2015-07-22T20:22:23.383Z · LW(p) · GW(p)

you would end up in the same spot regardless of whether you're maximizing utility or maximizing wealth.

But under conditions of uncertainty, expected utility is not a monotonic function of expected wealth.
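
One pair of invented numbers is enough to see this:

```python
import math

# A sure $100 vs. a 50/50 gamble on $50 or $160: the gamble has the higher
# expected wealth ($105) but the lower expected log utility.
sure   = math.log(100)                             # ~4.605
gamble = 0.5 * math.log(50) + 0.5 * math.log(160)  # ~4.494
print(sure > gamble)                               # True
```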

Well, the first thing the decision depends on is risk aversion, and there is no single right one-size-fits-all risk aversion parameter (or function).

I'll defer to the SSC link on why I think it would be better to make one up -- or rather, make up a utility function that incorporates it.

Note that the market prices risks on the bet-is-a-tiny-percentage-of-total-wealth basis.

Indeed. The case in question wasn't a market-priced risk, though, as the reward was a potential tax advantage.

Replies from: Lumifer
comment by Lumifer · 2015-07-22T20:32:22.062Z · LW(p) · GW(p)

But under conditions of uncertainty, expected utility is not a monotonic function of expected wealth.

Under uncertainty, you must have a risk aversion parameter -- even if you try to avoid specifying one, your choice will point to an implicit one.

You can also use the concept of the certainty equivalent to sorta side-step the uncertainty.
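
For instance, under log utility the certainty equivalent is exp(E[log W]) -- a minimal sketch with an invented gamble:

```python
import math

# 50/50 gamble between $50,000 and $160,000.
outcomes = [(0.5, 50_000), (0.5, 160_000)]

expected_wealth = sum(p * w for p, w in outcomes)           # $105,000
expected_log    = sum(p * math.log(w) for p, w in outcomes)
certainty_equiv = math.exp(expected_log)                    # ~$89,443

print(f"expected wealth:      ${expected_wealth:,.0f}")
print(f"certainty equivalent: ${certainty_equiv:,.0f}")
```

The roughly $15,500 gap between the two numbers is the price this particular utility function puts on the risk.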

Replies from: SilentCal
comment by SilentCal · 2015-07-22T22:19:55.330Z · LW(p) · GW(p)

Under uncertainty, you must have a risk aversion parameter -- even if you try to avoid specifying one, your choice will point to an implicit one.

A made-up risk aversion parameter might also be a reasonable way to go about things, though making up a utility function and using the implicit risk aversion from that seems easier. The personal financial planning advice I've seen doesn't use any quantitative approach whatsoever to price risk, which leads to people just going with their gut, which is what I'm calling dumb.

Replies from: Lumifer
comment by Lumifer · 2015-07-23T01:33:25.548Z · LW(p) · GW(p)

making up a utility function and using the implicit risk aversion from that seems easier.

Um, I feel there is some confusion here. First, let's distinguish what I'll call a broad utility function from a narrow utility function. The argument to the broad utility function is the whole state of the universe, and it outputs how much you like this particular state of the entire world. The argument to the narrow utility function is a specific, certain amount of something, usually money, and it outputs how much you like this something regardless of the state of the rest of the world.

The broad utility function does include risk aversion, but it is... not very practical.

The narrow utility function is quite separate from risk aversion, and neither implies the other. And they are different conceptually -- the narrow utility function determines how much you like/need something, while the risk aversion function determines your trade-offs between value and uncertainty.

The personal financial planning advice I've seen doesn't use any quantitative approach whatsoever to price risk

Well, I don't expect personal financial planning advice to be of high quality (unless you're what's called a "high net worth individual" :-D), but its recommendations usually imply a certain price of risk. For example, if a financial planner recommends a 60% stocks / 40% bonds mix over a 100% stocks portfolio, that implies a specific risk aversion parameter.
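
For illustration only -- the return and volatility numbers below are invented, and the Merton rule is just one standard way to back the parameter out:

```python
# Invented market assumptions, for illustration only.
equity_premium = 0.05   # mu - r: stocks' expected excess return over bonds/cash
sigma          = 0.18   # stock volatility

def implied_gamma(stock_fraction):
    """Relative risk aversion implied by a stock allocation via the Merton rule:
    w = (mu - r) / (gamma * sigma^2)  =>  gamma = (mu - r) / (w * sigma^2)."""
    return equity_premium / (stock_fraction * sigma ** 2)

for w in (0.6, 1.0):
    print(f"{w:.0%} stocks implies gamma ~ {implied_gamma(w):.2f}")
```

Under these made-up numbers, the 60/40 advice implies gamma of roughly 2.6, while 100% stocks implies roughly 1.5 (log utility corresponds to gamma = 1).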

comment by [deleted] · 2015-07-20T08:21:29.943Z · LW(p) · GW(p)

I have realized I don't understand the first thing about evolutionary psychology. I used to think the selfish gene of a male will want to get planted into as many wombs as possible, and that this is our most basic drive. But actually, any gene that would result in having many children but not so many great-great-grandchildren -- due to the "quality" of those children being low -- would get crowded out by genes that do produce great-great-grandchildren. Having 17 sons of the Mr. Bean type may not be such a big reproductive success down the road.

Since most women managed to reproduce, we can assume a winning strategy is having a large number of daughters, but perhaps for sons the selfish gene may want quality and status more than quantity. Anecdotally, in more traditional societies what men typically want is not a huge army of children but a high-status male heir, a "crown prince". Arab men traditionally rename themselves after their first son: Musa's father literally becomes Abu-Musa, "father of Musa". This sort of suggests they are less interested in quantity...

At this point I must admit I no longer have any idea what the basic biological male drive is. It is not simply unrestricted polygamy and racking up as many notches as possible. Is it some sort of sweet spot between quantity and quality, where in quality not only the genetic quality of the mother matters but also the education of the sons (i.e. investing in fathering), the amount of status that can be inherited, and so on? That would suggest more of a monogamous drive.

Besides, to make it really complicated: while the ancestral father's genes may "assume" his daughters will be able to reproduce to full capacity, there is still value in parenting and in quality generally, because if the daughter manages to catch a high-quality, attractive man, her sons may be higher-quality, more attractive guys, and thus have a higher quantity of offspring -- so the man's "be a good father to my daughter" genes win at the great-grandchildren level!

This kind of modelling actually sounds like something doable with mathematics, something like game theory, right? We could figure out what the utility function of the selfish gene looks like game-theoretically. Was it done already?
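
For a feel of what such a model might look like, here is a toy version; the survival function and constants are entirely invented, and the only honest takeaway is that an interior "sweet spot" appears whenever per-child quality falls off steeply enough with family size:

```python
def expected_reproducing_children(n, k=0.15):
    """With n children sharing a fixed investment budget, assume (arbitrarily)
    that each child itself reproduces with probability 1 / (1 + (k*n)**2):
    quality drops off steeply once investment per child gets thin."""
    return n / (1 + (k * n) ** 2)

best = max(range(1, 21), key=expected_reproducing_children)
for n in (1, best, 20):
    print(f"{n:>2} children -> {expected_reproducing_children(n):.2f} expected to reproduce")
# Under these made-up assumptions the optimum is 7 children, not 1 and not 20.
```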

Replies from: knb, Emily, NancyLebovitz, ChristianKl, ChristianKl, None
comment by knb · 2015-07-20T09:41:21.500Z · LW(p) · GW(p)

I have realized I don't understand the first thing about evolutionary psychology.

If you're really curious, I recommend picking up an evolutionary psychology textbook rather than speculating/seeking feedback on speculations from non-experts. Lots of people have strong opinions about Evo Psych without actually having much real knowledge about the discipline.

Anecdotally, in more traditional societies what men typically want is not a huge army of children but a high-status male heir

I don't really believe in this anecdote; large numbers of children are definitely a point of pride in traditional cultures.

Since most women managed to reproduce, we can assume a winning strategy is having a large number of daughters

Surely you don't think daughters are more reproductively successful than sons on average?

Replies from: None
comment by [deleted] · 2015-07-20T09:56:40.609Z · LW(p) · GW(p)

Surely you don't think daughters are more reproductively successful than sons on average?

Surely I do -- isn't it common knowledge by now that, among our ancestors, about 40% of men and 80% of women managed to reproduce?

Replies from: D_Malik
comment by D_Malik · 2015-07-20T10:50:36.471Z · LW(p) · GW(p)

Every child has both a mother and a father, and there are about as many men as women, so the mean number of children is about the same for males as for females. But there are more childless men than childless women, because polygyny is more common than polyandry, ultimately because of Bateman's principle.
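
A crude simulation makes the asymmetry visible (the "40% of men" cutoff is borrowed from the figure quoted upthread and is otherwise arbitrary):

```python
import random

random.seed(0)
N = 10_000                 # men and women each
fathers = [0] * N

# Crude polygyny: every woman has exactly 2 children, but only the
# "top" 40% of men are ever chosen as fathers.
eligible = int(0.4 * N)
for _ in range(2 * N):     # one draw per child
    fathers[random.randrange(eligible)] += 1

print("mean children per man:", sum(fathers) / N)               # 2.0, same as women's
print("childless men:        ", sum(c == 0 for c in fathers) / N)  # ~60%
print("childless women:       0.0 (by construction)")
```

Equal means, very unequal variance -- which is Bateman's point.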

comment by Emily · 2015-07-20T09:07:09.106Z · LW(p) · GW(p)

Since most women managed to reproduce, we can assume a winning strategy is having a large number of daughters

But if everyone adopts this strategy, in a few generations women will by far outnumber men, and suddenly having sons is a brilliant strategy instead. You have to think about what strategies are stable in the population of strategies - as you begin to point towards with the comments about game theory. Yes, game theory has of course been used to look at this type of stuff. (I'm certainly not an expert so I won't get into details on how.)
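
This frequency dependence is Fisher's principle: since every child has exactly one mother and one father, the rarer sex earns more descendants per head. A back-of-the-envelope check, with toy numbers:

```python
# Toy generation: 800 women and 200 men produce 1,000 children between them.
# Every child credits exactly one mother and one father.
women, men, children = 800, 200, 1000

print("expected children per woman:", children / women)  # 1.25
print("expected children per man:  ", children / men)    # 5.0
# With men this scarce, a "have sons" gene out-earns a "have daughters" gene
# until the sex ratio swings back toward 1:1.
```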

If you haven't read The Selfish Gene by Richard Dawkins, it's a fun read and great for getting into this subject matter. How The Mind Works by Steven Pinker is also a nice readable/popular intro to evolutionary psychology and covers some of the topics you're thinking about here.

comment by NancyLebovitz · 2015-07-20T14:16:09.481Z · LW(p) · GW(p)

As I understand it, humans are on a spectrum between having the maximum number of offspring with low parental investment and having a smaller number with high parental investment. There are indicators (size difference between the sexes, size of testes, probably more) which put us about a third of the way towards the high-investment end. So, there's infidelity and monogamy, and parents putting a lot into their kids and parents abandoning their kids.

Humans are also strongly influenced by culture, so you also get customs like giving some of your children to a religion which requires celibacy, or putting your daughters at risk of dowry murder.

Biology is complicated. Applying simple principles like males having a higher risk of not having descendants won't get you very far.

I'm reminded of the idea that anti-oxidants are good for you. It just didn't have enough detail (which anti-oxidants? how much? how can you tell whether you're making things better?).

Replies from: James_Miller
comment by James_Miller · 2015-07-20T14:34:39.695Z · LW(p) · GW(p)

Humans are also strongly influenced by culture

Or cultural variation is mostly determined by genetic variation. It's hard to empirically distinguish the two.

Replies from: ChristianKl, Richard_Kennaway
comment by ChristianKl · 2015-07-20T17:32:00.121Z · LW(p) · GW(p)

You can do historical comparisons. Five hundred years ago, people in Europe acted very differently than they do today. On the other hand, their genes didn't change that much.

comment by Richard_Kennaway · 2015-07-21T08:26:22.365Z · LW(p) · GW(p)

Or cultural variation is mostly determined by genetic variation. It's hard to empirically distinguish the two.

Is it even theoretically possible? If there are causal influences in both directions between X and Y, is there a meaningful way to assign relative sizes to the two directions? Especially if, as here, X and Y are each complex things consisting of many parts, and the real causal diagram consists of two large clouds with many arrows going both ways between them.

comment by ChristianKl · 2015-07-20T10:27:35.575Z · LW(p) · GW(p)

There is no "the selfish gene of the man". There is especially no "the selfish gene of the woman", given that all the genes in women are also in men. Humans have between 20,000 and 25,000 genes, and all of them are "selfish".

Replies from: None
comment by [deleted] · 2015-07-20T12:23:54.172Z · LW(p) · GW(p)

Yet compared to women, men have fewer copies of some genes. Perhaps there are "selfish chromosome parts"? :)

Replies from: ChristianKl
comment by ChristianKl · 2015-07-20T14:10:18.017Z · LW(p) · GW(p)

A gene on the X chromosome "wants" to be copied regardless of whether it's in a male or a female body. Thinking in terms of the interests of genes means not thinking only on the level of an individual specimen.

comment by ChristianKl · 2015-07-20T10:41:21.421Z · LW(p) · GW(p)

Anecdotally, in more traditional societies what men typically want is not a huge army of children but a high-status male heir

Most of evolution happened in hunter gatherer arrangements not in traditional farmer cultures.

comment by [deleted] · 2015-07-20T18:37:56.576Z · LW(p) · GW(p)

No background in evolutionary psychology, but I'm wondering to what degree 'good fatherhood' can be encoded in genes at all. Perhaps maximal reproduction is the most strongly genetically preprogrammed goal in males, and it's cultural mechanisms that limit this drive (via taboos, marriage etc.) due to advantages for the culture as a whole.

Replies from: Lumifer, None
comment by Lumifer · 2015-07-20T18:54:13.189Z · LW(p) · GW(p)

I'm wondering to what degree 'good fatherhood' can be encoded in genes at all

Why not? A male's genes do not succeed when he impregnates a woman -- they only succeed when the child grows to puberty and reproduces. If the presence of a father reduces e.g. infant mortality, that's a strong evolutionary factor.

Replies from: None
comment by [deleted] · 2015-07-20T21:55:50.216Z · LW(p) · GW(p)

But how significant was the male father role among hunter-gatherers for a good upbringing of a child? If that task was, for example, shared between the group members (which I think I've read it was), then it's questionable whether knowing one's genetic father or not made a significant difference. One hint that this might have been the default mode among hunter-gatherers is that monogamy is a minority marriage type among human cultures today [1] (meaning that if polygamy was prevalent, it would have been difficult to ensure that all partners of an alpha male remained faithful). I also think I've read that among many indigenous peoples, women are readily shared among the alpha males. Besides that, it seems that most things that have to do with reproduction considerations are either on the physical-attraction level or on a very high cognitive level (Are there enough resources for the upbringing? Is the mother's environment healthy?). Predetermined high-level stuff is memetically encoded rather than genetically (or it is just common sense our cognitive abilities enable us to have).

Edited for clarity. Please consider removing the downvote if it makes sense now to you.

Replies from: James_Miller, Lumifer, VoiceOfRa
comment by James_Miller · 2015-07-20T23:01:26.877Z · LW(p) · GW(p)

Our (nearly) cavemen-optimized brains fear our children will starve or be eaten if we don't help them. Sexual jealousy is probably genetically encoded, meaning lots of men want their mates to be exclusive to them. The following is pure speculation with absolutely no evidence behind it: but I wonder if a problem with open relationships involving couples planning on having kids is that the man might (for genetic reasons) care less for a child even if he knows with certainty that the child is his. A gene that caused a man to care more for children whose mothers were thought to be sexually exclusive with the man might increase reproductive fitness.

Replies from: VoiceOfRa, None, None
comment by VoiceOfRa · 2015-07-21T07:37:37.601Z · LW(p) · GW(p)

but I wonder if a problem with open relationships involving couples planning on having kids is that the man might (for genetic reasons) care less for a child even if he knows with certainty that the child is his.

Yes, until recently it was impossible to know with certainty that the child was his.

And even today feminist organizations are doing their best to keep it that way. For example, they managed to criminalize paternity testing in France.

Replies from: Username
comment by Username · 2015-07-22T11:23:14.321Z · LW(p) · GW(p)

they managed to criminalize paternity testing in France

By that standard, sex is also criminalized in many countries -- after all, it's only legal if the participants consent.

Personally, I'm not a big fan of the French law, but your interpretation of facts seems a little... creative.

Replies from: Jiro
comment by Jiro · 2015-07-22T22:08:23.592Z · LW(p) · GW(p)

They criminalized it for the main purpose that one would need to use it for.

comment by [deleted] · 2015-07-24T14:06:23.854Z · LW(p) · GW(p)

I'm still unsure why I'm being so vehemently downvoted for taking up this position. Perhaps it's because people confuse it with men's-rights extremism? Why is the possibility being completely disregarded here that it's only memes and a small set of genetic predispositions (such as reward from helping others via empathy, and strong empathy for small humans) that jump-start decent behavior? I think I've read somewhere that kittens learn how to groom by watching other cats. If other mammals can't fully encode basic needs such as hygiene genetically, how could complex human behaviors be encoded? An important implication would be that culture carries much more value than we would otherwise attribute to it.

comment by [deleted] · 2015-07-21T03:00:17.533Z · LW(p) · GW(p)

Our (nearly) cavemen-optimized brains fear our children will starve or be eaten if we don't help them.

There is a strong predetermined empathy for cute things with big eyes, yes, but is there predetermined high-level thinking about sex and offspring? I rather doubt it, while the OP appears to assume it as a given fact.

comment by Lumifer · 2015-07-21T01:27:20.218Z · LW(p) · GW(p)

But how significant is the 'traditional' male father role for a good upbringing of a child?

If the traditional male role involves making sure the pregnant or nursing woman does not starve, very.

Monogamy is a minority marriage type among human cultures

Heh. How about among successful human cultures? :-D

Replies from: None
comment by [deleted] · 2015-07-21T02:34:11.822Z · LW(p) · GW(p)

See the link above; it's not clear that the food provider role of males was actually widely present in prehistoric people, and the upbringing of the children might have been predominantly a task carried out by the entire group, not by a father/mother family structure.

Heh. How about among successful human cultures? :-D

Not sure what causes your amusement. Isn't there still the possibility that this is memetics rather than genetics?

Replies from: Lumifer
comment by Lumifer · 2015-07-21T02:45:20.440Z · LW(p) · GW(p)

it's not clear that the food provider role of males was actually widely present in prehistoric people

I don't see support of this statement in your linked text (which, by the way, dips into politically correct idiocy a bit too often for my liking).

Not sure what causes your amusement.

I'm easily amused :-P

Isn't there still the possibility that this is memetics rather than genetics?

What exactly is "this"? Are you saying that there is no genetic basis for males to be attached to their offspring and any attachment one might observe is entirely cultural?

Replies from: None
comment by [deleted] · 2015-07-21T03:16:36.794Z · LW(p) · GW(p)

Here is the part I'm referring to: "Nor does the ethnographic record support the idea of sedentary women staying home with the kids and waiting for food to show up with the hubby. We know that women hunt in many cultures, and even if the division of labor means that they are the plant gatherers, they work hard and move around; note this picture (Zihlman 1981:92) of a !Kung woman on a gathering trip from camp, carrying the child and the bag of plants obtained and seven months pregnant! She is averaging many km per day in obtaining the needed resources."

What exactly is "this"? Are you saying that there is no genetic basis for males to be attached to their offspring and any attachment one might observe is entirely cultural?

Attachment to cute babies is clearly genetically predetermined, but I'm trying to argue that it's not at all clear that considerations of whether or not to have sex are genetically determined by anything other than physical attraction.

Replies from: Lumifer
comment by Lumifer · 2015-07-21T03:35:07.990Z · LW(p) · GW(p)

Here is the part I'm referring to

Yes, and how does it show that "it's not clear that the food provider role of males was actually widely present in prehistoric people"? The observation that women "work hard and move around" does not support the notion that they can feed themselves and their kids without any help from males.

I'm trying to argue that it's not at all clear that considerations of whether or not to have sex are genetically determined by anything other than physical attraction.

I am not sure I understand. Are you saying that the only genetic imperative for males is to fuck anything that moves and that any constraints on that are solely cultural? That's not where you started. Your initial question was:

But how significant is the 'traditional' male father role for a good upbringing of a child?

Replies from: None
comment by [deleted] · 2015-07-21T04:16:54.264Z · LW(p) · GW(p)

Yes, and how does it show that "it's not clear that the food provider role of males was actually widely present in prehistoric people"? The observation that women "work hard and move around" does not support the notion that they can feed themselves and their kids without any help from males.

At least it provides evidence that the upbringing of offspring could have worked without a father role. Here are a couple of other hints that may support my argument: among apes, the father is mostly unknown; the unique size and shape of the human penis among great apes is thought to have evolved to scoop out the sperm of competing males; the high variability of marriage types suggests that not much is predetermined in that regard; and the social brain hypothesis might suggest that our predecessors had to deal with a lot of affairs and intrigues.

I am not sure I understand. Are you saying that the only genetic imperative for males is to fuck anything that moves and that any constraints on that are solely cultural? That's not where you started.

Well, whatever the individual sexual attraction is, but yes. At least, I'm arguing that we can't reject that possibility.

Your initial question was: But how significant is the 'traditional' male father role for a good upbringing of a child?

That's part of the same complex: if it hasn't been significant, then there wouldn't even have been evolutionary pressure for caring fathers (assuming high-level stuff like that can be selected for at all).

comment by VoiceOfRa · 2015-07-21T07:31:23.459Z · LW(p) · GW(p)

Monogamy is a minority marriage type among human cultures

But not among individual humans, i.e., most men in polygynous cultures couldn't afford more than one wife.

comment by [deleted] · 2015-07-21T07:30:12.881Z · LW(p) · GW(p)

I think it can be. If the basic program of the selfish gene is "try to implant me in 100 wombs", then once he realizes that is not really likely, there can be a plan B: "have a son who will be so high in quality and status that he will implant me in 100 wombs".

Replies from: None
comment by [deleted] · 2015-07-22T21:24:26.295Z · LW(p) · GW(p)

But couldn't high quality and status be highly correlated with attractiveness, so that this trait prevents other traits from being selected for?

comment by [deleted] · 2015-07-20T15:04:45.191Z · LW(p) · GW(p)

PUA x EA?

Replies from: ChristianKl, ChristianKl, None, Elo
comment by ChristianKl · 2015-07-20T22:03:33.573Z · LW(p) · GW(p)

Associating EA publicly with pickup is likely not good for the EA brand.

I could, however, imagine a weekend EA event that's about doing shared street fundraising and, at the same time, comfort-zone expansion, packaged as a personal growth workshop.

Off the top of my head, I can think of one person, living in London, who I imagine has the skills to lead such a workshop, who would benefit from doing such a project, and who I know well. She's also female, which reduces possible risk to the EA brand.

comment by ChristianKl · 2015-07-20T21:22:35.914Z · LW(p) · GW(p)

I suffer from relationship imposter syndrome (it's probably avoidant personality disorder or something...)

The idea of imposter syndrome is that you actually have success and don't feel like you deserve it. Given that you recently wrote that you've never actually asked out a girl you had a crush on, is that true?

comment by [deleted] · 2015-07-20T21:14:11.348Z · LW(p) · GW(p)

I think the real question is -- can YOU? Spend a week on the street doing this, film it, edit it, then upload it to a bunch of PUA forums. If you get any traction, come back here with your results, and you'll likely get a better reaction than the flurry of downvotes I assume you'll get for mentioning pickup on its own.

comment by Elo · 2015-07-21T05:57:22.926Z · LW(p) · GW(p)

I know of PUA-knowledgeable people who work as charity fundraisers, since they see it as an opportunity to practice their skills while being paid.

comment by Username · 2015-07-21T12:04:59.735Z · LW(p) · GW(p)

LW protip: in order to get a lot of upvotes, find political threads and post comments implying that the US government and liberal academia are fabricating facts and spreading propaganda. Don't worry, you don't have to do any dirty work and figure out whether in that particular case it's correct or not; it doesn't matter. You might need to wait a few days, but eventually you'll receive a bunch of upvotes, all coming in a very short period of time.

Replies from: Dahlen, James_Miller, fubarobfusco
comment by Dahlen · 2015-07-21T13:04:25.167Z · LW(p) · GW(p)

Wasn't there a less passive-aggressive way of expressing this complaint, or a more appropriate context for it?

comment by James_Miller · 2015-07-21T21:38:03.167Z · LW(p) · GW(p)

Please link to a few examples of where this has been successful.

comment by fubarobfusco · 2015-07-21T15:54:18.386Z · LW(p) · GW(p)

I for one would appreciate it if the discussions of geopolitics, immigration policy, monetary policy, factional and sectional politics, adherence to various national leaders — and anything else about "Us vs. Them" where "Us" and "Them" are defined by which section of Spaceship Earth the parties happen to have spawned in — would kindly fuck off back to the comments sections of newspaper websites or some other appropriate forums.

The idea of LW as an explicitly Enlightenment project, one that actually contemplates ideas such as "the coherent extrapolated volition of humankind," "applying the discovery of biases to improve our thinking," and "refining the art of human rationality," is something rare and valuable.

Yet another politics comment section, another outrage amplifier, is not.

Replies from: Viliam
comment by Viliam · 2015-07-22T10:55:25.565Z · LW(p) · GW(p)

Rational debate is so hard to find these days, we have to protect it. I wouldn't be surprised to learn that the US government and liberal academia are fabricating facts and spreading propaganda, just to create conflicts among people who merely happened to be born in different cultures or subgroups. I will not post specific examples, to avoid mindkilling, but you probably know what I mean. We should not take part in this insanity.

(Am I doing this right?)