Comment by gunnar_zarncke on Spaghetti Towers · 2019-02-06T21:25:44.865Z · score: 2 (1 votes) · LW · GW

The general pattern is

Systems in general work poorly or not at all.


Which also has lots of examples but shouldn't be taken too seriously.

Comment by gunnar_zarncke on Meditations on Momentum · 2019-01-07T17:53:05.034Z · score: 2 (1 votes) · LW · GW

Well, can't disagree with such an abstract approach. Must be true somewhere.

But I do. The world must look like that if you run a fast strategy. From where I am, with a slow strategy in the upper middle of the range, it looks mostly flat, the ends are far away, and the strategy is mostly to keep it that way.

As usual Scott Alexander explains it much better:

Comment by gunnar_zarncke on Death in Groups · 2018-04-07T15:26:39.455Z · score: 3 (1 votes) · LW · GW

I tend to agree with this view. I think that is also one of the aspects implied (sic) by the implicit and explicit communication post: the value of maintaining a highly cohesive and committed team may be higher (for a military force) than the cost of the risked loss of life - because in a real war many more lives will be lost (at least I guess that is the military's reasoning).

Comment by gunnar_zarncke on Open Thread April 2018 · 2018-04-06T21:50:40.568Z · score: 7 (2 votes) · LW · GW

I don't think fortnightly will work. That's why I left that out. Adding a tags rule without tags makes no sense either.

Ask a lesswronger.

That's a bit difficult if there is no place to ask. I like the posts on LW 2.0 but I miss the open discussions.

Comment by gunnar_zarncke on Open Thread April 2018 · 2018-04-06T21:48:09.853Z · score: 8 (3 votes) · LW · GW

I think MIT’s new AlterEgo headset still falls into the category "Devices and Gadgets" of When does technological enhancement feel natural and acceptable? But it's still a pretty nice step forward.

The device, called AlterEgo, can transcribe words that wearers verbalise internally but do not say out loud, using electrodes attached to the skin.
“Our idea was: could we have a computing platform that’s more internal, that melds human and machine in some ways and that feels like an internal extension of our own cognition?”
Comment by gunnar_zarncke on Open Thread April 2018 · 2018-04-06T21:14:33.211Z · score: 3 (3 votes) · LW · GW

An interesting though somewhat bizarre prediction about the difficulty of building AI by Scott Adams in a recent Periscope session of his (paraphrased from memory):

"The perception that building human intelligence seems so difficult results from a perceptual distortion. Namely that human intelligence is something great when in fact we humans do not possess superior rationality. We only think we do. We just bounce around randomly and try to explain that as something awesome after the fact. Building artificial intelligence then is hard because we try to build something that doesn't exist. On the other hand building e.g. a robot that moves around arbitrarily based on some complex inner mechanism and generates explanations why it does so would be easy and appear very intelligent."

The thing is, this is a testable approach and prediction. I want to document it here partly because he claims that he has been saying this for some years now.

Open Thread April 2018

2018-04-06T21:02:38.311Z · score: 14 (4 votes)
Comment by gunnar_zarncke on About: LessWrong · 2018-04-06T20:56:43.949Z · score: 2 (2 votes) · LW · GW

Does something like Open Threads exist in LW 2.0? When I create one how would anybody get to know about it?

Comment by gunnar_zarncke on A LessWrong Crypto Autopsy · 2018-02-04T23:54:44.857Z · score: 8 (3 votes) · LW · GW
The only reason I persisted was because I was interested in the cryptography aspect and wanted to be a part of an up-and-coming technology.

And that is a reward I guess a very high fraction of the people actually 'investing' in Bitcoin got. Those hackers, nerds, and tech enthusiasts didn't need high fractions of likelihood times payoff.

And maybe the true lesson to draw from this is not to look at an abstract payoff but at the social dynamic: are there enough people attracted to something?

Comment by gunnar_zarncke on Singularity Mindset · 2018-01-21T20:53:04.119Z · score: 3 (1 votes) · LW · GW

I think this goes beyond math and is really a general pattern about learning by system 1 and system 2 interacting. It's just more clearly visible with math because it is necessarily more precise. I once described it here (before knowing about the system 1 and 2 terminology):

Comment by gunnar_zarncke on Making Exceptions to General Rules · 2018-01-18T21:36:34.271Z · score: 3 (1 votes) · LW · GW

That sounds very close to the meta-rule of only being allowed to change a rule into a more precise rule. So you have the rule of not eating cookies and come across a very special cookie. Making an exception opens the door to arbitrary exceptions. But what about changing the rule to allow only cookies that you have never eaten before? That is clearly a rule that allows this special cookie and also future special cookies, satisfying the culinary curiosity without noticeably impacting the calories.

Comment by gunnar_zarncke on Why did everything take so long? · 2017-12-30T23:16:05.592Z · score: 3 (1 votes) · LW · GW

Reality has a surprising amount of detail (there was a post about this explaining it with the example of constructing a simple wooden ladder which I can't find, but I bet there are a lot of comparable descriptions out there). Or take a candle. I guess you have used one recently. Looks pretty simple, right? Just use some wax and a wick. Turns out that people have used candles for ages. They were frequently used in Rome, for example. But the easy-to-use candles of our time are pretty recent. Recent as in last century. Before that

  • wicks didn't burn themselves away, so you had to trim them all the time
  • there was no good wax. Most candles were made of fat with lots of residue that stank and smoked. Beeswax was much better but harder to get

To fix these things you need much better raw materials and production processes...

See this article about candle history (German, but I guess Google translate is good enough).

And you can look at any kind of thing we take for granted and it is basically not possible to grasp all of it. The classical example is I, Pencil: My Family Tree as told to Leonard E. Read. Most things depend on the presence of a whole environment - and take part in bringing it about. You could see it as a co-evolution of lots of inventions. Something just hinted at in the comment about roads being needed for wheels (and actually you benefit from having wheels when building roads...).

I think this is one of the main overlooked points when talking about the possibility of space travel, esp. interstellar travel. Even if you assume AIs. But let's not. As mentioned in another comment, we don't really know what kind of coordination problems it comes with. Scaling isn't automatic. Just look at Moore's law. Sure, we continue to scale, but we pile technology on technology on technology to do so. And we can't just invent the last one. And neither can a future AI. You need the whole stack (OK, granted, you might be able to simplify, but still). And it will keep growing and might become inherently unmanageable. Remember: the price of chip factories also continues to grow and that might be the limiting factor. See e.g. McKinsey on Semiconductors 2013.

Comment by gunnar_zarncke on Against Love Languages · 2017-12-29T19:23:59.740Z · score: 3 (1 votes) · LW · GW

Added: I wonder whether this is a kind of niche need of our kind of folks. Or maybe it is the other way around and I am projecting, because I also had trouble understanding other people, especially people my age. On the other hand I could always relate well to older people (adults when I was young) and younger ones, esp. children.

Comment by gunnar_zarncke on Against Love Languages · 2017-12-29T19:21:07.707Z · score: 3 (1 votes) · LW · GW

Wow. That's almost exactly the same as what I offered in a discussion about the love languages. I also couldn't relate strongly to any of the 5. I see all of them as kind of having their place. But I was also missing point 6: making an effort to understand the other person. At least that is what I have been desperately missing for most of my life and which I got mostly from basically one (male) friend only. Except for recently, when a date turned out to be not a partner but the person able to give me the feeling of being understood and related to closely.

Comment by gunnar_zarncke on Happiness Is a Chore · 2017-12-20T21:31:44.111Z · score: 3 (1 votes) · LW · GW

This kind of illustrates the point, right? We only profess to strive for happiness, but actually don't care that much. Just kidding. I guess with proper estimates of likelihood of success vs. cost I probably also wouldn't make the trip. But what you could do is try meditation. A rationalist starter can be found e.g. here.

Comment by gunnar_zarncke on Happiness Is a Chore · 2017-12-20T15:39:31.096Z · score: 29 (8 votes) · LW · GW

TL;DR Extracted quotes of what I see as the key points:

[The] human activity [of] "pursuing happiness" [...] seems to be in the same category as other common activities such as "acquiring education", "helping people", "talking to friends" (or should I say "talking" to "friends") and so on. Which is to say, people do them in a way which is outwardly convincing enough to allow everyone to keep up the social pretenses. This is way different from what you'd see people do if they actually cared. The simple matter of fact is that the human brain is a kludge, [...]. Almost anything they claim to be doing isn't for real. This is true even when they themselves know about this. The best you can do is gradually nudge yourself in the right direction, gaining new footholds in consistency and consequentialism painstakingly and precariously.

[...] I have felt levels of happiness which are far above the upper limit of your mental scale. I know exactly how to be happy. And yet I find myself not consistently applying my own methods. Do you realize how impossibly mind-twisting this situation is? What happens in reality is that I enjoy and see great value in happiness when it happens, but when it doesn't I only work on it grudgingly. It's like with exercise, which is great but I'm rarely enthusiastic about starting it. The problem is not that I don't value happiness enough. The problem is rather that there is no gut-level motivational gradient to get actual happiness. There are gradients for all sorts of things which are crappy, fake substitutes. Once you know the taste of the real thing, they aren't fun at all. But you still end up optimizing for them, because that's what your brain does.

Comment by gunnar_zarncke on Happiness Is a Chore · 2017-12-20T15:36:07.359Z · score: 6 (3 votes) · LW · GW

From a former post of his I trust that he actually figured it out for real. It's mainly about meditation. I have made a comparable experience (actually with much less effort than even learning swimming, though not in a teachable way but basically by luck). And I can tell you it is true. I too can make myself happy by a small mental effort. But what for? All the complexity of human experience gets lost if you just switch on one part of it without the rest.

Comment by gunnar_zarncke on Moloch's Toolbox (1/2) · 2017-11-05T08:36:30.328Z · score: 2 (1 votes) · LW · GW

At least the first two chapter links are broken (currently). It says

Sorry, we couldn't find what you were looking for.
Comment by gunnar_zarncke on Intercellular competition and the inevitability of multicellular aging · 2017-11-05T08:25:34.380Z · score: 0 (0 votes) · LW · GW

My first thought was whether this might be applied to organisations (cells correspond to individuals and (multi-celled) organisms correspond to organisations). And what the differences are. Companies seem to change but not so much to age. My guess is that the assumption in the article doesn't hold there:

A central assumption of our work is that the distinction between c and v represents a natural categorization of cellular traits; that is, somatic mutations or other cellular degradation events tend to primarily affect only one of the two traits: cellular cooperation in the case of mutations to tumor suppressors and oncogenes and vigor in the case of basic cellular metabolism and other internal housekeeping functions.

And considering AI, I think it would be quite possible to engineer it such that the assumption wouldn't hold there either.

Comment by gunnar_zarncke on Intercellular competition and the inevitability of multicellular aging · 2017-11-04T12:33:51.140Z · score: 2 (2 votes) · LW · GW


We lay out the first general model of the interplay between intercellular competition, aging, and cancer. Our model shows that aging is a fundamental feature of multicellular life. Current understanding of the evolution of aging holds that aging is due to the weakness of selection to remove alleles that increase mortality only late in life. Our model, while fully compatible with current theory, makes a stronger statement: Multicellular organisms would age even if selection were perfect. These results inform how we think about the evolution of aging and the role of intercellular competition in senescence and cancer.

Full text:

Note I came across it via this link which is not really saying what they model:

Intercellular competition and the inevitability of multicellular aging

2017-11-04T12:32:54.879Z · score: 1 (1 votes)
Comment by gunnar_zarncke on Thinking Toys · 2017-10-21T17:15:04.679Z · score: 6 (2 votes) · LW · GW

Yeah, it's totally unintuitive that the titles are also links. I guess even with better formatting (underlines) it wouldn't be much better. I have some experience with our internal wiki, where this was an issue for many people too. Never make titles links.

Comment by Gunnar_Zarncke on [deleted post] 2017-10-20T22:29:33.152Z

I think there are a few other dimensions. Not sure whether you see this as a different category:
size of the community - bigger communities/teams/companies inherently need different organisational means and there seem to be non-linearities involved, i.e. there are certain optima of organisation (like a single person doing all the paperwork), and growing beyond what can be handled at that size requires leaving a local optimum. This seems to be one core insight of Growing Pains, which I'm currently reading and which is totally relevant (though focussed on businesses).

type of the community - what is the main type of purpose of the community?

  • mutual support

  • relaxed company

  • getting something done

  • advertising for a cause

I'm uncertain whether this makes sense or whether it should be along social/religious/economic lines.

Other relevant links:



Comment by gunnar_zarncke on Why no total winner? · 2017-10-17T21:50:46.766Z · score: 2 (0 votes) · LW · GW

I think that is mainly the point argued in more detail by jedharris. It would really be valuable to understand that mechanism better.

Comment by gunnar_zarncke on Polling Thread October 2017 · 2017-10-10T22:00:51.235Z · score: 0 (0 votes) · LW · GW

Currently (6 votes) it looks at first like Domainmodeling is leading. But depending on how lower ranks are weighted, it could also be Stackoverflow or (my favorite) "Modelling the programs operation".
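How much the weighting of lower ranks matters can be illustrated with a toy positional count. The ballots and option names below are made up for illustration, not the actual poll's data:

```python
# Toy positional-voting count showing how rank weighting can change the
# winner. Ballots and option names are made up, not the actual poll data.

def positional_score(ballots, weights):
    """Sum weights[position] over all ballots for each option."""
    scores = {}
    for ballot in ballots:
        for position, option in enumerate(ballot):
            scores[option] = scores.get(option, 0) + weights[position]
    return scores

ballots = [["A", "B", "C"]] * 3 + [["B", "C", "A"]] * 2 + [["C", "B", "A"]] * 2

plurality = positional_score(ballots, weights=[1, 0, 0])  # first ranks only
borda = positional_score(ballots, weights=[2, 1, 0])      # all ranks count

print(max(plurality, key=plurality.get))  # A wins on first places alone
print(max(borda, key=borda.get))          # B wins once lower ranks count
```

With the same seven ballots, counting only first places crowns A, while a Borda-style weighting crowns B - which is exactly why the leader of a graded poll can flip depending on the weighting chosen.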

Comment by gunnar_zarncke on Polling Thread October 2017 · 2017-10-10T21:56:31.069Z · score: 0 (0 votes) · LW · GW

I didn't mean this to be about what is 'required' but about how the environment overall is perceived to be. When I discussed this with my boys (who also have different environments - school, friends, at home) I left the specific environment open too. I talked more about how they see 'the world' around them.

Comment by gunnar_zarncke on Polling Thread October 2017 · 2017-10-10T21:53:35.286Z · score: 0 (0 votes) · LW · GW

I would average. After all, even in one environment there are very many samples, I guess, even if they cluster. But don't worry too much. It's just an LW poll :-)

Comment by gunnar_zarncke on Polling Thread October 2017 · 2017-10-08T21:14:45.957Z · score: 0 (0 votes) · LW · GW

Could you take some kind of average? Normally I try to provide some kind of "Other/See results" option, but it's difficult with these kind of graded polls.

Polling Thread October 2017

2017-10-07T21:32:00.810Z · score: 3 (3 votes)
Comment by gunnar_zarncke on Polling Thread October 2017 · 2017-10-07T21:25:04.061Z · score: 0 (0 votes) · LW · GW

In which Different World do you live?

The SSC article Different Worlds discussed how different people perceive the (same) world to be quite different places. Let's find out whether that is also the case for the limited LW population.

My prediction (based on the follow-up SSC post) is that the

This poll is based on a poll I conducted with my four boys (ages 6 to 13) after reading the SSC article. I found it quite surprising how differently even such a presumably homogeneous group perceives their environment.

This poll is structured into two parts:

1) The first part is about your environment; how you see the people in the world around you.
2) The second part asks the same questions about you; how you see yourself.

Please consider taking a break between both parts and cover your answers from the first part.

Part 1:

How much action do you perceive in your environment?


How mindful is your environment?


How smart are people in your environment on average?


How good are people in general?


How does your environment deal with minorities and human and behavioral variety?


How much are people together or do things together?


How are decisions in your environment typically made?


With how much force are things typically done in your environment? How careful are communications?


How are things organised in your environment?


How does your environment deal with risks?




Pause here




Part 2:

How active are you?


How mindful in your communication are you?


How smart are you?


How good are you?


How do you deal with minorities and human and behavioral variety?


How much do you prefer to do things with others?


How do you make decisions?


With how much force do you act and communicate?


How organised are you?


How do you deal with risks?



When I did my evaluation I considered counting each point as roughly 1/2 standard deviation from the mean. I'm pretty sure I didn't stick to it though.

Differences to the poll I did with my children:

  • That poll had a numeric scale from -5 to +5 which I decided was not suitable for the LW poll format.
  • That poll was done with all of them together, so they heard each others answers.
  • I skipped the intelligence and benevolence questions for self-rating, explaining that a) talking about one's intelligence is often problematic and b) everyone is the hero of their own story.

I didn't change the order or direction of the questions. I think the choice of questions leaves something to be improved - I came up with them on a train ride with the boys.

Comment by gunnar_zarncke on Polling Thread October 2017 · 2017-10-07T16:34:43.172Z · score: 1 (1 votes) · LW · GW

Discussion Thread

Discussions e.g. about alternatives to conduct polls go here. All other top-level comments should be polls or similar.

Comment by gunnar_zarncke on Instrumental Rationality 3: Interlude I · 2017-10-07T15:18:38.855Z · score: 4 (2 votes) · LW · GW

One other technique I use for countering Fading Novelty is to turn insights into Anki cards. Anki's spaced-repetition logic ensures that you will be reminded of an insight right around the time you are about to forget it; thus you either just refresh it or it feels new again. In the latter case the excitement also comes back and you get a chance for a new start.
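The "reminded right when you're about to forget" behavior comes from expanding review intervals. A minimal sketch of that idea (the constants - starting ease 2.5, lapse penalty 0.2 - are illustrative, not Anki's exact algorithm):

```python
# Minimal sketch of expanding-interval spaced repetition (SM-2 flavored).
# Constants are illustrative, not Anki's actual scheduler values.

def next_interval(interval_days, ease, remembered):
    """Return (interval, ease) for the next review of a card."""
    if not remembered:
        # Lapse: restart with a short interval and lower the ease a bit.
        return 1.0, max(1.3, ease - 0.2)
    # Success: expand the interval multiplicatively by the ease factor.
    return interval_days * ease, ease

# Three successful reviews space a card out quickly:
interval, ease = 1.0, 2.5
for _ in range(3):
    interval, ease = next_interval(interval, ease, remembered=True)
print(interval)  # 15.625 days (1 * 2.5 * 2.5 * 2.5)
```

The point for Fading Novelty is the multiplicative growth: after a handful of reviews the gaps are weeks or months long, which is exactly the regime where an insight can feel new again.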

Comment by gunnar_zarncke on Slack · 2017-10-04T21:23:14.374Z · score: 4 (2 votes) · LW · GW

I think we have to distinguish slack from freedom, or indeed from a total absence of constraints. When I was younger I was fond of saying that freedom is overrated, because all this striving for freedom comes at significant costs of its own. Deliberately limiting oneself can indeed create some slack. For example, I don't have a driver's license (initially for environmental reasons), which might look like a lack of freedom to go where I want. But I noticed that this doesn't take notable options away (I live in a big city with good public transportation). I and my environment adapt, and if I really need a car I can take a taxi with all the saved car costs. Maybe not the best example to illustrate this, but the best I currently have on offer :-)

Comment by gunnar_zarncke on Slack · 2017-10-04T21:16:40.717Z · score: 8 (5 votes) · LW · GW

>[a] buffer that's not being used productively is more like clutter than slack.

I like this specific observation and totally agree. Buffers can be slack, but there are definitely other ways. The key seems to be granting options - like with the pull-cord.

Comment by gunnar_zarncke on Slack · 2017-10-04T11:20:21.745Z · score: 6 (3 votes) · LW · GW

On management you write

Slack in project management is the time a task can be delayed without causing a delay to either subsequent tasks or project completion time. The amount of time before a constraint binds.

I think this is a nice short reference, but a lot lurks behind it, because slack in project or process management has a long history and a lot of theory behind it. I think slack in this context can be equated with buffer capacity, at least mostly. Buffers can be good or bad. Toyota saw buffers as bad and invented Just in Time to deal with the consequences. If we follow their insights it is possible to go without much slack and still reap most of the benefits. But does this translate to private life? Maybe a better, or at least more intermediate, trade-off is Drum Buffer Rope. It may depend on personal style and situation. I know a few people who plan their life heavily and reap efficiency gains. Are there other insights from production that we could compare?
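The quoted definition - the time a task can be delayed without delaying subsequent tasks or project completion - is what the critical path method computes with a forward and a backward pass over the task graph. A minimal sketch, with made-up task names and durations:

```python
# Slack (float) via the standard forward/backward pass over a task graph.
# Task names and durations are made-up examples.

def compute_slack(durations, deps):
    """durations: {task: duration}; deps: {task: [prerequisite tasks]}."""
    # Forward pass: earliest start of a task = max earliest finish of deps.
    earliest = {}
    def finish(task):
        if task not in earliest:
            earliest[task] = max(
                (finish(d) for d in deps.get(task, [])), default=0)
        return earliest[task] + durations[task]
    project_end = max(finish(t) for t in durations)

    # Backward pass in reverse topological order ('earliest' was filled
    # prerequisites-first, so reversing its key order is safe).
    latest_finish = {t: project_end for t in durations}
    for task in reversed(list(earliest)):
        for d in deps.get(task, []):
            latest_finish[d] = min(latest_finish[d],
                                   latest_finish[task] - durations[task])

    # Slack: how far a task can slip without moving project_end.
    return {t: latest_finish[t] - durations[t] - earliest[t]
            for t in durations}

durations = {"design": 3, "build": 5, "docs": 2, "ship": 1}
deps = {"build": ["design"], "docs": ["design"], "ship": ["build", "docs"]}
print(compute_slack(durations, deps))
# {'design': 0, 'build': 0, 'docs': 3, 'ship': 0} - only 'docs' has slack
```

Tasks with zero slack form the critical path; a buffer in front of them is protective, while a buffer elsewhere is the "clutter" kind of slack discussed above.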

Comment by gunnar_zarncke on [Slashdot] We're Not Living in a Computer Simulation, New Research Shows · 2017-10-03T11:12:06.926Z · score: 3 (3 votes) · LW · GW

I'd say it proves that we are not living in a simulation that

a) runs in a universe that has the same computational constraints as ours and b) simulates quantum effects faithfully at macroscopic levels

Comment by gunnar_zarncke on Feedback on LW 2.0 · 2017-10-03T10:18:49.976Z · score: 1 (1 votes) · LW · GW

It took me some time to notice that the up-down buttons are not for some kind of chapter back/forth navigation but for voting...

Comment by gunnar_zarncke on [Slashdot] We're Not Living in a Computer Simulation, New Research Shows · 2017-10-03T10:15:05.491Z · score: 3 (3 votes) · LW · GW

I disagree with the claim. I'm posting this link to foster discussion of what might be simulated and what might not.

I might agree with the technical claim - precisely simulating macroscopic results of quantum effects - if I were qualified, which I am not. But I don't think that is necessary. If scientists can come up with measurable macroscopic effects (like the cited one), then a sufficiently sophisticated simulation can come up with observations matching these expectations.

[Slashdot] We're Not Living in a Computer Simulation, New Research Shows

2017-10-03T10:10:07.587Z · score: 1 (1 votes)
Comment by gunnar_zarncke on Blind Goaltenders: Unproductive Disagreements · 2017-09-29T08:02:49.384Z · score: -2 (2 votes) · LW · GW

You write

If you're worried about an oncoming problem and discussing it with others to plan, your ideal interlocutor, generally, is someone who agrees with you about the danger.

and I'd like to add the disclaimer '...if you want to focus on the problem'. Which you might want to, as in your given main example of AI risk. It might not be the best way in general (and you explicitly say "general" there). It might not be the best way if the pro and con positions are more well-known or more equally distributed in the general population (or at least in that part of the population that is educated in such things).

Comment by gunnar_zarncke on The Virtue of Numbering ALL your Equations · 2017-09-29T07:40:12.854Z · score: 1 (1 votes) · LW · GW

And I thought it was about *all* equations, not just those in articles/papers...
But seriously, it reminds me of the practice of a) dating all your notes (which I do) and b) numbering all your notes (which I sadly don't). The latter is the much-lauded practice of Niklas Luhmann and his Zettelkasten. See Luhmanns Zettelkasten (English) and Wikipedia on Zettelkasten (German).

Comment by gunnar_zarncke on Beta - First Impressions · 2017-09-29T07:25:16.728Z · score: 1 (1 votes) · LW · GW

I have no idea where else to ask this, so I do it here: Where can I find an introduction to how the site works? Where can I ask questions about site mechanics? LW had the newbies thread, the open and stupid questions threads, and the wiki. While I recognize a lot of the parts here on LW 2.0, some seem different and I'd like to have a place for this. I would be willing to write a post for it, but I'm not sure whether I overlooked something and/or would just add to the confusion.

Comment by gunnar_zarncke on [question] Recommendations for fasting · 2017-07-15T22:42:09.575Z · score: 0 (0 votes) · LW · GW

This is a late follow-up. I didn't get around to fasting for quite some time, but did it for one week in January 2017. There is a write-up on FB. It was interesting but didn't change my behavior (or weight) noticeably. More interesting was the social dimension: the discussions around it led a skeptical friend to look into fasting regimes and try intermittent fasting. And he still does, and it has measurable positive effects and apparently practically no downsides.

Comment by gunnar_zarncke on Interpreting Deep Neural Networks using Cognitive Psychology (DeepMind) · 2017-07-10T21:10:46.585Z · score: 0 (0 votes) · LW · GW

Source: Kaj's feed.

Interpreting Deep Neural Networks using Cognitive Psychology (DeepMind)

2017-07-10T21:09:51.777Z · score: 1 (1 votes)

Using Machine Learning to Explore Neural Network Architecture (Google Research Blog)

2017-06-29T20:42:00.214Z · score: 0 (0 votes)

Does your machine mind? Ethics and potential bias in the law of algorithms

2017-06-28T22:08:26.279Z · score: 0 (0 votes)
Comment by gunnar_zarncke on Learning from Human Preferences - from OpenAI (including Christiano, Amodei & Legg) · 2017-06-18T21:16:00.443Z · score: 1 (1 votes) · LW · GW

I keep saying that AI may need a human 'caregiver', and I meant something like this post. While I'm not sure I explained it clearly enough, or whether that is really what it will amount to in the end, I believe we could have learned this approach by listening to social scientists (pedagogues in this case) more closely.

Comment by gunnar_zarncke on Cognitive Science/Psychology As a Neglected Approach to AI Safety · 2017-06-18T21:09:03.894Z · score: 1 (1 votes) · LW · GW

I would add didactics and pedagogy to that list. I think these subjects could provide a deeper understanding of the processes going on when "training an AI" - because in the end, didactics and pedagogy are about dealing with very complex and hard-to-understand AIs: growing humans.

While taking lessons from these disciplines might indeed not be the most abstract and straightforward way to go about AI control, it would on the other hand also not be very smart to disregard a sizable body of knowledge here.

Maybe not every AI scientist should look into these domains, but I believe those inclined could learn a lot.

Comment by gunnar_zarncke on Where do hypotheses come from? · 2017-06-11T09:37:41.775Z · score: 0 (0 votes) · LW · GW

Wow. Does the result mean that any algorithm generating hypotheses via sampling will have the same biases?

Comment by gunnar_zarncke on Open thread, Apr. 24 - Apr. 30, 2017 · 2017-05-09T07:14:25.432Z · score: 0 (0 votes) · LW · GW

The color processing system in the human brain is not that plastic. The higher levels, probably yes, but the lower levels: no. Sure, you can perceive and benefit from these filters, but it's not exactly the same as having edge processing or luminance and chrominance built into your hardware.

Comment by gunnar_zarncke on From data to decisions: Processing information, biases, and beliefs for improved management of natural resources and environments · 2017-05-08T21:49:00.493Z · score: 1 (1 votes) · LW · GW


Our different kinds of minds and types of thinking affect the ways we decide, take action, and cooperate (or not). Derived from these types of minds, innate biases, beliefs, heuristics, and values (BBHV) influence behaviors, often beneficially, when individuals or small groups face immediate, local, acute situations that they and their ancestors faced repeatedly in the past. BBHV, though, need to be recognized and possibly countered or used when facing new, complex issues or situations especially if they need to be managed for the benefit of a wider community, for the longer-term and the larger-scale. Taking BBHV into account, we explain and provide a cyclic science-infused adaptive framework for (1) gaining knowledge of complex systems and (2) improving their management. We explore how this process and framework could improve the governance of science and policy for different types of systems and issues, providing examples in the area of natural resources, hazards, and the environment. Lastly, we suggest that an “Open Traceable Accountable Policy” initiative that followed our suggested adaptive framework could beneficially complement recent Open Data/Model science initiatives.

The part that I liked best:

Table 1. Some Problematic or Incorrect Assumptions Resulting from Human Biases, Beliefs, Heuristics, and Values (BBHV) The above assumptions, based on the perspectives of the authors, can influence the pursuit and conduct of science (including the production of models) and the implementation of science into policy.

  • The past does not inform the future, and earth systems science has no relevance to resource allocations or to the emplacement of human infrastructure. The records of tree rings, paleoflood deposits, tsunami deposits, historical accounts of infrequent natural hazards (earthquakes, hurricanes, floods, debris flows, volcanic eruptions, fire) can all be ignored.
  • Discount the future and address only immediate, local, human needs (or threats): human ingenuity will always rise to meet the needs of future generations. (And population growth is not a problem since it increases the gene pool and produces more talent that will solve all problems.)
  • Ecosystems (or parts thereof) do not have lagged responses: policy or management actions have only immediate effects.
  • Believe what you see, ignore what you do not: the invisible (e.g., groundwater, microbes) can be ignored when constructing models. (Groundwater and surface water are not connected and do not impact each other (or water quality). And only highly visible contaminants, or immediate acute health threats, matter: invisible contaminants, or slow-acting threats, do not.)
  • Pests, parasites, and predators have no useful functions, are always evil, and should be eliminated. Pets and charismatic biota are the only species, apart from humans (and food-providing biota), worth worrying about.
  • Because this is what people care about, the study of biotic species that are charismatic (or serve as a food or recreational resource) can be assumed to inform the assessment of environmental conditions in a region.
  • Nature provides only benefits to humans (whomever, wherever, and whenever those people may be); the disservices of nature do not need to be included in the design, quantification, and valuation of ecosystem services. (And the services do not include provisioning of mineral and energy resources.)
  • No modern day actions have irreversible consequences: resources are infinite, species can be resurrected, and past environmental conditions can be restored.
  • Humans are separate from nature, and their enterprise is superior to nature's. (Build the infrastructure, construct the levees, drain the wetlands.)

From data to decisions: Processing information, biases, and beliefs for improved management of natural resources and environments

2017-05-08T21:47:35.097Z · score: 1 (2 votes)
Comment by gunnar_zarncke on Biases and Fallacies Game Cards · 2017-04-27T21:37:22.176Z · score: 0 (0 votes) · LW · GW

Much better game posted later on:

Comment by gunnar_zarncke on Open thread, Apr. 24 - Apr. 30, 2017 · 2017-04-27T21:25:17.748Z · score: 0 (0 votes) · LW · GW

Does anybody here know about ? Seems they are building

Simple games that can reduce stress levels and boost your confidence and self-esteem!

Anybody tried this or know whether this is legit?

Comment by gunnar_zarncke on I Want To Live In A Baugruppe · 2017-03-23T20:48:06.027Z · score: 4 (4 votes) · LW · GW

Another data point: My smallish community (7.5 couples plus some singles) managed to continue a once-a-month get-together on some Friday evenings despite children getting born and growing up. I think the key to this is that it's okay for parents to bring their children and let them stay awake longer than normal (like 10 pm), or to let the children fall asleep on a lap or couch while the conversation continues.

One key benefit of these get-togethers (and this is kind of a general rule) is that the more parents and children are present, the less each parent has to look after the children, because the children mostly entertain themselves and a single parent usually suffices to fix things.

Comment by gunnar_zarncke on I Want To Live In A Baugruppe · 2017-03-23T20:35:26.430Z · score: 3 (3 votes) · LW · GW

Interested. I live in Hamburg, Germany, and am trying to buy/rent more houses in my municipality for purposes like this, at least over a longer timeframe. I used to have a page here where I advertised community space but was never approached. I guess it's partly a coordination problem.

Comment by gunnar_zarncke on Introduction to Local Interpretable Model-Agnostic Explanations (LIME) · 2017-02-09T08:30:16.970Z · score: 0 (0 votes) · LW · GW

Another (earlier) blog post about it:

Introduction to Local Interpretable Model-Agnostic Explanations (LIME)

2017-02-09T08:29:40.668Z · score: 4 (5 votes)

Interview with Nassim Taleb 'Trump makes sense to a grocery store owner'

2017-02-08T21:52:21.606Z · score: 1 (2 votes)

Slate Star Codex Notes on the Asilomar Conference on Beneficial AI

2017-02-07T12:14:46.189Z · score: 13 (14 votes)

Polling Thread January 2017

2017-01-22T23:26:15.964Z · score: 1 (2 votes)

Could a Neuroscientist Understand a Microprocessor?

2017-01-20T12:40:04.553Z · score: 10 (10 votes)

Scott Adams mentions Prediction Markets and explains Cognitive Blindness bias

2016-12-20T21:23:33.468Z · score: 3 (4 votes)

Take the Rationality Test to determine your rational thinking style

2016-12-09T23:10:00.251Z · score: 1 (2 votes)

OpenAI releases Universe an interface between AI agents and the real world

2016-12-07T22:04:32.139Z · score: 2 (3 votes)

Slashdot: Study Finds Little Lies Lead To Bigger Ones

2016-10-26T06:53:29.557Z · score: 0 (1 votes)

Scientists Create AI Program That Can Predict Human Rights Trials With 79 Percent Accuracy

2016-10-26T06:47:49.124Z · score: 0 (1 votes)

US tech giants found Partnership on AI to Benefit People and Society to ensure AI is developed safely and ethically

2016-09-29T20:39:48.969Z · score: 4 (5 votes)

Open Thread May 23 - May 29, 2016

2016-05-22T21:11:56.868Z · score: 4 (5 votes)

Open Thread March 21 - March 27, 2016

2016-03-20T19:54:49.073Z · score: 3 (4 votes)

[Link] Using Stories to Teach Human Values to Artificial Agents

2016-02-21T20:07:47.994Z · score: 1 (2 votes)

Polling Thread January 2016

2016-01-03T17:43:17.911Z · score: 4 (7 votes)

Life Advice Repository

2015-10-18T12:08:04.730Z · score: 9 (10 votes)

Crazy Ideas Thread - October 2015

2015-10-06T22:38:06.480Z · score: 7 (8 votes)

Polling Thread - Tutorial

2015-10-01T21:47:38.805Z · score: 5 (6 votes)

Meetup : LessWrong Hamburg 2015 Q4

2015-09-22T20:54:35.430Z · score: 1 (2 votes)

Meetup : LessWrong Hamburg

2015-09-22T20:43:30.553Z · score: 1 (2 votes)

[Link] Marek Rosa: Announcing GoodAI

2015-09-14T21:48:15.364Z · score: 6 (7 votes)

Group rationality diary for July 12th - August 1st 2015

2015-07-26T23:31:05.196Z · score: 6 (7 votes)

LessWrong Hamburg Meetup July 2015 Summary

2015-07-18T23:13:20.023Z · score: 7 (8 votes)

List of Fully General Counterarguments

2015-07-18T21:49:41.608Z · score: 9 (10 votes)

Biases and Fallacies Game Cards

2015-07-15T08:19:35.453Z · score: 7 (8 votes)

Crazy Ideas Thread

2015-07-07T21:40:48.931Z · score: 22 (23 votes)

Meetup : LessWrong Hamburg

2015-06-25T23:40:50.966Z · score: 3 (4 votes)

[Link] Self-Representation in Girard’s System U

2015-06-18T23:22:21.142Z · score: 2 (9 votes)

[Link] Robots Program People

2015-06-15T08:42:11.732Z · score: 2 (3 votes)

European Community Weekend 2015 Impressions Thread

2015-06-14T20:21:05.673Z · score: 19 (20 votes)

Summary of my Participation in the Good Judgment Project

2015-06-03T21:51:07.821Z · score: 7 (8 votes)

[Link] Throwback Thursday: Are asteroids dangerous?

2015-05-23T08:00:24.415Z · score: 1 (2 votes)

[link] Bayesian inference with probabilistic population codes

2015-05-13T21:11:05.519Z · score: 10 (10 votes)

[Link] Death with Dignity by Scott Adams

2015-05-12T21:34:49.246Z · score: 5 (5 votes)

When does technological enhancement feel natural and acceptable?

2015-05-01T21:11:11.164Z · score: 8 (5 votes)

[link] The surprising downsides of being clever

2015-04-18T20:33:12.086Z · score: 1 (6 votes)

Effective Sustainability - results from a meetup discussion

2015-03-29T22:15:10.978Z · score: 9 (12 votes)

Meetup : LessWrong-like Meetup Hamburg

2015-03-22T20:53:40.977Z · score: 1 (2 votes)

Summary and Lessons from "On Combat"

2015-03-22T01:48:56.630Z · score: 17 (20 votes)

Bragging Thread February 2015

2015-02-08T15:19:25.491Z · score: 6 (7 votes)

[link] Speed is the New Intelligence

2015-01-28T11:11:56.860Z · score: 11 (14 votes)

Suggestions for 31C3 (Chaos Communication Congress)

2014-12-27T10:08:11.620Z · score: 5 (6 votes)