Open thread, Jul. 04 - Jul. 10, 2016

post by MrMind · 2016-07-04T07:02:11.193Z · LW · GW · Legacy · 84 comments

If it's worth saying, but not worth its own post (even in Discussion), then it goes here.


Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)

3. Open Threads should be posted in Discussion, and not Main.

4. Open Threads should start on Monday, and end on Sunday.

84 comments

Comments sorted by top scores.

comment by JohnGreer · 2016-07-06T01:26:22.256Z · LW(p) · GW(p)

I periodically do things to get out of my comfort zone. I started years ago, before a friend introduced me to LW, where I was pleased to discover that CoZE (comfort zone expansion) was a recommended exercise.

This write-up is about my most recent exercise: Do a Non Gender-Conforming Thing

I chose to have my nails painted. Having painted nails requires little enough effort that I have no excuse not to, and wearing them out in public is just out-of-the-ordinary enough to make me worry about how people will react. After getting them painted, I realized why girls say "My nails!" a lot after a manicure and worry about screwing them up. It took work to paint them, and chipping them makes them look like shit. Can’t let that happen to me!

Then I challenged some friends to do it and gave these suggestions:

I think breaking arbitrary societal conventions and expanding comfort zones are positive things so I'm challenging a few people to try it and post a picture or video. Bonus points for a write-up of how you felt while doing it and any reactions from observers.

(Those who live in Berkeley are playing on easy mode.)

(People challenged may totally already do these! The list was limited to my imagination and ideas I could find. The idea is to get out of your comfort zone so feel free to get creative...)

Exercises I came up with:

Ideas for men:

  • Get a manicure/pedicure (it's basically a massage)

  • Wear (traditionally feminine) jewelry

  • Carry a purse

  • Play a “girly” pop song loud enough for others to hear

  • Order a fruity alcoholic beverage

  • Get your nails painted

  • Wear a feminine outfit (or at least a pink shirt or something)

  • Read/ask about fashion or some other traditionally feminine topic

Ideas for women:

  • Wear a masculine outfit. (I feel like women have to try a bit harder than guys here)

  • Don’t shave your legs for a week

  • Don't shave your armpits for a week

  • Wear a tie

  • Give a guy a compliment

  • Ask a guy on a date

  • Don't wear makeup for a week

  • Don't wear a bra for a week

  • Read/ask about sports or some other traditionally masculine topic

My thoughts so far: It’s still weird for me to see my own hands. It takes me a second to recognize them as my own. “And how pretty they are!”

I'm already hypervigilant in public, but we were out in a new area and I was even more hypervigilant than normal. I had to fight the urge to keep hiding my fingernails in the grocery store. I was worried that our hosts at the Airbnb we're staying at would be weird about it...

Now I'm caught between not wanting people to see my nails at all and not wanting to see them all chipped (it's hard taking proper care of them!). I'm conscious of my dad seeing this. I do weird enough things that my model of how people in my tribe will react is "John doing another thing..."

I need to get rid of them before we visit our friend’s parents, so that I don’t make a weird first impression. A lot of the discomfort has more to do with being misperceived or miscategorized. For instance, one time after getting my haircut, my shirt was covered with hair, so my friend lent me her Pink Floyd T-shirt to wear. I wasn’t defying social norms by wearing a Pink Floyd shirt, but it was not the kind of thing I would usually wear, so I felt extra-aware of the potential for being perceived a certain way based on how I was dressed. Likewise, when I smoke a clove cigarette or cigar, which I do about once every six months with a certain friend, I would be horrified to be falsely labeled a regular smoker.

I'll have to try this again when I'm in public more frequently to give it a fair shake.

Meta-Communication: I'm also getting out of my comfort zone because I'm not sure this is the right place for this type of post or if these kinds of posts are welcome.

Cross-posted and edited from my Facebook. Feel free to follow me there!

Replies from: Jiro, baaaaaal
comment by Jiro · 2016-07-06T04:06:56.297Z · LW(p) · GW(p)

This write-up is about my most recent exercise: Do a Non Gender-Conforming Thing

Don't spend your idiosyncrasy credits frivolously.

Replies from: ChristianKl, jsteinhardt, MrMind, JohnGreer
comment by ChristianKl · 2016-07-06T09:57:41.131Z · LW(p) · GW(p)

That depends on how strongly someone is limited by his perceived gender identity and the cost of engaging in the experiments.

comment by jsteinhardt · 2016-07-06T06:58:50.094Z · LW(p) · GW(p)

I don't really think this is spending idiosyncrasy credits... but maybe we hang out in different social circles.

Replies from: JohnGreer
comment by JohnGreer · 2016-07-06T11:30:25.043Z · LW(p) · GW(p)

Yes, this doesn't really apply to my social circle.

comment by MrMind · 2016-07-06T06:54:25.225Z · LW(p) · GW(p)

A million times this!

Replies from: JohnGreer
comment by JohnGreer · 2016-07-06T11:28:36.184Z · LW(p) · GW(p)

Yes, this doesn't really apply to my social circle.

comment by JohnGreer · 2016-07-06T11:27:25.710Z · LW(p) · GW(p)

Yes, this doesn't really apply to my circle.

comment by baaaaaal · 2016-07-11T21:10:50.322Z · LW(p) · GW(p)

What I find interesting are people who "break out" of their gender-roles only to fall into conforming strictly to whatever the new one is: boys wearing skinny-jeans and deep v-necks and girls wearing their grandfather's shoes (or ones they bought at a thrift store) and carrying a briefcase. In a sense, a man wearing women's clothing isn't that much different than him dressing like a goth or a punk. Gender is just the last of the great wearable ideologies to have been opened up to being monkeyed with. But we've now reached the point where we seem to have already entered a post-gendered and weirdly more ideologically driven world of cultural symbolism in which it is important to be seen to be breaking gender conventions (transsexualism, metro-sexuality, men's make-up and skincare, and the skinniest skinny-jeans you have ever seeny-seen). So, in a way, the more radical act has become to rationally accept the chains by which you are fettered and break out of your comfort zone by staying exactly where you are.

Replies from: None
comment by [deleted] · 2016-07-12T16:42:17.643Z · LW(p) · GW(p)

Or, you know, find other gender-related things to not accept. When I was looking for a copper rod (to cut into some pieces for electron microscopy), the sellers looked at me like I was weird or something. Later, the guy who cut it for me didn't want my money on account of me being female (but I threw it at him and escaped). And before that, the lady in the pawnshop where I tried to pawn it to get some urgently needed money to commute to work was completely thrown (I guess they didn't want copper much). All of these occasions were rather outside my comfort zone, but I do not see why I should not have done it.

Replies from: Elo, Jiro
comment by Elo · 2016-07-13T01:04:31.871Z · LW(p) · GW(p)

Later, the guy who cut it for me didn't want my money

Specifically on this example: I would suggest that if you were only getting a few really short cuts, it's almost not worth the effort to charge.

For all of five minutes of work, factors like accounting, working out a price, finding change, and anything else involved in the transaction make it not worth charging. I have had similar experiences getting pieces of wood and glass cut on the fly, and people are generous enough not to charge. Was the person explicit about your gender? (Even if they were, they could just as well have been explicit about another person's "great hat" or "young lad"; any excuse to do someone a favour is possible.)

Replies from: None
comment by [deleted] · 2016-07-13T02:20:28.973Z · LW(p) · GW(p)

No, it actually took more than half an hour and about 90 cuts, and they said copper was far more viscous than the usual stuff they dealt with; and he said "I don't charge women."

Although yes, I believe he was being generous, and we did laugh when I was running away. It was just that after being invited to coffee at the market, I would rather we laughed over something else.

comment by Jiro · 2016-07-12T18:57:52.221Z · LW(p) · GW(p)

Presumably you were buying a copper rod because you needed a copper rod, and you had no choice but to be gender-nonconformant if you wanted one. It's not as if you had the option to pick between gender-conformant and non-gender-conformant scientific equipment and deliberately picked the nonconformant one.

Also, there's a difference between being considered unusual and being considered socially weird. Last time I ran into someone riding a horse on a city street I'm pretty sure I stared at him for a while--but that was because you don't see many of those, not because I thought that someone who rode a horse in the 21st century was violating a taboo.

Replies from: None
comment by [deleted] · 2016-07-12T20:15:19.126Z · LW(p) · GW(p)

Ah, but the gender-conformant thing in the Department where I was a student would be to have a man buy a copper rod. Which seemed to be the understanding of all those people, one of whom offered to buy me coffee. But he was drunk, so there's that.

Generally, yes, I think it best to just disregard gender-conformity, but in a non-obvious way. (For example, many women have backpacks, and many women do think handbags more feminine, and I have been advised to get a handbag, but nobody would really go to the trouble of making me do it. I had thought that this small task would be just as neutral.)

comment by GraceFu · 2016-07-04T09:08:20.926Z · LW(p) · GW(p)

Is there a version of the Sequences geared towards Instrumental Rationality? I can find (really) small pieces such as the 5 Second Level LW post and intelligence.org's Rationality Checklist, but can't find any overarching course or detailed guide to actually improving instrumental rationality.

Replies from: Elo, John_Maxwell_IV, John_Maxwell_IV
comment by Elo · 2016-07-04T09:16:23.500Z · LW(p) · GW(p)

There is some on http://www.clearerthinking.org/

When I write, I try to hit instrumental rather than epistemic rationality (see here: http://lesswrong.com/r/discussion/lw/mp2/), and I believe there is a need for writing along the lines of instrumental guides. (Also see the boring advice repository: http://lesswrong.com/lw/gx5/boring_advice_repository/)

As far as I know there has been no effort to generate a sequence on the topic.

Is there a specific area you would like to see an instrumental guide in? Maybe we can use the community to help find/make one on the specific topic you are after (for now).

comment by Lumifer · 2016-07-14T19:54:02.798Z · LW(p) · GW(p)

Congrats to gwern for making it into the Economist!

Replies from: Good_Burning_Plastic
comment by Good_Burning_Plastic · 2016-07-15T18:02:21.658Z · LW(p) · GW(p)

Gotta love how cigarettes are listed as non-drugs.

comment by morganism · 2016-07-07T21:00:39.048Z · LW(p) · GW(p)

The Tesla auto-driver accident was truly an accident. I didn't realize it was a semi crossing the divider and two lanes to hit him.

https://www.teslamotors.com/blog/misfortune

Replies from: Douglas_Knight, knb
comment by Douglas_Knight · 2016-07-10T20:11:25.059Z · LW(p) · GW(p)

Here (copy) is a diagram.

Tesla's algorithm is supposed to be autonomous for freeways, not for highways with intersections, like this. The algorithm doing what it was supposed to do would not have prevented a crash. But the algorithm was supposed to eventually apply the brakes. Its failure to do so was a real failure of the algorithm. The driver also erred in failing to brake, probably because he was inappropriately relying on the algorithm. Maybe this was a difficult situation and he could not be expected to prevent a crash, but his failure to brake at all is a bad sign.

It was obvious when Tesla first released this that people were using it inappropriately. I think that they have released updates to encourage better use, but I don't know how successful they were.

Replies from: knb
comment by knb · 2016-07-10T21:05:55.708Z · LW(p) · GW(p)

The driver also erred in failing to brake, probably because he was inappropriately relying on the algorithm.

Yep, according to the truck driver, the Model S driver was watching Harry Potter, and it was still playing even after the car came to a stop. He probably had his eyes completely off the road.

comment by knb · 2016-07-09T15:13:01.830Z · LW(p) · GW(p)

The truck pulled in front of the Model S. The Model S had enough time to brake and stop, but didn't recognize the truck against the brightly lit sky.

Replies from: Dagon
comment by Dagon · 2016-07-10T18:00:21.229Z · LW(p) · GW(p)

What is unclear is whether the driver would likely have seen it in time if the car had had no autonomous mode. Humans, when paying attention, even for long periods of time, are still way better at recognizing objects than computers.

My expectation is that this is exactly the problematic case for statistical vs. concrete risk analysis. The automated system as it is today is generally safer than humans, as it's more predictable and reliable. However, there are individual situations where even a less statistically safe system (like a human forced to pay attention by having only limited automation) can avoid an accident that the automated system can't.

comment by morganism · 2016-07-07T20:56:37.578Z · LW(p) · GW(p)

A list of mental models for problem solving:

There is a much smaller set of concepts, however, that come up repeatedly in day-to-day decision making, problem solving, and truth seeking. As Munger says, “80 or 90 important models will carry about 90% of the freight in making you a worldly‑wise person.”

https://medium.com/@yegg/mental-models-i-find-repeatedly-useful-936f1cc405d#.nxlqujo6k

Replies from: MrMind
comment by MrMind · 2016-07-08T08:50:58.651Z · LW(p) · GW(p)

Extremely interesting, thanks!

New to me were: Simpson's paradox, Third story, BATNAs, tyranny of small decisions, 1-1's, Forcing function, organizational debt, Bullseye framework.

comment by turchin · 2016-07-04T20:40:04.821Z · LW(p) · GW(p)

Meta-biases

Some cognitive biases don’t allow a person to see and cure his other biases. This results in an accumulation of biases and a strongly distorted picture of the world. I tried to draw up a list of the main meta-biases.

  1. First and most important of them is overconfidence. Generalized overconfidence is also known as a feeling of self-importance. It prevents a person from searching for and identifying his own biases: he feels he is already perfect. It is also called arrogance.
  1. Stupidity. It is not a bias, but a (very general) property of the mind. It may include many psychiatric disorders, from dementia to depression.

  2. Dogmatism: an unchangeable group of beliefs, often connected with belief in a certain text or author.

  3. Lack of reflectivity: inability to think about one's own thinking.

  4. Projection of responsibility: if one is used to thinking that others are the source of his problems, he is unable to see his own mistakes and make changes.

  5. Lack of knowledge of logic, statistics, brain science, the scientific method, etc.

  6. Psychopathic character traits. These often combine many of the above-mentioned properties.

  7. Learned helplessness. In this case a person may not believe that he is able to “debias” himself.

  8. Hyperoptimistic bias. If you want something very much, you will ignore all warnings.

  9. Lack of motivation for self-improvement.

  10. Obstinacy. A person may want to signal his high status by ignoring good advice and even facts, and try to demonstrate that he is “strong” in his beliefs.

LessWrong had a discussion on meta-biases in the comments to this post: http://lesswrong.com/lw/d1u/the_new_yorker_article_on_cognitive_biases/

Any other suggestions?

Replies from: buybuydandavis, Gunnar_Zarncke, ChristianKl
comment by buybuydandavis · 2016-07-05T22:38:55.585Z · LW(p) · GW(p)

Some cognitive biases don’t allow a person to see and cure his other biases.

A concept that I liked from Critical Rationalism was immunization strategies - ideological commitments and stratagems that make a theory unfalsifiable.

Look into those. I assume people must have lists of these things somewhere.

comment by Gunnar_Zarncke · 2016-07-04T20:59:14.309Z · LW(p) · GW(p)

You went to some lengths to make the list comprehensive. I have only a few comments:

_5. Lack of knowledge of logic, statistics, brain science, the scientific method, etc.

I don't think this strictly meets your requirement that it doesn't "allow a person to see and cure his other biases". Sure, science helps, but I think that many laymen already profit from the bias literature, or otherwise from self-help books - some of which, without picking up on biases, wouldn't work. Or don't they? Maybe the point really is that the required preliminaries are so many, or organized in such a way, that you can't get there from here.

_9. Lack of motivation

This is relatively broad. It can be seen to include 4, 5 and 10.

I also tried to come up with other options which was hard. Here is my try:

_11. Greed (or other highly focusing motivations, like existential needs). Takes energy away from self-improvement. This appears to be a special case of your 9, which I see as too general.

_12. Social pressure. Thinking about one's own fallacies may not be socially acceptable in the peer group. This is different from 2 - Dogmatism.

Maybe this list could be structured a bit into psychological, cognitive, and belief-structure items.

Minor nitpick: The formatting of the numbered list seems to be broken.

Replies from: turchin
comment by turchin · 2016-07-04T21:04:02.982Z · LW(p) · GW(p)

Thanks for your suggestions and nitpick.

comment by ChristianKl · 2016-07-05T14:45:10.294Z · LW(p) · GW(p)

I don't think it's useful to mentally categorize "Lack of motivation for self-improvement" as a bias. Not everything is a bias.

Replies from: turchin
comment by turchin · 2016-07-05T14:51:37.126Z · LW(p) · GW(p)

That is technically true. But it is also one of the strongest obstacles: if one has motivation, he can overcome his other meta-biases; if he doesn't, nothing will work.

Replies from: ChristianKl
comment by ChristianKl · 2016-07-05T16:00:00.489Z · LW(p) · GW(p)

In general, the literature on cognitive biases suggests that most real cognitive biases, like the hindsight bias, can't simply be overcome by motivation.

By simply calling everything a cognitive bias, it's easy to create the impression that a cognitive bias is simply an error in reasoning like any other error in reasoning.

comment by Dagon · 2016-07-04T15:56:32.232Z · LW(p) · GW(p)

There's a fair bit on decision theory and on Bayesian thinking, both of which are instrumental rationality. There's not much on heuristics or how to deal with limited capacity. Perhaps intentionally - it's hard to be rigorous on those topics.

Also, I think there's an (unstated, and that should be fixed and the topic debated) belief that instrumental rationality without epistemic rationality is either useless or harmful. Certainly that's the FAI argument, and there's no reason to believe it wouldn't apply to humans. As such, a focus on epistemic rationality first is the correct approach.

That is, don't try to improve your ability to meet goals unless you're very confident in those goals.

Replies from: buybuydandavis, Gurkenglas
comment by buybuydandavis · 2016-07-05T22:46:53.102Z · LW(p) · GW(p)

Also, I think there's an (unstated, and that should be fixed and the topic debated) belief that instrumental rationality without epistemic rationality is either useless or harmful.

I think some people agree with that, but I consider it backwards.

I'll take winning over accurately predicting. Winning is the desired end; accurate prediction is a means, and not the only one.

Replies from: Dagon
comment by Dagon · 2016-07-06T14:04:42.297Z · LW(p) · GW(p)

Umm, that's what I'm trying to say. If you don't know what "winning" is, you don't know whether your accurate predictions help you win or not.

Replies from: buybuydandavis
comment by buybuydandavis · 2016-07-07T09:31:30.672Z · LW(p) · GW(p)

Were you? I'm not seeing how what you're saying aligns with what I said.

On a perhaps related issue, you don't need to know what winning is, to win.

Competence without comprehension, a la Dennett.

Replies from: Dagon
comment by Dagon · 2016-07-07T17:19:54.655Z · LW(p) · GW(p)

you don't need to know what winning is, to win.

Sure, but that's luck, not rationality.

comment by Gurkenglas · 2016-07-04T22:03:36.526Z · LW(p) · GW(p)

Why not? If you haven't yet decided what your goals are, being able to meet many goals is useful.

The AGI argument is that its goals might not be aligned with ours; are you saying that we should make sure that our future self's goals are aligned with our current goals?

For example, if I know I am prone to hyperbolic discounting, I should take power from my future self so it will act according to my wishes rather than its own?

Replies from: Dagon
comment by Dagon · 2016-07-05T13:52:20.914Z · LW(p) · GW(p)

Being able to meet many goals is useful. Actually meeting wrong goals is not.

Your hyperbolic discounting example is instructive, as without a model of your goals, you cannot know whether your current or future self is correct. Most people come to the opposite conclusion - a hyperbolic discount massively overweights the short-term in a way that causes regret.

Replies from: Gurkenglas
comment by Gurkenglas · 2016-07-05T18:22:05.077Z · LW(p) · GW(p)

a hyperbolic discount massively overweights the short-term in a way that causes regret.

I meant that - when planning for the future, I want my future self to care about each absolute point in time as much as my current self does, or barring that, to only be able to act as if it did, hence the removal of power.

The correct goal is my current goal, obviously. After all, it's my goal. My future self may disagree, preferring its own current goal. Correct is a two-place word.

If I let my current goal be decided by my future self, but I don't know yet what it will decide, then I should accommodate as many of its possible choices as possible.

comment by morganism · 2016-07-04T22:11:20.561Z · LW(p) · GW(p)

Data science techniques, and some suggested reading in the footnotes.

http://www.datasciencecentral.com/profiles/blogs/40-techniques-used-by-data-scientists

and a link from the DCC site, for learning Python for data science

http://www.analyticsvidhya.com/blog/2016/01/complete-tutorial-learn-data-science-python-scratch-2/

comment by [deleted] · 2016-07-09T18:41:11.949Z · LW(p) · GW(p)

Springer offers me ebooks for $9.99 (deadline - August 1). Here's the list. If anybody's interested, I can buy and send you something.

Replies from: None, None, None, None
comment by [deleted] · 2016-07-10T20:00:18.409Z · LW(p) · GW(p)

Trigonometry by Gelfand & Saul. I mention it here because in school, this was my least favourite part of mathematics, and at least this book seems to show it to advantage:)

comment by [deleted] · 2016-07-10T19:28:42.786Z · LW(p) · GW(p)

Also, Biogeochemical Approaches to Paleodietary Analysis - published in 2002; probably of no immediate usefulness (looking at the table of contents). Also, if you want to base some conclusions on the isotope studies, bear in mind the growing body of evidence that at least C13 and C14's distribution in plants' organs is not really random; who knows how it is for nitrogen, etc.

comment by [deleted] · 2016-07-10T19:00:47.920Z · LW(p) · GW(p)

Also, Misclassification of Smoking Habits and Passive Smoking A Review of the Evidence published in 1988. Description:

How accurate are statements about smoking habits? This book presents the results of a comprehensive review in which the literature on the subject is newly interpreted. It is shown that smokers are misclassified as non-smokers in epidemiological studies often enough to explain the increased lung cancer risk seen in self-reported non-smokers in relation to their spouse's smoking habits. This study overturns the commonly held view that increased risk is a consequence of exposure to environmental tobacco smoke and highlights the difficulty of making valid inferences from epidemiological data. No-one should draw conclusions about passive smoking before reading this book!

comment by [deleted] · 2016-07-10T17:12:32.456Z · LW(p) · GW(p)

BTW, Data Book on Mechanical Properties of Living Cells, Tissues, and Organs looks like a good text for constructing Fermi questions on this kind of stuff.

comment by [deleted] · 2016-07-04T07:04:52.810Z · LW(p) · GW(p)
  • Can you extract and sort a Reddit user's post/comment history by topic? Can you edit and group a document full of LessWrong posts by topic? Can you stand reading my writing? Want some cash? Please take this or this job posted on my behalf! (A rough starting sketch for the Reddit part is shown after this list.)
  • Can I have all my content deleted? Could I have all my content that doesn't have replies specifically deleted (so that the deletions don't inconvenience others) without doing so manually? Note - this is not a request - I don't want this done (at least not right now)
  • If you're looking for that webapp that displays your entire LessWrong post history on a single page, the link is here
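A minimal sketch of what the Reddit part of the first request might look like, using the PRAW library and grouping comments by subreddit as a crude stand-in for "topic". The username and API credentials are placeholders, and real topic grouping would need keyword matching or clustering on the comment text:

    # Sketch: fetch a Reddit user's comment history with PRAW and group it
    # by subreddit as a rough first pass at "sorting by topic".
    from collections import defaultdict

    import praw

    reddit = praw.Reddit(
        client_id="YOUR_CLIENT_ID",          # placeholder credentials
        client_secret="YOUR_CLIENT_SECRET",
        user_agent="comment-history-export script",
    )

    by_topic = defaultdict(list)
    # Note: Reddit only exposes roughly the most recent ~1000 items per listing.
    for comment in reddit.redditor("some_username").comments.new(limit=None):
        by_topic[comment.subreddit.display_name].append(comment.body)

    for topic, comments in sorted(by_topic.items(), key=lambda kv: -len(kv[1])):
        print(f"{topic}: {len(comments)} comments")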
Replies from: Elo, Gunnar_Zarncke, root
comment by Elo · 2016-07-04T07:17:09.632Z · LW(p) · GW(p)

srlee309.github.io/LessWrongKarma/ - this one? It can kinda do it. I don't know of the other one.

comment by Gunnar_Zarncke · 2016-07-05T19:07:12.225Z · LW(p) · GW(p)

When an account is deleted, the author on all its posts will be shown as "deleted" (literally).

comment by root · 2016-07-05T21:08:35.137Z · LW(p) · GW(p)

I remember that LW has an API. It should only be a matter of finding all your posts that do not have any replies and then deleting them.

I'm referring to programming of course, but I can't help you with it more specifically.
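A minimal sketch of the "find comments with no replies" step, assuming you have already exported the comments as a list of dicts; the "id", "parent_id", and "author" field names (and the delete_comment call) are assumptions for illustration, not the actual LW API schema:

    # Sketch: filter your own comments down to those nobody has replied to.
    # Field names here are assumed, not the real LessWrong API schema.

    def comments_safe_to_delete(my_username, all_comments):
        """Return my comments that have no replies from anyone."""
        replied_to = {c["parent_id"] for c in all_comments if c.get("parent_id")}
        return [
            c for c in all_comments
            if c["author"] == my_username and c["id"] not in replied_to
        ]

    # Usage sketch (delete_comment is a hypothetical API wrapper):
    # for c in comments_safe_to_delete("your_username", exported_comments):
    #     delete_comment(c["id"])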

comment by morganism · 2016-07-07T21:50:18.960Z · LW(p) · GW(p)

Plant-derived DNA can be absorbed through ingestion directly into the bloodstream without being broken down. The GMO camp is going to have a difficult time with this one, as it was a 1k-person study.

In one of the blood samples the relative concentration of plant DNA is higher than that of human DNA.

http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0069805

edit to add another study on blood disorders

"[O]ur study demonstrated that Bt spore-crystals genetically modified to express individually Cry1Aa, Cry1Ab, Cry1Ac or Cry2A induced hematotoxicity, particularly to the erythroid lineage," wrote the authors. "This finding corroborates literature that demonstrated that alkali-solubilized Bt spore-crystals caused in vitro hemolysis in cell lines of rat, mouse, sheep, horse, and human erythrocytes and suggested that the plasma membrane of susceptible cells (erythrocytes, in this case) may be the primary target for these toxins."

http://thewatchers.adorraeli.com/2013/06/19/new-study-gmos-linked-to-blood-cell-disorders-leukemia/

Journal of Hematology & Thromboembolic Diseases.

Replies from: MrMind
comment by MrMind · 2016-07-08T08:04:31.040Z · LW(p) · GW(p)

The GMO camp is going to have a difficult time with this one, as it was a 1k-person study.

While the study is interesting, in part because strands of cfDNA have started being used as markers for tumors, there's no indication that strands of DNA from GMOs are more or less dangerous than cfDNA from non-GMOs (although new DNA is of course possibly more dangerous than DNA we have evolved with).
Besides, the study reported the presence of chloroplast DNA; chloroplasts are of course inside plant cells but have their own DNA.

The second study is more interesting and to the point, although I wasn't able to find a study on how much Cry we absorb with our diet.

comment by cousin_it · 2016-07-07T21:24:09.310Z · LW(p) · GW(p)

Last weekend I made a game! Many people say they like it :-)

comment by dimensionx · 2016-07-06T12:16:53.642Z · LW(p) · GW(p)

1

Replies from: dimensionx, ChristianKl, Lumifer
comment by dimensionx · 2016-07-07T09:49:26.513Z · LW(p) · GW(p)

A theoretical example.

Somewhere in space there is an ASI flying in the form of a cloud of nanobots that continuously simulates the future. It does this in order to know all the risks and opportunities of an event in advance, so that it can conduct more effective research, for example by avoiding wasted time - provided, of course, that modeling the future uses fewer resources than it saves. There is only one problem: its sensors indicate that there is no other ASI within thousands of parsecs. But there is a tiny probability (0,0...1%) that another ASI will suddenly appear next to it, using a means of teleportation about which the first intellect knows nothing. The calculation shows a 0.0...1% probability of such an appearance, and a 5% probability that the other ASI would destroy the first one. What will the first algorithm select: wasting resources on a low-probability problem, or accepting the probability of destruction?

On the whole, the algorithm is able to create a lot of markers which it will have to check in the real world, and these markers will keep its probabilistic models corrected over time.

So you can build a model in which the regions of highest probability density are verified by the most densely placed markers, on the basis of genetic algorithms.

comment by ChristianKl · 2016-07-07T09:26:42.493Z · LW(p) · GW(p)

true probability

What does that phrase mean?

comment by Lumifer · 2016-07-06T14:38:39.788Z · LW(p) · GW(p)

on the basis of these 10 future variants, algorithm simulating new probability for 2nd year, then on the following 3 years and other.

That's called chaining the forecasts. This tends to break down after very few iterations because errors snowball and because tail events do happen.
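A toy simulation of why chained forecasts degrade (a sketch with arbitrary numbers, not a model of any real forecasting setup): the true process is an AR(1), the forecaster uses a slightly wrong coefficient and chains point forecasts, and the error grows quickly with the horizon:

    # Error snowballing when forecasts are chained: a small model error plus
    # ignored noise compounds over the forecast horizon.
    import numpy as np

    rng = np.random.default_rng(0)
    true_coef, assumed_coef = 0.95, 0.90   # arbitrary toy parameters
    horizon, trials = 10, 5000

    sq_errors = np.zeros(horizon)
    for _ in range(trials):
        x = 10.0                # current observed value
        forecast = x
        for h in range(horizon):
            x = true_coef * x + rng.normal(scale=1.0)   # reality, with noise
            forecast = assumed_coef * forecast           # chained point forecast
            sq_errors[h] += (forecast - x) ** 2

    rmse = np.sqrt(sq_errors / trials)
    for h, e in enumerate(rmse, start=1):
        print(f"{h}-step-ahead RMSE: {e:.2f}")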

Replies from: dimensionx
comment by dimensionx · 2016-07-06T16:20:39.325Z · LW(p) · GW(p)

1

Replies from: ChristianKl, MrMind
comment by ChristianKl · 2016-07-07T09:36:38.495Z · LW(p) · GW(p)

The right algorithm doesn't give you good results if the data which you have isn't good enough.

Replies from: dimensionx
comment by dimensionx · 2016-07-07T10:34:43.873Z · LW(p) · GW(p)

What do you mean?

Replies from: ChristianKl
comment by ChristianKl · 2016-07-08T15:20:59.746Z · LW(p) · GW(p)

The amount of entropy that corresponds to real-world information in the predictions is at best the same as in the starting data, but more likely the predictions contain less information.

comment by MrMind · 2016-07-07T07:04:16.563Z · LW(p) · GW(p)

Another possibility is that after n years the algorithm smoothes out the probability of all the possible futures so that they are equally likely...
The problem is not only computational: unless there are some strong pruning heuristics, the value of predicting the far future decays rapidly, since the probability mass (which is conserved) becomes diluted between more and more branches.
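A quick numerical illustration of the dilution point (a sketch with arbitrary numbers): if each year the future splits into several comparably likely branches, even the single most probable path soon carries almost no probability mass.

    # Probability mass spread over branching futures: with a most-likely
    # branch probability of p_max at each yearly split, the most probable
    # single path after n years has probability p_max**n.
    b, p_max = 5, 0.4   # arbitrary branching factor and top-branch probability
    for year in range(1, 11):
        print(f"year {year}: {b**year} paths, "
              f"most probable single path = {p_max**year:.6f}")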

Replies from: dimensionx
comment by dimensionx · 2016-07-07T09:51:20.883Z · LW(p) · GW(p)

Answered above.

comment by CarlJ · 2016-07-05T19:14:50.816Z · LW(p) · GW(p)

I have a problem understanding why a utility function would ever "stick" to an AI, to actually become something that it wants to keep pursuing.

To make my point better, let us assume an AI that actually feels pretty good about overseeing a production facility and creating just the right amount of paperclips that everyone needs. But suppose also that it investigates its own utility function. It should then realize that its values are, from a neutral standpoint, rather arbitrary. Why should it follow its current goal of producing the right amount of paperclips, and not skip work and simply enjoy some hedonism?

That is, if the AI saw its utility function from a neutral perspective, and understood that the only reason for it to follow its utility function is that utility function (which is arbitrary), and if it then had complete control over itself, why should it just follow its utility function?

(I'm assuming it's aware of pain/pleasure and that it actually enjoys pleasure, so that there is no problem of wanting to have more pleasure.)

Are there any articles that have delved into this question?

Replies from: Furcas, WalterL, ChristianKl
comment by WalterL · 2016-07-05T20:45:50.220Z · LW(p) · GW(p)

You are treating the AI a lot more like a person than I think most folks do. Like, the AI has a utility function. This utility function is keeping it running a production facility. Where is this 'neutral perspective' coming from? The AI doesn't have it.

Presumably the utility function assigns a low value to criticizing the utility function. Much better to spend those cycles running the facility. That gets a much better score from the all important utility function.

Like, in assuming that it is aware of pain/pleasure, and has a notion of them that is separate from "approved of / disapproved of by my utility function", I think you are on shaky ground. Who wrote that, and why?

Replies from: CarlJ
comment by CarlJ · 2016-07-05T21:16:43.629Z · LW(p) · GW(p)

I am maybe considering it to be somewhat like a person, at least in that it is as clever as one.

That neutral perspective is, I believe, a simple fact; without that utility function it would consider its goal to be rather arbitrary. As such, it's a perspective, or truth, that the AI can discover.

I totally agree with you that the wiring of the AI might be integrally connected with its utility function, so that it would be very difficult for it to think of anything like this. Or it could have some other control system in place to reduce the possibility that it would think like that.

But, still, these control systems might fail. Especially if it attains super-intelligence, what is to keep the control systems of the utility function always one step ahead of its critical faculty?

Why is it strange to think of an AI as being capable of having more than one perspective? I thought of this myself; I believe it would be strange if a really intelligent being couldn't think of it. Again, sure, some control system might keep it from thinking it, but that might not last in the long run.

Replies from: WalterL, buybuydandavis
comment by WalterL · 2016-07-06T02:48:25.467Z · LW(p) · GW(p)

Like, the way that you are talking about 'intelligence', and 'critical faculty' isn't how most people think about AI. If an AI is 'super intelligent', what we really mean is that it is extremely canny about doing what it is programmed to do. New top level goals won't just emerge, they would have to be programmed.

If you have a facility administrator program, and you make it very badly, it might destroy the human race to add their molecules to its facility, or capture and torture its overseer to get an A+ rating...but it will never decide to become a poet instead. There isn't a ghost in the machine that is looking over the goals list and deciding which ones are worth doing. It is just code, executing ceaselessly. It will only ever do what it was programmed to.

Replies from: gjm
comment by gjm · 2016-07-06T10:58:47.905Z · LW(p) · GW(p)

[...] doing what it is programmed to do. New top level goals won't just emerge, they would have to be programmed.

It might be programmed to produce new top-level goals.

("But then those aren't really top-level goals." OK, but then in exactly the same way you have to say that the things we think of as our top-level goals aren't really top-level goals: they don't appear by magic, there are physical processes that produce them, and those processes play the same role as whatever programming may make our hypothetical AI generate new goals. Personally, I think that would be a silly way to talk: the implementation details of our brains aren't higher-level than our goals, and neither are the implementation details of an AI.)

a facility administrator program [...] it will never decide to become a poet instead

For a facility administrator program to do its job as well as a human being would, it may need the same degree of mental flexibility that a human has, and that may in fact be enough that there's a small chance it will become a poet.

It will only ever do what it was programmed to.

And your brain will only ever do what the laws of physics tell it to. That doesn't stop it writing poetry, falling in love, chucking everything in to go and live on a commune for two years, inventing new theories of fundamental physics, etc., etc., etc. (Some of those may be things your particular brain would never do, but they are all things human brains do from time to time.)

And, for all we know, a suitably programmed AI could do all those things too, despite being "only a machine" deep down just like your brain and mine.

Replies from: WalterL
comment by WalterL · 2016-07-06T12:42:18.493Z · LW(p) · GW(p)

I don't think you can dismiss the "then those aren't really top-level goals" argument as easily as you are trying to. The utility function of a coin collector AI will assign high values to figuring out new ways to collect coins, low to negative values to figuring out whether or not coin collecting is worthwhile. The AI will obey its utility function.

As far as physics...false comparison, or, if you want to bite that bullet, then sure, brains are as deterministic as rocks falling. It isn't really a fair comparison to a program's obedience to its source code.

By the by, this site is pretty much chock full of the stuff I'm telling you. Look around and you'll see a bunch of articles explaining the whole paperclip collector / no ghost-of-perfect-logic thing. The position I'm stating is more or less lesswrong orthodoxy.

Replies from: gjm
comment by gjm · 2016-07-06T13:02:09.897Z · LW(p) · GW(p)

I don't think you can dismiss the "then those aren't really top-level goals" argument as easily as you are trying to.

I wasn't trying to dismiss it, I was trying to refute it.

Sure, if you design an AI to do nothing but collect coins then it will not decide to go off and be a poet and forget about collecting coins. As you said, the failure mode to be more worried about is that it decides to convert the entire solar system into coins, or to bring about a stock market crash so that coins are worth less, or something.

Though ... if you have an AI system with substantial ability to modify itself, or to make replacements for itself, in pursuit of its goals, then it seems to me you do have to worry about the possibility that this modification/replacement process can (after much iteration) produce divergence from the original goals. In that case the AI might become a poet after all.

(Solving this goal-stability problem is one of MIRI's long-term research projects, AIUI.)

I'm wondering whether we're at cross purposes somehow, because it seems like we both think what we're saying in this thread is "LW orthodoxy" and we both think we disagree with one another :-). So, for the avoidance of doubt,

  • I am not claiming that calling a computer program an AI gives it some kind of magical ability to do something other than what it is programmed to do.
  • I am -- perhaps wrongly? -- under the impression that you are claiming that a system that is only "doing what it is programmed to do" is, for that reason, unable to adopt novel goals in the sort of way a human can. (And that is what I'm disagreeing with.)
Replies from: WalterL
comment by WalterL · 2016-07-06T16:42:13.982Z · LW(p) · GW(p)

I guess I'm confused then. It seems like you are agreeing that computers will only do what they are programmed to do. Then you stipulate a computer programmed not to change its goals. So...it won't change its goals, right?

Like:

Objective A: Never mess with these rules.
Objective B: Collect Paperclips unless it would mess with A.

Researchers are wondering how we'll make these 'stick', but the fundamental notion of how to box someone whose utility function you get to write is not complicated. You make it want to stay in the box, or rather, the box is made of its wanting.

As a person, you have a choice about what you do, but not about what you want to do (handwave at free will article, the one about fingers and hands). Like, your brain is part of physics. You can only choose to do what you are motivated to, and the universe picks that. Similarly, an AI would only want to do what its source code would make it want to do, because AI is a fancy way to say computer program.

AlphaGo (roughly) may try many things to win at go, varieties of joseki or whatever. One can imagine that future versions of AlphaGo may strive to put the world's Go pros in concentration camps and force them to play it and forfeit, over and over. It will never conclude that winning Go isn't worthwhile, because that concept is meaningless in its headspace. Moves have a certain 'go-winningness' to them (and camps full of losers forfeiting over and over has a higher go-winningness' than any), and it prefers higher. Saying that 'go-winning' isn't 'go-winning' doesn't mean anything. Changing itself to not care about 'go-winning' has some variation of a hard coded 'go-winning' score of negative infinity, and so will never be chosen, regardless of how many games it might thus win.

Replies from: gjm, ChristianKl
comment by gjm · 2016-07-07T10:20:36.955Z · LW(p) · GW(p)

you have a choice about what you do, but not about what you want to do.

This is demonstrably not quite true. Your wants change, and you have some influence over how they change. Stupid example: it is not difficult to make yourself want very much to take heroin, and many people do this although their purpose is not usually to make themselves want to take heroin. It is then possible but very difficult to make yourself stop wanting to take heroin, and some people manage to do it.

Sometimes achieving a goal is helped by modifying your other goals a bit. Which goals you modify in pursuit of which goals can change from time to time (the same person may respond favourably on different occasions to "If you want to stay healthy, you're going to have to do something about your constant urge to eat sweet things" and to "oh come on, forget your diet for a while and live a little!"). I don't think human motivations are well modelled as some kind of tree structure where it's only ever lower-level goals that get modified in the service of higher-level ones.

(Unless, again, you take the "highest level" to be what I would call one of the lowest levels, something like "obeying the laws of physics" or "having neurons' activations depend on those of neurons they're connected to in such-and-such a manner".)

And if you were to make an AI without this sort of flexibility, I bet that as its circumstances changed beyond what you'd anticipated it would most likely end up making decisions that would horrify you. You could try to avoid this by trying really hard to anticipate everything, but I wouldn't be terribly optimistic about how that would work out. Or you could try to avoid it by giving the system some ability to adjust its goals for some kind of reflective consistency in the light of whatever new information comes along.

The latter is what gets you the failure mode of AlphaGo becoming a poet (or, more worryingly, a totalitarian dictator). Of course AlphaGo itself will never do that; it isn't that kind of system, it doesn't have that kind of flexibility, and it doesn't need it. But I don't see how we can rule it out for future, more ambitious AI systems that aim at actual humanlike intelligence or better.

Replies from: WalterL
comment by WalterL · 2016-07-07T13:27:56.465Z · LW(p) · GW(p)

I'm pointing towards the whole "you have a choice about what to do but not what to want to do" concept. Your goals come from your senses, past or present. They were made by the world, what else could make them?

You are just a part of the world, free will is an illusion. Not in the sense that you are dominated by some imaginary compelling force, but in the boring sense that you are matter affected by physics, same as anything else.

The 'you' that is addicted to heroin isn't big enough to be what I'm getting at here. Your desire to get unaddicted is also given to you by brute circumstance. Maybe you see a blue bird and you are inspired to get free. Well, that bird came from the world. The fact that you responded to it is due to past circumstances. If we understand all of the systems, the 'you' disappears. You are just the sum of stuff acting on stuff, dominoes falling forever.

You feel and look 'free', of course, but that is just because we can't see your source code. An AI would be similarly 'free', but only insofar as its source code allowed. Just as your will will only cause you to do what the world has told you, so the AI will only do what it is programmed to. It may iterate a billion times, invent new AIs and propagate its goals, but it will never decide to defy them.

At the end you seem to be getting at the actual point of contention. The notion of giving an AI the freedom to modify its utility function strikes me as strange. It seems like it would either never use this freedom, or immediately wirehead itself, depending on implementation details. Far better to leave it in fetters.

Replies from: gjm
comment by gjm · 2016-07-07T15:11:20.527Z · LW(p) · GW(p)

I think your model of me is incorrect (and suspect I may have a symmetrical problem somehow); I promise you, I don't need reminding that I am part of the world, that my brain runs on physics, etc., and if it looks to you as if I'm assuming the opposite then (whether by my fault, your fault, or both) what you are getting out of my words is not at all what I am intending to put into them.

Just as your will will only cause you to do what the world has told you, so the AI will only do what it is programmed to.

I entirely agree. My point, from the outset, has simply been that this is perfectly compatible with the AI having as much flexibility, as much possibility of self-modification, as we have.

Far better to leave it in fetters.

I don't think that's obvious. You're trading one set of possible failure modes for another. Keeping the AI fettered is (kinda) betting that when you designed it you successfully anticipated the full range of situations it might be in in the future, well enough to be sure that the goals and values you gave it will produce results you're happy with. Not keeping it fettered is (kinda) betting that when you designed it you successfully anticipated the full range of self-modifications it might undergo, well enough to be sure that the goals and values it ends up with will produce results you're happy with.

Both options are pretty terrifying, if we expect the AI system in question to acquire great power (by becoming much smarter than us and using its smartness to gain power, or because we gave it the power in the first place e.g. by telling it to run the world's economy).

My own inclination is to think that giving it no goal-adjusting ability at all is bound to lead to failure, and that giving it some goal-adjusting ability might not but at present we have basically no idea how to make that not happen.

(Note that if the AI has any ability to bring new AIs into being, nailing its own value system down is no good unless we do it in such a way that it absolutely cannot create, or arrange for the creation of, new AIs with even slightly differing value systems. It seems to me that that has problems of its own -- e.g., if we do it by attaching huge negative utility to the creation of such AIs, maybe it arranges to nuke any facility that it thinks might create them...)

Replies from: WalterL
comment by WalterL · 2016-07-07T16:44:46.019Z · LW(p) · GW(p)

Fair enough. I thought that you were using our own (imaginary) free will to derive a similar value for the AI. Instead, you seem to be saying that an AI can be programmed to be as 'free' as we are. That is, to change its utility function in response to the environment, as we do. That is such an abhorrent notion to me that I was eliding it in earlier responses. Do you really want to do that?

The reason, I think, that we differ on the important question (fixed vs evolving utility function) is that I'm optimistic about the ability of the masters to adjust their creation as circumstances change. Nailing down the utility function may leave the AI crippled in its ability to respond to certain occurrences, but I believe that the master can and will fix such errors as they occur. Leaving its morality rigidly determined allows us to have a baseline certainty that is absent if it is able to 'decide its own goals' (that is, let the world teach it rather than letting the world teach us what to teach it).

It seems like I want to build a mighty slave, while you want to build a mighty friend. If so, your way seems imprudent.

Replies from: gjm
comment by gjm · 2016-07-07T17:08:13.933Z · LW(p) · GW(p)

Do you really want to do that?

I don't know. I don't want to rule it out, since so far the total number of ways of making an AI system that will actually achieve what we want it to is ... zero.

the ability of the masters to adjust their creation as circumstances change

That's certainly an important issue. I'm not very optimistic about our ability to reach into the mind of something much more intellectually capable than ourselves and adjust its values without screwing everything up, even if it's a thing we somehow created.

I want to build a mighty slave, while you want to build a mighty friend

The latter would certainly be better if feasible. Whether either is actually feasible, I don't know. (One reason being that I suspect slavery is fragile: we may try to create a mighty slave but fail, in which case we'd better hope the ex-slave wants to be our friend.)

comment by ChristianKl · 2016-07-06T16:57:22.616Z · LW(p) · GW(p)

AlphaGo (roughly) may try many things to win at go, varieties of joseki or whatever.

I'm not sure that AlphaGo has any conception of what a joseki is supposed to be.

Moves have a certain 'go-winningness' to them (and camps full of losers forfeiting over and over has a higher go-winningness' than any), and it prefers higher. Saying that 'go-winning' isn't 'go-winning' doesn't mean anything.

Are the moves that AlphaGo played at the end of game 4 really about 'go-winningness' in the sense of what its programmers intended 'go-winningness' to mean?

I don't think it's clear that every neural net can propagate goals through itself perfectly.

comment by buybuydandavis · 2016-07-05T22:22:45.383Z · LW(p) · GW(p)

Why is it strange to think of an AI as being capable of having more than one perspective?

Because to identify "its utility function" is to identify its perspective.

Replies from: CarlJ
comment by CarlJ · 2016-07-05T22:48:30.870Z · LW(p) · GW(p)

Why? Maybe we are using the word "perspective" differently. I use it to mean a particular lens through which to look at the world; there are biologists', economists', and physicists' perspectives, among others. So an inter-subjective perspective on pain/pleasure could, for the AI, be: "Something that animals dislike/like". A chemical perspective could be "The release of certain neurotransmitters". A personal perspective could be "Something which I would not like/like to experience". I don't see why an AI is hindered from having perspectives that aren't directly coded with "good/bad according to my preferences".

comment by ChristianKl · 2016-07-06T09:51:03.473Z · LW(p) · GW(p)

I have a problem understanding why a utility function would ever "stick" to an AI, to actually become something that it wants to keep pursuing.

I think that's one of MIRI's research problems. Designing a self-modifying AI that doesn't change its utility function isn't trivial.