What posts do you want written?

post by Mark Xu (mark-xu) · 2020-10-19T03:00:26.341Z · LW · GW · No comments

This is a question post.


I have many posts that I want written but do not have time to write, and I suspect there are other people who feel similarly. This post [LW · GW] on the Solomonoff prior was one example, until I got fed up and just wrote it.

Please write one post idea per answer so they can be voted on separately.

Answers

answer by Mark Xu · 2020-10-19T03:03:08.955Z · LW(p) · GW(p)

A review of Thinking, Fast and Slow that focuses on whether or not various parts of the book replicated.

comment by aa.oswald · 2020-10-21T13:47:11.458Z · LW(p) · GW(p)

Honestly, I would like to see this for pretty much any pop-science psychology book that's trending in the rationality sphere.

answer by Mark Xu · 2020-10-19T03:02:11.030Z · LW(p) · GW(p)

A solid, minimal-assumption description of value handshakes. The best description I'm aware of is in this SSC post, which I think is slightly sad:

Values handshakes are a proposed form of trade between superintelligences. Suppose that humans make an AI which wants to convert the universe into paperclips. And suppose that aliens in the Andromeda Galaxy make an AI which wants to convert the universe into thumbtacks.

When they meet in the middle, they might be tempted to fight for the fate of the galaxy. But this has many disadvantages. First, there’s the usual risk of losing and being wiped out completely. Second, there’s the usual deadweight loss of war, devoting resources to military buildup instead of paperclip production or whatever. Third, there’s the risk of a Pyrrhic victory that leaves you weakened and easy prey for some third party. Fourth, nobody knows what kind of scorched-earth strategy a losing superintelligence might be able to use to thwart its conqueror, but it could potentially be really bad – eg initiating vacuum collapse and destroying the universe. Also, since both parties would have superintelligent prediction abilities, they might both know who would win the war and how before actually fighting. This would make the fighting redundant and kind of stupid.

Although they would have the usual peace treaty options, like giving half the universe to each of them, superintelligences that trusted each other would have an additional, more attractive option. They could merge into a superintelligence that shared the values of both parent intelligences in proportion to their strength (or chance of military victory, or whatever). So if there’s a 60% chance our AI would win, and a 40% chance their AI would win, and both AIs know and agree on these odds, they might both rewrite their own programming with that of a previously-agreed-upon child superintelligence trying to convert the universe to paperclips and thumbtacks in a 60-40 mix.

This has a lot of advantages over the half-the-universe-each treaty proposal. For one thing, if some resources were better for making paperclips, and others for making thumbtacks, both AIs could use all their resources maximally efficiently without having to trade. And if they were ever threatened by a third party, they would be able to present a completely unified front.
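The "use all their resources maximally efficiently" point can be made concrete with a toy allocation model. This is only a sketch: the `allocate` function, the 60/40 weights as utility weights, and the yield numbers are my own illustration, not from the quoted post.

```python
# Toy model of the 60-40 merge described above.  Each resource has a
# yield for paperclips and a yield for thumbtacks; the merged agent
# maximizes 0.6 * paperclips + 0.4 * thumbtacks, so it sends each
# resource to whichever product gives the higher weighted yield,
# rather than splitting resources 60-40 regardless of fit.
def allocate(resources, w_clips=0.6, w_tacks=0.4):
    clips = tacks = 0.0
    for clip_yield, tack_yield in resources:
        if w_clips * clip_yield >= w_tacks * tack_yield:
            clips += clip_yield
        else:
            tacks += tack_yield
    return clips, tacks

# A metal-rich resource (better for clips) and a plastic-rich one
# (better for tacks) each go where they are most productive:
clips, tacks = allocate([(10, 2), (3, 12)])  # -> (10.0, 12.0)
```

Under a half-the-universe-each treaty, by contrast, each side would waste whichever resources are a poor fit for its own product, which is why merging dominates splitting.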

comment by Yoav Ravid · 2021-04-06T16:08:53.736Z · LW(p) · GW(p)

Yeah, I also just looked for an explanation of it and your comment with Scott's quote was the best I found. I made a tag for it, Value handshakes [? · GW], with this quote to start it out, so others can expand on it.

answer by johnswentworth · 2020-10-19T16:27:31.660Z · LW(p) · GW(p)

A review of The Design of Everyday Things, ideally with some discussion of how the ideas there intersect with rationality-adjacent topics.

answer by Mark Xu · 2020-10-19T03:07:18.720Z · LW(p) · GW(p)

A minimal-assumption description of Updateless Decision Theory. This wiki page describes the basic concept, but doesn't include motivation, examples or intuition.

answer by Mark Xu · 2020-10-19T03:04:23.408Z · LW(p) · GW(p)

A thorough description of how to do pair debugging, a CFAR exercise partially described here [LW · GW].

comment by Kaj_Sotala · 2020-10-19T11:03:41.279Z · LW(p) · GW(p)

In response to this request, I wrote something here [LW · GW].

comment by Neel Nanda (neel-nanda-1) · 2020-10-20T05:41:11.344Z · LW(p) · GW(p)

I've written up my thoughts on doing (informal) pair debugging from the debugger perspective here [LW · GW].

answer by Daniel Kokotajlo · 2020-10-20T07:43:33.638Z · LW(p) · GW(p)

Against GDP as a metric for timelines and takeoff speeds: I think that world GDP growth increasing significantly from its current rate is something which could happen years before, OR YEARS AFTER, transformative AI. Or anything in between. I think it is a poor proxy for what we care about and that people currently go astray on several occasions when they rely on it too heavily. I think this goes for timelines, but also for takeoff speeds: GDP growth doubling in one year before it doubles in four years is a bad proxy for fast vs. slow takeoff.

answer by Daniel Kokotajlo · 2020-10-20T07:33:14.389Z · LW(p) · GW(p)

A response and critique of Ajeya Cotra's awesome timelines report.

answer by Mark Xu · 2020-10-21T17:24:07.047Z · LW(p) · GW(p)

An intuitive explanation of the Kelly criterion, with a bunch of worked examples. Zvi's post [LW · GW] is good but lacks worked examples and justification for heuristics. Jacobian [LW · GW] advises us to Kelly bet on everything, but I don't understand what a "Kelly bet" is in all but the simplest financial scenarios.
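For what it's worth, the simplest case does have a clean closed form: for a bet that wins with probability p and pays net odds b per unit staked, the Kelly fraction is f* = p − (1 − p)/b. A minimal sketch (the `kelly_fraction` helper is my own naming, not from either linked post):

```python
def kelly_fraction(p_win, net_odds):
    """Fraction of bankroll to stake on a bet that wins with
    probability p_win and pays net_odds per unit staked.
    Kelly: f* = p - (1 - p) / b.
    A result <= 0 means the bet has no edge: stake nothing."""
    return p_win - (1.0 - p_win) / net_odds

# Worked example: a coin lands heads 60% of the time and the bet
# pays even money (net odds of 1):
f = kelly_fraction(0.6, 1.0)  # 0.6 - 0.4/1 = 0.2 -> stake 20%
```

The hard part the answer asks about is precisely what replaces p and b outside this simple repeated-bet setting, which the formula alone does not address.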

answer by James_Miller · 2020-10-19T19:27:23.342Z · LW(p) · GW(p)

Metformin as a rationalist win.  For several years I have been taking 2 grams of Metformin a day for anti-aging reasons.  There is a vast literature on Metformin, and as a mere economist I'm unqualified to summarize it.  But my (skin-in-the-game) guess is that all adults over 40 (and perhaps simply all adults) should be taking Metformin, and I would love it if someone with a bio background wrote up a Metformin literature review understandable to those of us who understand statistics but not much about medicine.  The reason Metformin might be universally beneficial and yet not generally taken is that no one holds a patent on Metformin (it's cheap), in the US you need a prescription to get it, and the medical system doesn't consider aging to be a disease.

comment by Piotr Orszulak (piotr-orszulak) · 2020-10-21T18:28:50.588Z · LW(p) · GW(p)

Hello James.

I have not heard about anti-aging effects, but apart from the standard indications, I know it helps people lose weight and to an extent prevents obesity. In an oblique manner that may also be a way to de-age yourself, but... How do you know about the anti-aging effect, and what does it really mean? It doesn't reverse time, obviously.

I am sorry to doubt; it just seems to be an extraordinary claim.

Best regards, Piotr, anaesthetist intensivist.

Replies from: James_Miller
comment by James_Miller · 2020-10-22T00:54:30.026Z · LW(p) · GW(p)

Much of the harm of aging is the increased likelihood of getting many diseases, such as cancer, heart disease, Alzheimer's, and strokes, as you age.  From my limited understanding, Metformin reduces the age-adjusted chance of getting many of these diseases, and thus it's reasonable, I believe, to say that Metformin has anti-aging effects.

Replies from: piotr-orszulak
comment by Piotr Orszulak (piotr-orszulak) · 2020-10-22T05:42:38.981Z · LW(p) · GW(p)

Oh, OK, I get it: it slows down aging. I hoped that you might know of some evidence that it reverses degeneration. In retrospect, I can see that you wrote anti- and not de-aging, so the misunderstanding is entirely my fault. Thanks for your clarification 😊

comment by romeostevensit · 2020-10-20T18:25:35.747Z · LW(p) · GW(p)

Berberine supposedly has many of the same effects, potentially fewer side effects, and is available over the counter.

comment by niplav · 2020-10-19T20:13:23.970Z · LW(p) · GW(p)

Have you by any chance seen this? (It's not published yet, but I read it a year ago and thought it was quite good, as far as I can judge such things.)

Replies from: James_Miller
comment by James_Miller · 2020-10-19T20:33:15.305Z · LW(p) · GW(p)

Thanks!

answer by Daniel Kokotajlo · 2020-10-20T07:47:34.211Z · LW(p) · GW(p)

Persuasion tools: What they are, how they might get really good prior to TAI, how that might change the world in important ways (e.g. it's an x-risk factor and possibly a point of no return) and what we can do about it now.

answer by Mary Chernyshenko · 2020-10-19T14:48:00.793Z · LW(p) · GW(p)

A review of the history of translations of Aesop's and other similar fables with the emphasis on what was added, subtracted or equivocated by the translators. Such as, did the original Fox tell himself the grapes were sour, or did he announce it to the world at large?

answer by Daniel Kokotajlo · 2020-10-20T07:40:34.840Z · LW(p) · GW(p)

Ships as precedent for AI: Lots of the arguments against fast takeoff, against AGI, against discontinuous takeoff, against local takeoff and decisive strategic advantage, are somewhat mirrored by arguments that could have been made in the middle ages about ships. I think that history turned out to mostly support the fast/AGI/discontinuous/local/DSA side of those arguments.

answer by Mark Xu · 2020-10-19T03:12:54.637Z · LW(p) · GW(p)

I want more people to write down their models for various things. For example, a model I have of the economy is that it's a bunch of boxes with inputs and outputs that form a sparsely directed graph. The length of the shortest cycle controls things like economic growth and AI takeoff speeds.
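As an illustration of how such a model could be made explicit enough to compute with, here is a sketch of the economy-as-graph picture. The industry names, edges, and the BFS helper are all my own illustration; the only claim taken from the answer is that the shortest cycle length is the quantity of interest.

```python
from collections import deque

# Toy version of the model above: the economy as a sparse directed
# graph of boxes (industries), where an edge A -> B means "output of
# A is an input to B".  The shortest cycle length is one proxy for
# how quickly improvements can feed back into themselves.
def shortest_cycle(graph):
    best = None
    for start in graph:
        # BFS from start; the first path leading back to start is
        # the shortest cycle through that node.
        dist = {start: 0}
        queue = deque([start])
        while queue:
            node = queue.popleft()
            for nxt in graph.get(node, []):
                if nxt == start:
                    length = dist[node] + 1
                    best = length if best is None else min(best, length)
                elif nxt not in dist:
                    dist[nxt] = dist[node] + 1
                    queue.append(nxt)
    return best  # None if the graph is acyclic

econ = {"chips": ["robots"], "robots": ["mining"], "mining": ["chips"]}
shortest_cycle(econ)  # -> 3
```

On this picture, anything that shortens the loop (e.g. AI that designs its own chips directly) would speed up growth, which is the kind of consequence writing the model down makes easy to discuss.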

Another example is that people have working memory in both their brains and their bodies. When their brain-working-memory is full, information gets stored in their bodies. Techniques like focusing [? · GW] are often useful for extracting information stored in body-working-memory.

comment by Mary Chernyshenko (mary-chernyshenko) · 2020-10-19T14:39:01.701Z · LW(p) · GW(p)

(Body memory is great. When I worked in a shop and could not find an item by the end of the day, because my eyes refused to scan the whole depth of the shelves, I was told to close my eyes and "just take the thing". The arm remembers.)

comment by Piotr Orszulak (piotr-orszulak) · 2020-10-19T18:27:48.022Z · LW(p) · GW(p)

I find your post confusing. Do you believe in body-mind dualism, or was it just a manner of speaking? Maybe you mean that "body memory" is an intuitive subconscious process in the brain?

Replies from: mark-xu
comment by Mark Xu (mark-xu) · 2020-10-19T19:37:40.657Z · LW(p) · GW(p)

Maybe you mean that "body memory" is an intuitive subconscious process in the brain?

Yes, but I like thinking of it as "body memory" because it is easier to conceptualize.

Replies from: piotr-orszulak
comment by Piotr Orszulak (piotr-orszulak) · 2020-10-21T18:30:11.955Z · LW(p) · GW(p)

Ok, thanks for the clarification.

comment by FCCC · 2020-10-21T12:39:51.193Z · LW(p) · GW(p)

Here's a model I made recently about when a goal is "good" [? · GW].

answer by Daniel Kokotajlo · 2020-10-20T07:44:56.443Z · LW(p) · GW(p)

I promised a followup to my Soft Takeoff can Still Lead to DSA [LW · GW] post. Well, maybe it's about time I delivered...

answer by Zian · 2020-10-20T05:05:28.570Z · LW(p) · GW(p)

I've read that Less Wrong attracts people with mental health concerns, so articles about mental-health-related information may be useful.

answer by Piotr Orszulak · 2020-10-19T18:19:46.084Z · LW(p) · GW(p)

Hello. I would like to read about the fine line between the sunk cost fallacy and remuneration delay in a long-term investment, whether in a relationship or in changing workplaces, and ways to discern the difference. Thank you.

comment by Pattern · 2020-10-19T20:15:30.047Z · LW(p) · GW(p)

What's remuneration delay?

Replies from: piotr-orszulak
comment by Piotr Orszulak (piotr-orszulak) · 2020-10-21T18:09:57.329Z · LW(p) · GW(p)

Sorry, English is not my first language. What I mean by remuneration delay is the waiting period between, e.g., sowing and harvesting crops. So in my original question I imply that I have difficulty discerning whether the crops will show up at all.

answer by Daniel Kokotajlo · 2020-10-20T07:37:21.529Z · LW(p) · GW(p)

Explanation of how what we really care about when forecasting timelines is not the point when the last human is killed, nor the point where AGI is created, but the point where it's too late for us to prevent the future from going wrong. And, importantly, this point could come before AGI, or even before TAI. It certainly can come well before the world economy is growing at 10%+ per year. (I give some examples of how this might happen)

answer by Daniel Kokotajlo · 2020-10-20T07:34:32.479Z · LW(p) · GW(p)

Argument that AIs are reasonably likely to be irrational, tribal, polarized, etc. as much or more than humans are. More broadly an investigation of the reasons for and against that claim.

answer by ChristianKl · 2020-10-19T23:55:56.305Z · LW(p) · GW(p)

I learned Belief Reporting from Leverage Research at a two-hour workshop someone gave at the European Community Weekend. I think it would be great if someone wrote a post on the technique.

answer by aa.oswald · 2020-10-21T13:52:15.840Z · LW(p) · GW(p)

I would like to see lukeprog's happiness sequences [? · GW] updated for 2020.

answer by niplav · 2020-11-03T07:11:15.179Z · LW(p) · GW(p)

An overview of past attempts at wireheading humans/animals, what the effects were & how we could do better.

answer by niplav · 2020-11-03T07:10:25.519Z · LW(p) · GW(p)

A formal statement of the problem of Pascal's mugging (or a discussion of several ways to formally state it), and a summary/review of different people's approaches to solving/dissolving it.

answer by Mary Chernyshenko · 2020-10-25T15:41:47.724Z · LW(p) · GW(p)

An overview of the common disagreements between landscape designers, interior designers etc. and their clients. (A friend of mine had to explain that she liked her windows shaded by the tree, it made the house cooler during summer.) As in, what people tend to miscommunicate, overrule, not order, repair etc.
