Sayan's Braindump

post by sayan · 2019-09-04T08:51:56.549Z · LW · GW · 29 comments

Comments sorted by top scores.

comment by sayan · 2019-09-04T09:01:55.031Z · LW(p) · GW(p)

Extremely low-probability events are great as intuition pumps, but terrible as inputs to real-world decision-making.

comment by sayan · 2019-09-04T08:55:24.374Z · LW(p) · GW(p)

Would CIRL with many human agents realistically model our world?

What does AI alignment mean with respect to many humans with different goals? Are we implicitly assuming (in all our current agendas) that the final AGI will be corrigible to a single human instructor?

How do we synthesize the goals of so many human agents into one utility function? Are we assuming that solving alignment with one supervisor is easier? Wouldn't having many supervisors restrict the space meaningfully?

comment by sayan · 2019-09-04T09:02:54.300Z · LW(p) · GW(p)

Is there a good bijection between {specification gaming, wireheading} and the different types of Goodhart's law?

Replies from: sayan
comment by sayan · 2019-09-04T09:00:10.393Z · LW(p) · GW(p)

Speculation: people never use pro-con lists to actually make decisions; rather, they use them as rationalizations to convince others.

comment by sayan · 2019-09-04T08:59:05.731Z · LW(p) · GW(p)

The internet might be lacking multiple kinds of curation and organization tools. How can we improve?

comment by sayan · 2019-11-23T10:43:42.671Z · LW(p) · GW(p)

Are Dharma traditions that posit 'innate moral perfection of everyone by default' reasoning from the just world fallacy?

Replies from: mr-hire, gworley
comment by Matt Goldenberg (mr-hire) · 2019-11-23T18:40:17.856Z · LW(p) · GW(p)

I wonder if there's a game theoretic and evolutionary argument that could be made here about cooperation being the sane default in the absence of other priors.

comment by Gordon Seidoh Worley (gworley) · 2019-11-24T00:20:11.932Z · LW(p) · GW(p)

Which Dharma traditions in particular do you have in mind? I can't think of one I would describe as saying everyone has innate "moral" perfection, unless you twist around the word "moral" so much that its use is confusing at best.

comment by sayan · 2019-11-23T10:41:42.312Z · LW(p) · GW(p)

Can we have a market with qualitatively different (un-interconvertible) forms of money?

Replies from: strangepoop
comment by a gently pricked vein (strangepoop) · 2019-11-24T07:39:50.491Z · LW(p) · GW(p)

I'm interested in this. The problem is that if people consider the value provided by the different currencies at all fungible, side markets will pop up that allow their exchange.

An idea I haven't thought about enough (mainly because I lack expertise) is to mark a token as Contaminated if its history indicates that it has passed through "illegal" channels, i.e., has benefited someone in an exchange not considered a true exchange of value, so that purists can refuse to accept those tokens. Purist communities, if large, would give such non-contaminated tokens stability.
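The Contaminated-token mechanism could be sketched as provenance tracking: each token carries its transfer history, and a single pass through an illegal channel taints it permanently. This is a toy illustration only; the class and channel names are hypothetical, not an existing protocol.

```python
# Toy sketch of the "Contaminated token" idea (all names hypothetical).
ILLEGAL_CHANNELS = {"side_market"}  # exchanges a purist community rejects


class Token:
    def __init__(self):
        self.history = []         # channels the token has passed through
        self.contaminated = False

    def transfer(self, channel):
        self.history.append(channel)
        if channel in ILLEGAL_CHANNELS:
            self.contaminated = True  # taint is permanent


def purist_accepts(token):
    """A purist refuses any token that ever touched an illegal channel."""
    return not token.contaminated


t = Token()
t.transfer("wages")
assert purist_accepts(t)
t.transfer("side_market")       # token is now tainted forever
assert not purist_accepts(t)
```

One design question this surfaces: taint here is all-or-nothing, whereas a real scheme might want it to decay or to depend on how large the purist community's consensus blacklist is.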

Maybe a better question to ask is "do we have utility functions that are partial orders and thus would benefit from many isolated markets?", because if so, you wouldn't have to worry about enforcing anything, many different currencies will automatically come into existence and be stable.

Of course, more generally, you wouldn't have complete isolation, but rather different valuations of goods in different currencies, without "true" fungibility. I think it is quite possible that our preference orderings are in fact partial, and that the current one-currency valuation of everything could be improved.
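The partial-ordering point can be made concrete: if bundles are valued in two non-interconvertible currencies and one bundle is preferred only when it is at least as good in both (Pareto dominance), some pairs are simply incomparable, and no single exchange rate collapses them into a total order. A minimal sketch, with illustrative currency names:

```python
# Toy partial preference ordering over bundles valued in two
# non-interconvertible currencies (names are illustrative).
def prefers(a, b):
    """a = (social_credit, cash); True iff a weakly dominates b in both."""
    return a[0] >= b[0] and a[1] >= b[1]


x, y = (3, 1), (1, 3)
# Incomparable: neither bundle dominates the other, so a single
# one-currency valuation would have to impose an arbitrary tie-break.
assert not prefers(x, y) and not prefers(y, x)
# Dominance in both currencies is still decisive.
assert prefers((4, 4), x)
```

Under this reading, "many isolated markets" just means trading separately along each dimension where dominance holds, rather than forcing every pair of bundles to be ranked.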

comment by sayan · 2019-09-04T09:03:58.701Z · LW(p) · GW(p)

It is difficult to hear the differences in, and articulate the pronunciation of, an accent that is not one's native one because of the brain's predictive processing: our brains constantly assimilate incoming signals to the closest known ones.

comment by sayan · 2019-11-23T10:40:36.899Z · LW(p) · GW(p)

How would signalling/countersignalling work in a post-scarcity economy?

Replies from: DonyChristie
comment by Pee Doom (DonyChristie) · 2019-11-27T06:02:10.082Z · LW(p) · GW(p)

Can you define a post-scarcity economy in terms of what you anticipate the world to look like?

comment by sayan · 2019-11-23T10:38:37.199Z · LW(p) · GW(p)

What are some effective ways to reset the hedonic baseline?

comment by sayan · 2019-09-18T09:17:23.574Z · LW(p) · GW(p)

What gadgets have improved your productivity?

For example, I started using a stylus a few days ago and realized it can be a great tool for a lot of things!

Replies from: jimrandomh, FactorialCode
comment by jimrandomh · 2019-09-19T01:23:15.432Z · LW(p) · GW(p)
  • Multiple large monitors, for programming.
  • Waterproof paper in the shower, for collecting thoughts and making a morning todo list.
  • Email filters and Priority Inbox, to prevent spurious interruptions while keeping enough trust that urgent things will generate notifications, so that I don't feel compelled to check too often.
  • USB batteries for recharging phones: one to carry around, and one at each charging spot for quick-swapping.
comment by FactorialCode · 2019-09-19T03:00:58.642Z · LW(p) · GW(p)

I find having a skateboard is a compact way to shave minutes off of the sections of my commute where I would otherwise have to walk. It turns a 15 minute walk to the bus stop into a 5 minute ride, which adds up in the long run.

comment by sayan · 2019-09-04T09:07:28.573Z · LW(p) · GW(p)

If there is no self, what are we going to upload to the cloud?

Replies from: Viliam
comment by Viliam · 2019-09-04T21:21:47.887Z · LW(p) · GW(p)

The brain, I guess.

comment by sayan · 2019-09-04T08:58:03.213Z · LW(p) · GW(p)

Pathological examples of math are analogous to adversarial examples in ML. Or are they?

comment by sayan · 2019-09-04T08:57:31.481Z · LW(p) · GW(p)

What are the possible failure modes of AI-aligned humans? What are the possible misalignment scenarios? I can think of malevolent uses of AI tech to enforce hegemony, among other things. What else?

comment by sayan · 2019-09-04T08:56:52.318Z · LW(p) · GW(p)

What's a good way to force oneself outside one's comfort zone, into situations where most expectations and intuitions routinely fail?

This might become useful to build antifragility about expectation management.

Quick example - living without money in a foreign nation.

Is it possible to design a personal or group retreat for this?

Replies from: Viliam
comment by Viliam · 2019-09-04T22:19:26.354Z · LW(p) · GW(p)

What kills you doesn't make you stronger. You want to get out of your comfort zone, not out of your survival zone.

Replies from: sayan
comment by sayan · 2019-09-05T14:50:38.043Z · LW(p) · GW(p)

Okay, natural catastrophes might not be a good example. (Edited)

Replies from: Pattern
comment by Pattern · 2019-09-05T18:52:04.421Z · LW(p) · GW(p)

Helping out with disaster/emergency relief efforts might get people out of their comfort zone.

Replies from: Viliam
comment by Viliam · 2019-09-05T21:32:12.096Z · LW(p) · GW(p)

Generally, if you want to go outside of your comfort zone, you might as well do something useful (either for yourself, or for others).

For example, if you try "rejection therapy" (approaching random people, getting rejected, and thus teaching your System 1 that being rejected doesn't actually hurt you), you could approach people with something specific, like giving them fliers, or trying to sell something. You may make some money as a side effect, and in addition to expanding your comfort zone also get some potentially useful job experience. If you travel across difficult terrain, you could also transport some cargo and get paid for it. If you volunteer for an organization, you will get some advice and support (the goal is to do something unusual and uncomfortable, not to optimize for failure), and you will get interesting contacts (your LinkedIn profile will be like: "endorsed for skills: C++, object-oriented development, brain surgery, fire extinguishing, assassination, cooking for homeless").

You could start by obtaining a list of non-governmental organizations in your neighborhood, calling them, and asking whether they need a temporary volunteer. (Depending on your current comfort zone, this first step may already be outside of it.)

comment by sayan · 2019-09-04T08:53:21.081Z · LW(p) · GW(p)

Where is the paradigm for Effective Activism? At first thought, it doesn't even seem difficult to do better than the status quo.

Replies from: Viliam
comment by Viliam · 2019-09-04T22:30:39.447Z · LW(p) · GW(p)

How specifically would you do better than the status quo?

I could easily dismiss some charities for causes I don't care about, or where I think they do more harm than good. Now there are still many charities left whose cause I approve of, and that seems to me like they could help. How do I choose among these? They publish some reports, but are the numbers there the important ones, or just the ones that are easiest to calculate?

For example, I don't care if your "administrative overhead" is 40%, if that allows you to spend the remaining 60% ten times more effectively than a comparable charity with smaller overhead. Unfortunately, the administrative overhead will most likely be included in the report, with two decimal places; but the achieved results will be either something nebulous (e.g. "we make the world a better place" or "we help kids become smarter"), or they will describe the costs, not the outcomes (e.g. "we spent 10 million to save the rainforest" or "we spent 5 million to teach kids the importance of critical thinking").

Now, I don't have time and skills to become a full-time charity researcher. So if I want to donate well, I need someone who does the research for me, and whose integrity and sanity I can trust.