Comment by pattern on Extraordinary ethics require extraordinary arguments · 2019-02-19T23:22:07.489Z · score: 1 (1 votes) · LW · GW

A few ideas:

A. Utility maximization - suppose you donate a bunch of money to an effective charity and they save 100 lives, but in the course of delivering, say, medical supplies somewhere (which is how the 100 lives were saved), one of the volunteers falls out of a boat and drowns:

1) It couldn't have been predicted in advance.

2) The gains outweigh the losses.

So the real question shouldn't be "Will something bad happen?" but "Is it remotely plausible that the bad will outweigh the good?" Yes, doing homework might, once in a million person-years, result in some terrible tragedy - and if you're really worried about paper, go electronic. The scenario I outlined sounds kind of ridiculous. Most actions don't, by themselves, have big impacts or destroy the world. If you determine the course of action with the highest utility, it might not be as high as you think, but it's still the best thing you can do. (Using time effectively might involve lots of planning, but that might be less important, currently, than doing what you know you have to do right now.)
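For concreteness, a toy expected-value sketch (with made-up numbers; the variable names are just for illustration):

```python
# A toy expected-value sketch with made-up numbers: a large expected benefit
# swamps a once-in-a-million chance of a tragic loss.
lives_saved = 100       # expected benefit of the donation
p_tragedy = 1e-6        # "once in a million" chance of a fatal accident
loss_if_tragedy = 1     # one volunteer's life

expected_value = lives_saved - p_tragedy * loss_if_tragedy
print(expected_value)   # 99.999999 -> the bad does not remotely outweigh the good
```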

B. Is the "Skeptic Demon" helping? 'Cause if it's never done anything useful, and if it's going on about something it's gone on about before, it probably doesn't have anything new to say. (Writing stuff down might help. It seems like it'd be more useful to apply to inaction, if that makes sense?)

Comment by pattern on Implications of GPT-2 · 2019-02-19T23:06:45.373Z · score: 1 (1 votes) · LW · GW

That's not how it was trained?

Comment by pattern on Epistemic Tenure · 2019-02-19T22:48:05.426Z · score: 11 (3 votes) · LW · GW

I enjoyed the MtG post, by the way. It was brief and well illustrated. I haven't seen other posts before that covered that many AI topics at that level. (That is, organizing approaches, as opposed to just focusing on one thing and all its details.)

Comment by pattern on How the MtG Color Wheel Explains AI Safety · 2019-02-17T02:48:25.695Z · score: 1 (1 votes) · LW · GW

The weirdest part about "an optimization demon" is "this is our measure of good (outcomes), but don't push too hard towards it or you'll get something bad" - when intuitively, something that is optimizing at our expense would have a harder time meeting stricter constraints.

The reasoning behind it is that a) we, and b) everything we call brains, are the result of "pushing too hard". It's not immediately clear how a "semi-optimization demon" would come to be, or what that would mean.

It's also not clear when and how you'd run into the issue, aside from running a genetic algorithm for ages.

Comment by Pattern on [deleted post] 2019-02-17T02:06:31.184Z

The title and the question seem fairly different.

Comment by pattern on Cooperation is for Winners · 2019-02-15T17:32:05.776Z · score: 1 (1 votes) · LW · GW
Only 20% of LessWrong participants active enough to fill out a survey have ever written a post.

If you accidentally post a draft, which is very easy to do, you lose a lot of karma.

Comment by pattern on How does OpenAI's language model affect our AI timeline estimates? · 2019-02-15T17:26:37.106Z · score: 0 (4 votes) · LW · GW

OpenAI's "safety" move (not releasing the model) reduces the scrutiny it can receive, which makes its impact on forecasts conditional on how good you think it is - without having seen it.

Comment by pattern on Arguments for moral indefinability · 2019-02-13T03:04:49.299Z · score: 5 (4 votes) · LW · GW
there's a "non-identity problem" type thing about whether we can harm future agents by setting up the memetic environment such that they'll end up having less easily satisfiable goals, compared to an alternative where they'd find themselves in larger agreement and therefore with more easily satisfiable goals

I hadn't heard of that before, I'm glad you mentioned it. Your comment (as a whole) was both interesting/insightful/etc. and long, and I'd be interested in reading any future posts you make.

Comment by pattern on Arguments for moral indefinability · 2019-02-13T02:58:01.055Z · score: 2 (2 votes) · LW · GW

I think the least repugnant aspect of a perfect moral theory* to sacrifice might be simplicity, the way you mean it. (Though intuitively, a lot of conditions would have to be met for that to seem, to me, like a reasonable move to make.)

I'm not clear on how "moral undefinability" would look different from "defining morality is hard".

*General moral theory.

Comment by pattern on Open Thread February 2019 · 2019-02-12T21:55:43.651Z · score: 2 (2 votes) · LW · GW

Linkpost.

Comment by pattern on Fighting the allure of depressive realism · 2019-02-12T02:04:18.548Z · score: 1 (1 votes) · LW · GW

Good luck!

(Btw, the title a) made me think of the claim that gratitude journaling works and b) made me wonder whether CBT is associated with that practice.)

Comment by pattern on How important is it that LW has an unlimited supply of karma? · 2019-02-12T01:50:23.028Z · score: 2 (2 votes) · LW · GW
There is no limit to how many different posts and comments one can do this to. In this sense there is an unlimited supply of karma to be handed out.

So infinite posts * 1 sock puppet = infinite karma.

One cannot get high karma by producing a small amount of content that a small number of users likes a lot.

Aside from the fact that both posts and comments can be upvoted, there's double upvoting (though I'm not sure how its strength is calculated from one's karma), so:

One can get high karma from a small amount of content that a small number of sufficiently high-karma users double-upvote. (Though sequence length may be rewarded more than brevity, and while there may be a loose correlation - a longer sequence requires more time - we might suppose there is a correlation going the other way: more time is required to make what would otherwise be longer posts shorter, and the same may be said of sequences.)

Comment by pattern on AI Alignment is Alchemy. · 2019-02-07T01:19:26.168Z · score: 1 (1 votes) · LW · GW

How was this posted in 2019, with comments from 2018?

Comment by pattern on Thoughts on Ben Garfinkel's "How sure are we about this AI stuff?" · 2019-02-06T23:52:38.529Z · score: 3 (2 votes) · LW · GW

OTTOMH - Off the top of my head

Comment by pattern on Should we bait criminals using clones ? · 2019-02-06T19:26:31.652Z · score: 1 (1 votes) · LW · GW

I think there's a lot more to be gained from using the necessary technology and resources in other ways. Sure, you could try to prevent people from robbing banks by nuking banks if someone tries to rob them - but it's a serious waste of resources.

Comment by pattern on Is the World Getting Better? A brief summary of recent debate · 2019-02-06T19:02:47.745Z · score: 3 (2 votes) · LW · GW

A more important question is "What is the rate of progress?" How fast is the world getting better? (With the answer being a negative number if it is getting worse.)

Comment by pattern on How to notice being mind-hacked · 2019-02-05T19:10:22.694Z · score: 1 (1 votes) · LW · GW

If identity shifts are good, can an identity shift to an unchanging state be bad?

Comment by pattern on How to notice being mind-hacked · 2019-02-05T19:07:41.905Z · score: 1 (1 votes) · LW · GW

Suppose we are open to ideas for a reason.* Then we would need a greater reason still, to not be so.

*This practice is associated with an idea about ideas, and might be applied only to lesser ideas. (Or apply with a degree inversely proportional to idea level. For instance, to prove that all actions are equally useful requires much more evidence than to prove that one action is more than/less than/equal in value to another.)

Comment by pattern on How to stay concentrated for a long period of time? · 2019-02-05T19:01:40.636Z · score: 1 (1 votes) · LW · GW

This comment seems identical to one here (by the same person).

Comment by pattern on How to stay concentrated for a long period of time? · 2019-02-05T19:00:57.249Z · score: 0 (0 votes) · LW · GW

No, both were comments, one on a question, and the one I linked to was a comment on an answer. As the author retracted the one above, it seemed reasonable they might wish to do the same with a duplicate.

Comment by pattern on Quantifying Human Suffering and "Everyday Suffering" · 2019-02-05T06:33:15.739Z · score: 0 (0 votes) · LW · GW

As they say, an ounce of prevention is worth a pound of cure.

Comment by pattern on How to stay concentrated for a long period of time? · 2019-02-05T00:31:08.660Z · score: 0 (0 votes) · LW · GW

This comment seems identical to another one here. (I am also curious about what caused this change in belief.)

Comment by pattern on How to notice being mind-hacked · 2019-02-02T23:57:04.000Z · score: 2 (2 votes) · LW · GW

What makes "conversion" different from "deconversion"? (Aside from a Life of Pi scenario where someone is converted to 3 religions.)

Comment by pattern on Drexler on AI Risk · 2019-02-02T05:12:12.229Z · score: 1 (1 votes) · LW · GW

Can you construct agents that are guaranteed to 'achieve Pareto optimality'?

Comment by pattern on Applied Rationality podcast - feedback? · 2019-02-01T21:49:36.121Z · score: 1 (1 votes) · LW · GW

A podcast name suggestion:

Growth

Comment by pattern on Masculine Virtues · 2019-01-31T01:55:34.867Z · score: 1 (1 votes) · LW · GW

What is "it"?

Comment by pattern on "AlphaStar: Mastering the Real-Time Strategy Game StarCraft II", DeepMind [won 10 of 11 games against human pros] · 2019-01-29T02:18:50.194Z · score: 1 (1 votes) · LW · GW

Why would you want to do that?

Comment by pattern on "AlphaStar: Mastering the Real-Time Strategy Game StarCraft II", DeepMind [won 10 of 11 games against human pros] · 2019-01-29T02:14:58.414Z · score: 3 (2 votes) · LW · GW
But if that's the case, why not just do it[?]

1) Their goal was a really good bot - and that hadn't been done before (apparently). To implement handicaps to begin with would have been... very optimistic.

2) They don't know what will work for sure until they try it.

3) Expense. (Training takes time and money.)

Comment by pattern on Is Agent Simulates Predictor a "fair" problem? · 2019-01-24T20:38:01.244Z · score: 5 (3 votes) · LW · GW

Where does the 'Agent Simulates Predictor' problem come from?

Comment by pattern on CDT=EDT=UDT · 2019-01-22T20:57:54.864Z · score: 1 (1 votes) · LW · GW

If someone made a glossary, what terms would you want in it?

(The closest thing right now might be https://wiki.lesswrong.com/wiki/LessWrong_Wiki)

Comment by pattern on Should questions be called "questions" or "confusions" (or "other")? · 2019-01-22T19:44:07.076Z · score: 2 (2 votes) · LW · GW

I think it'd be useful if questions could be refined - I've seen a number of questions where the asker's idea of what they wanted to know, and how to ask it, changed in response to feedback. Currently, there are "sequences" of posts. A set of ever more refined questions might be useful for similar reasons. (Since we only have level-1 questions, and people thinking about what level 2 would be based on feedback - in the absence of level-2 questions being asked and getting feedback - we probably won't see level-3 questions any time soon.)

Comment by pattern on Life can be better than you think · 2019-01-22T00:31:04.019Z · score: 2 (2 votes) · LW · GW
Acceptance is useful when you cannot change a problem. Acceptance is useful when you cannot change a problem. Both those sentences can be true at the same time.

Is there supposed to be a difference between the two sentences other than emphasis?

Comment by pattern on CDT=EDT=UDT · 2019-01-15T02:33:35.508Z · score: 0 (0 votes) · LW · GW
unfortunately "coordination" lacks a snappy three-letter acronym.

I propose the following three letters: "YOU" (possibly as a backronym).

Comment by pattern on What are the open problems in Human Rationality? · 2019-01-15T02:28:58.116Z · score: 0 (0 votes) · LW · GW

While disagreements are often matters of miscommunication, mistakes are possible, especially for those who are not all knowing and perfect in their calculations.

Comment by pattern on What are the open problems in Human Rationality? · 2019-01-15T02:18:11.230Z · score: 1 (1 votes) · LW · GW

That'd probably be messy.

Comment by pattern on Open Thread January 2019 · 2019-01-15T01:54:27.461Z · score: 0 (0 votes) · LW · GW

What does the law of small numbers have to do with few people posting in "The Welcome and Open Thread" at the beginning of January 2019? (A claim it would be nice to compare against the usual amount of posting in the Open Thread in January, and against whatever factors are in effect.)

Comment by pattern on Open Thread January 2019 · 2019-01-15T01:50:20.370Z · score: 1 (1 votes) · LW · GW

In place of "Moore's law stops" let us say 'the doubling time increases from X to Y.'
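To make that concrete, a small sketch (hypothetical numbers) of what stretching the doubling time does to growth, as opposed to a hard stop:

```python
# A small sketch with made-up numbers: total growth under a doubling time,
# illustrating 'the doubling time increases from X to Y' versus 'stops'.
def growth_factor(years, doubling_time):
    """Total multiplicative growth over `years`, given a doubling time."""
    return 2 ** (years / doubling_time)

print(growth_factor(10, 2))  # doubling every 2 years: 32x over a decade
print(growth_factor(10, 5))  # doubling time stretched to 5 years: only 4x
```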

Comment by pattern on Why is so much discussion happening in private Google Docs? · 2019-01-13T02:03:39.885Z · score: 7 (3 votes) · LW · GW
Personally I much prefer to comment in public places, due to not wanting my comments to be "wasted", so I'm having trouble understanding the psychology of people who seem to prefer the opposite.

1) In a word - karma. (While people do judge, there is a lower risk of not being able to, say, comment there or elsewhere (for a long time) if you got a lot of negative karma from one place. As someone who got some negative karma from a comment that entirely missed the point of a piece because the piece wasn't clear, I can imagine this problem would be MUCH worse with a draft - it's all about idea formation, and karma might be detrimental to that, especially since it isn't necessarily a good metric for the task at hand, whether that's discussion &/or turning a draft into a paper. (A system which encourages people to purge comments of theirs that are downvoted may not be the best place to write drafts.))

2) Maybe it's because more people use Google Docs - possibly also for other things. Greater tool familiarity = greater tool use.

3) A different group. People are discussing things in a different place because of who is there, or because people do things differently there or because things work differently there. (Maybe people are notified when comments are made and that's a valuable feature in writing drafts?)

Comment by pattern on What is a reasonable outside view for the fate of social movements? · 2019-01-09T23:10:48.210Z · score: 1 (1 votes) · LW · GW

I've read a lot of books, and it's the only example I was able to come up with. I'm not aware of a lot of unfinished series, actually - just a) a few series which might not have an ending, and b) a handful of series I never finished.

Comment by pattern on What are questions? · 2019-01-09T22:59:36.653Z · score: 9 (6 votes) · LW · GW

TL;DR:

"What's this?" is a question. (As is, "Is this a rock?")

EDIT (continuation):

What? Labeling something as unidentified and requesting information, or requesting confirmation of a label.

Why? Asking for a cause, justification, intent, or purpose. (In the event of a disaster, indicating that something wasn't planned is also a response, so 'a lack of intent' is also an answer, as may be an apology*.) May also indicate distress.

Subset: How? indicates an inquiry into the means by which a state was reached, or the methods by which an end was achieved, but is also an exclamation of surprise, and may be a request for information similar to What?**.

When/Where? Asking for a location in space or time, especially when one possesses part of a location.

*I'm not sure what the name for the general category is. (It may also include pleasantries.)

**The common element seems to be confusion, and sometimes wanting information or time to process.

Comment by pattern on Disadvantages of Card Rebalancing · 2019-01-07T01:32:07.127Z · score: 2 (1 votes) · LW · GW

~Typo thread:

I have always had trouble learning foreign languages. The closest I’ve come is being able to learn card games and their associated vocabulary. Not ebing a

Comment by pattern on Will humans build goal-directed agents? · 2019-01-05T18:27:30.607Z · score: 7 (2 votes) · LW · GW

To clarify the definition of "goal-directed" used here: is AlphaGo (Zero) goal-directed?

Comment by pattern on What is a reasonable outside view for the fate of social movements? · 2019-01-05T18:11:27.312Z · score: 2 (2 votes) · LW · GW
it would be hard to list even one of them

Aside from self-published books, there are also series which got published up to a point, but the publishers didn't accept the next book in the series. (And no, I did not write either of these.)

Comment by pattern on What do you do when you find out you have inconsistent probabilities? · 2019-01-04T04:54:40.026Z · score: 0 (0 votes) · LW · GW

There may be a difference between "No God" and "Not God". P(¬G) includes every other possibility - 2 gods, 3, 0, aliens creating humans, this is a simulation, everything we can think of and more. For this reason, some suggest using odds rather than probabilities (and applying Bayes' rule accordingly), because the sum of the probabilities we consider need not be one - we may determine, of the possibilities we are considering, that one is unlikely to be true, instead of determining what is true. (For example, if we are considering the possibility that the deck of cards someone else is using for a poker game is ordinary, or has 4 extra aces, we may acquire enough evidence that the second possibility has an order of magnitude more probability. There might not be 8 aces, but we may be very confident that either the deck is not ordinary, or someone is cheating (possibly the person who is shuffling the deck).)

Also, if you ever use Bayes' rule and say "that can't be right because of X", keep going. Are there more givens you're missing?
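A minimal sketch of the odds form (with hypothetical numbers for the deck example above):

```python
# Odds form of Bayes' rule: posterior odds = prior odds * likelihood ratio.
# Made-up numbers for the deck example; working in odds compares two
# hypotheses directly, without requiring our probabilities to sum to one.
prior_odds = 100.0           # 100:1 that the deck is ordinary rather than rigged
likelihood_ratio = 1 / 1000  # the observed aces are 1000x likelier if rigged

posterior_odds = prior_odds * likelihood_ratio
print(posterior_odds)  # 0.1 -> 'rigged' is now ~10x more probable than 'ordinary'
```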

Comment by pattern on Learning-Intentions vs Doing-Intentions · 2019-01-04T04:32:58.777Z · score: -3 (2 votes)

Comment by pattern on Thoughts on Q&A so far? · 2019-01-04T04:16:45.200Z · score: 2 (2 votes) · LW · GW

I don't know how difficult it would be, but if users could mark a comment as a possible answer, so it would be reviewed by admins, that might mean admins don't have to read all the comments to see if any should be answers.

Comment by pattern on Thoughts on Q&A so far? · 2019-01-04T04:10:55.043Z · score: 1 (1 votes) · LW · GW

Different types of questions seem useful. What categories sound like a good idea?

Comment by pattern on Thoughts on Q&A so far? · 2019-01-04T04:10:01.212Z · score: 1 (1 votes) · LW · GW

1. Concrete:

Suggested feature: adding a "link option" to answers. I'm not sure what this is actually called, but it's a feature that comments have. For example, here is a link to this comment.

2. Vague:

Currently, questions seem to just be one-off. It's also not entirely clear how they might be integrated with a sequence, feature-wise*, as they're currently used. The tone seems to be different, and maybe that kind of thing will just end up in comments sections instead, in response to posts. There are cases where, if a question were refined and re-asked, and both were linked together like posts in a sequence (the back and forward buttons), that would seem useful. Questions also lack an "answered" aspect.

*I could be wrong about this, but I am assuming a sequence may not currently contain questions, whether or not doing so would be a good idea. I am not testing this because I am not aware of a way to create empty posts and questions (to see how sequence features work) that is not a recipe for negative karma.

Comment by pattern on Require contributions in advance · 2019-01-04T03:16:19.218Z · score: 3 (2 votes) · LW · GW

I wish I'd read this 3 years ago.

Comment by pattern on Why I expect successful (narrow) alignment · 2018-12-30T18:52:13.270Z · score: 3 (2 votes) · LW · GW
The big thing Eliezer seems to believe, which I don't think any mainstream AI people believe, is that shoving a consequentialist with preferences about the real world into your optimization algorithm is gonna be the key to making it a lot more powerful.

From the article you linked:

Point two: The class of catastrophe I’m worried about mainly happens when a system design is supposed to contain two consequentialists that are optimizing for different consequences, powerfully enough that they will, yes, backchain from Y to X whenever X is a means of influencing or bringing about Y, doing lookahead on more than one round, and so on. When someone talks about building a system design out of having two of *those* with different goals, and relying on their inability to collude, that is the point at which I worry that we’re placing ourselves into the sucker’s game of trying to completely survey a rich strategic space well enough to outsmart something smarter than us.

[emphasis mine]

The piece seems to be about how trying to control AI by dividing power is a bad idea, because then we're doomed if they ever figure out how to get along with each other.

[Accidental Post.]

2018-09-13T20:41:17.282Z · score: -3 (2 votes)