Posts

Pattern's Shortform Feed 2019-05-30T21:21:23.726Z · score: 13 (3 votes)

Comments

Comment by pattern on My (Mis)Adventures With Algorithmic Machine Learning · 2020-09-22T04:34:12.618Z · score: 2 (1 votes) · LW · GW

Nitpick:

concatonation

concatenation

Comment by pattern on Why GPT wants to mesa-optimize & how we might change this · 2020-09-21T03:48:37.270Z · score: 2 (1 votes) · LW · GW

Note that there's a risk of mesa-optimization developing if lookahead improves performance at any point during GPT's training.)

Why?

Comment by pattern on Where Experience Confuses Physicists · 2020-09-17T19:22:41.229Z · score: 2 (1 votes) · LW · GW

Did you ever end up posting those quotes?

Comment by pattern on What are examples of simpler universes that have been described in order to explain a concept from our more complex universe? · 2020-09-17T17:09:20.238Z · score: 2 (1 votes) · LW · GW

Was the grid world Conway's Game of Life?

Comment by pattern on Covid 9/17: It’s Worse · 2020-09-17T16:19:26.122Z · score: 2 (1 votes) · LW · GW

A vaccine will be available in October if Trump is able to override the CDC and FDA, and make it happen by fiat to help its reelection chances.

Its? (The Trump administration?)

 

I have a strong preference on outcomes, which readers can presumably guess – but saying it outright wouldn’t convince anyone.

As a utilitarian, or as a matter of "values"?

Comment by pattern on The Counterfactual Prisoner's Dilemma · 2020-09-17T16:08:10.360Z · score: 2 (1 votes) · LW · GW

I was pointing out a typo in the Original Post. That said, that's a great summary.

 

Perhaps an intermediate position could be created as follows:

Given a graph of 'the tree' (including the branch you're on), position E is

expected utility over branches

position B is

you only care about your particular branch.

Position B seems to care about the future tree (because it is ahead), but not the past tree. So it has a weight of 1 on the current node and its descendants, but a weight of 0 on past/averted nodes, while Position E has a weight of 1 on the "root node" (whatever that is). (Node weights are inherited, with the exception of the discontinuity in Position B.)

An intermediate position is placing some non-zero weight on 'past nodes', going back along the branch, and updating the inherited weights. Aside from a weight of 1/2 being placed along all in-branch nodes, another series could be used, for example: r, r^2, r^3, ... for 0<r<1. (This series might allow for adopting an 'intermediate position' even when the branch history is infinitely long.)

There are probably some technical details to work out, like making all the weights add up to 1, but for a convergent series that's probably just a matter of applying an appropriate scale factor for normalization. For r=1/2, the infinite sum is 1, so no additional scaling is required. However, this might not work (the sum across all nodes of reward times weight might diverge) on an infinite tree where the rewards grow too fast...
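
A minimal sketch of that normalization step, in Python (my own illustration, not part of the original comment; the function name and the finite depth cutoff are assumptions):

```python
def branch_weights(depth: int, r: float) -> list:
    """Weights r, r^2, ..., r^depth on past nodes going back along the branch,
    rescaled so they sum to 1."""
    assert 0 < r < 1
    raw = [r ** k for k in range(1, depth + 1)]  # r, r^2, ..., r^depth
    total = sum(raw)                             # finite partial sum of the series
    return [w / total for w in raw]              # normalize so the weights sum to 1

# For r = 1/2 the infinite series already sums to 1 (1/2 + 1/4 + ... = 1),
# so in the limit no extra scaling is needed; a finite cutoff is rescaled anyway.
print(branch_weights(4, 0.5))  # [0.533..., 0.266..., 0.133..., 0.066...]
```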

 

(This was an attempt at outlining an intermediate position, but it wasn't an argument for it.)

Comment by pattern on The Counterfactual Prisoner's Dilemma · 2020-09-16T20:23:33.229Z · score: 2 (1 votes) · LW · GW

So [why] do we care about what would have happened if we had?

Comment by pattern on Free Money at PredictIt: 2020 General Election · 2020-09-15T16:32:45.380Z · score: 2 (1 votes) · LW · GW

Thus, given how crazy this market could get later, and given I already tied up my funds, I’m [not] going to take the arbitrage here, at least not yet. I might take it later, but for now I want to reserve the right to make a better play.

 

Thanks for writing this. How prediction markets work in practice is interesting.

Comment by pattern on Radical Probabilism · 2020-09-11T17:23:53.634Z · score: 2 (1 votes) · LW · GW
  • I do not understand how Jeffrey updates lead to path dependence. Is the trick that my probabilities can change without evidence, therefore I can just update B without observing anything that also updates A, and then use that for hocus pocus? Writing that out, I think that's probably it, but as I was reading the essay I wasn't sure which bit was where the key step was happening.

TL;DR:

Based on Radical Probabilism and Bayesian Conditioning (page 4 and page 5), the path depends on the order evidence is received in, but the destination does not.

 

From the text itself:

The "issue" is mentioned:

An attractive feature of Jeffrey’s kinematics is that it allows one to be a fallibilist about evidence and yet still make use of it. An apparent sighting of one’s friend across the street, for instance, can be revised subsequently when you are told that he is out of the country. A closely related feature is the order-dependence of Jeffrey conditioning: conditioning on a particular redistribution of probability over a partition {Ai} and then on a redistribution of probability over another partition {Bi} will not in general yield the same posterior probability as conditioning first on the redistribution over {Bi} and then on that over {Ai}. This property, in contrast to the first, has been a matter of concern rather than admiration; a concern for the most part based on a confusion between the experience or evidence and its effect on the mind of the agent.

[Footnote from the quoted text: "See Howson [8] for a full development of this point. A Bayesian might however take this as an argument against full belief in any contingent proposition."]

And explained:

Suppose, for instance, that I expect an essay from a student. I arrive at work to find an unnamed essay in my pigeonhole with familiar writing. I am 90% sure that it is from the student in question. But then I find that he left me a message the day before saying that he thinks that he may well not be able to bring me the essay in the next couple of days. In the light of all that I have learnt, I now lower to 30% my probability that the essay was from him. Suppose now I got the message before the essay. The final outcome should be the same, but I will get there a different way: perhaps by my probabilities for the essay coming from him initially going to 10% and then rising to 30% on finding the essay. The important thing is this reversal of the order of experience does not produce a reversal of the order of the probabilities: I do not think it 30% likely that I will get the essay after hearing the message and then revise it to 90% after checking my pigeonhole. The same experiences have different effects on my probabilities depending on the order in which they occur. (This is, of course, just a particular application of the rule that my posteriors depend both on the priors and the inputs).
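
For concreteness, here is a minimal sketch in Python (my own, not from the comment or the quoted paper; the prior and the particular redistributions are made up for illustration) of the order-dependence of Jeffrey conditioning that the first quote describes:

```python
def jeffrey_update(joint, var, new_marginal):
    """Jeffrey-condition a joint distribution over two binary propositions (A, B)
    so that the marginal of `var` (0 = A, 1 = B) becomes `new_marginal`."""
    old_marginal = {v: sum(p for w, p in joint.items() if w[var] == v) for v in (0, 1)}
    return {w: p * new_marginal[w[var]] / old_marginal[w[var]] for w, p in joint.items()}

# A correlated prior over (A, B); if A and B were independent, order wouldn't matter.
prior = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.1, (1, 1): 0.4}

# Redistribute probability over A, then over B ...
ab = jeffrey_update(jeffrey_update(prior, 0, {0: 0.3, 1: 0.7}), 1, {0: 0.4, 1: 0.6})
# ... versus over B first, then over A.
ba = jeffrey_update(jeffrey_update(prior, 1, {0: 0.4, 1: 0.6}), 0, {0: 0.3, 1: 0.7})

print(ab[(1, 1)])  # ≈ 0.542
print(ba[(1, 1)])  # ≈ 0.600 — the posteriors differ, so the order matters.
```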

Comment by pattern on Updates Thread · 2020-09-11T02:12:57.768Z · score: 2 (1 votes) · LW · GW

What games changed your mind?

Comment by pattern on [AN #115]: AI safety research problems in the AI-GA framework · 2020-09-03T20:35:43.598Z · score: 2 (1 votes) · LW · GW

Decision Points in AI Governance

...

(These actions should not have been predetermined by existing law and practice.)

Should not have been, or should not be?

Comment by pattern on If there were an interactive software teaching Yudkowskian rationality, what concepts would you want to see it teach? · 2020-09-03T03:02:33.892Z · score: 2 (1 votes) · LW · GW

What's a MOOC? (And do you have any good/representative examples?)

Comment by pattern on Basic Inframeasure Theory · 2020-09-01T16:41:18.116Z · score: 2 (1 votes) · LW · GW

(Reformatted Latex so the comment text editor won't reject it.)

If you have a point , and some other point  that's an sa-measure, we might as well add  to . Why? Well, given some positive functional  (and everything we're querying our set  with is a positive functional by Proposition 1,

There's a missing end parenthesis (to match the opening parenthesis on line 2), although it's not completely clear where it goes:

  • replacing "Proposition 1," with "Proposition 1." or
  • at the end of the equation (at the end of the quote).

(Maybe there's also something else going on with the paragraph after the quote, which continues the sentence.)

 

"add all the points you possibly can that don't affect the  value for any "[.]

 

The set of minimal points is denoted [.]

 

Looking back at our second desideratum, it says "Our notion of a hypothesis in this setting should collapse "secretly equivalent" sets, such that any two distinct hypotheses behave differently in some relevant aspect. This will require formalizing what it means for two sets to be "meaningfully different", finding a canonical form for an equivalence class of sets that "behave the same in all relevant ways", and then proving some theorem that says we got everything."

[Emphasis added.]

Some way of indicating that the first " has been opened and isn't closed until the end of the paragraph might make the quoted section easier to parse - at the risk of making a section stand out due to complexity rather than because you want to emphasize it a lot.

Comment by pattern on Introduction To The Infra-Bayesianism Sequence · 2020-09-01T14:57:53.199Z · score: 2 (1 votes) · LW · GW

The REVERSE HEADS Environment always [gives] you 0.5 reward if the coin comes up tails, but [if] it comes up heads, saying "tails" gets you 1 reward and "heads" gets you 0 reward. We have Knightian uncertainty between the two environments.

 

In the next post, (#2)

The post after that (#3)

2 more posts to look forward to.

Later posts (not written yet) will be about the "1 reward forever" variant of Nirvana and InfraPOMDP's (~#4), developing inframeasure theory more(~#5), applications to various areas of alignment research(~#6), the internal logic which infradistributions are models of (~#7), unrealizable bandits (~#8), game theory (~#9), attempting to apply this to other areas of alignment research (~#10), and... look, we've got a lot of areas to work on, alright? (*)

Plus a speculative/possible 7 more after that assuming no overlap or multi-post topics. (~#6 and ~#10 already being counted as 2 posts.)

*More leaning on the unenumerated possibilities.

 

I look forward to seeing more of this!

Comment by pattern on [AN #114]: Theory-inspired safety solutions for powerful Bayesian RL agents · 2020-08-31T01:00:34.930Z · score: 4 (2 votes) · LW · GW

Thus, to cover as much [of] the literature as possible

Comment by pattern on Covid 8/27: The Fall of the CDC · 2020-08-28T18:08:48.208Z · score: 5 (3 votes) · LW · GW

It’s a statement one makes when one is doing word manipulations. Where one sees a 35% on a piece of paper, doesn’t notice at all that this is from 11.9% to 8.7%, and decides that means that 35% of all patients will go from dying to not dying rather than 3.2%.

Where does the 35% come from?

A reduction from 11.9% to 8.7% is a decrease of 3.2 percentage points. As a fraction of the 11.9%, that 3.2 is about 27%, not 35%.
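
As a quick check of the arithmetic (my own addition, not part of the original comment):

```python
absolute_drop = 11.9 - 8.7             # 3.2 percentage points
relative_drop = absolute_drop / 11.9   # ≈ 0.269
print(f"{relative_drop:.1%}")          # 26.9% — about 27%, not 35%
```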

Comment by pattern on Becoming an EA-influencer on YouTube etc. · 2020-08-21T20:32:05.682Z · score: 3 (2 votes) · LW · GW

Given the lack of responses so far, perhaps this would get a better reception at the EA forum?

Comment by pattern on On Systems - Living a life of zero willpower · 2020-08-21T03:28:41.751Z · score: 2 (1 votes) · LW · GW
  • Policies - eg what [?] do when I am too tired to focus while working

Comment by pattern on Radical Probabilism · 2020-08-20T17:31:30.018Z · score: 10 (2 votes) · LW · GW

This link doesn't seem to work:

I was a Teenage Logical Positivist (Now a Septuagenarian Radical Probabilist), Richard Jeffrey. 

I haven't checked the others.

Comment by pattern on Looking for adversarial collaborators to test our Debate protocol · 2020-08-19T16:55:33.975Z · score: 2 (1 votes) · LW · GW

If you would be interested in participating conditional on us offering pay or prizes, that's also useful to know.

Do you want this feedback at the same address?

Comment by pattern on Swiss Political System: More than You ever Wanted to Know (III.) · 2020-08-12T15:39:22.780Z · score: 2 (1 votes) · LW · GW

Blue stands for FDP.[ ]The Liberals,

 

It [has only] ever happened four times. Twice in 19th century, never in 20th century and twice in 21st century. As a consequence, federal councilor spends on average ten years in the office.

 

You have to thin[k] twice

 

It has evolved a specific culture, that is passed from generation to generation since 1848.

That is an interesting link.

Comment by pattern on Fantasy-Forbidding Expert Opinion · 2020-08-10T16:39:03.558Z · score: 2 (3 votes) · LW · GW

"Why don't you believe in fairies?" I ask.

Because I haven't seen them. (This might be fairly extreme, and rule out Komodo dragons as well as dragons.* This can easily get more complicated: micro-organisms, and people who say 'I believe in X because I saw it even if I couldn't touch it or take a picture of it.')

*This has the benefit of protection against impersonal deepfakes if actually followed.

Comment by pattern on Matt Goldenberg's Short Form Feed · 2020-08-10T15:55:22.493Z · score: 2 (1 votes) · LW · GW

You have a podcast?

Comment by pattern on Adele Lopez's Shortform · 2020-08-09T14:08:58.532Z · score: 6 (3 votes) · LW · GW

Part of it seems like a matter of alignment. It seems like there's a difference between 

  • Someone getting someone else to do something they wouldn't normally do, especially under false pretenses (or as part of a deal, and then not holding up the other side of it)

and

  • Someone choosing to go to an oracle AI (or doctor) and saying "How do I beat this addiction that's ruining my life*?"

*There's some scary stories about what people are willing to do to try to solve that problem, including brain surgery.

Comment by pattern on A Hierarchy of Abstraction · 2020-08-09T02:32:23.723Z · score: 2 (1 votes) · LW · GW

the insolubility of the quintic

It's the general form that's unsolvable without better tools than the usual ones (+, -, *, /, nth roots, ^, etc.), rather than every specific example. I've heard that with hypergeometric functions it's doable, but the same issue reappears for polynomials of higher degree there as well.
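
A small illustration of that distinction (my own addition, assuming the standard Abel–Ruffini statement; the specific example equations are not from the comment or the post):

```latex
% No single formula built from +, -, *, /, and nth roots solves the general quintic
\[
  x^5 + a_4 x^4 + a_3 x^3 + a_2 x^2 + a_1 x + a_0 = 0
\]
% for arbitrary coefficients (Abel-Ruffini). Specific quintics may still be solvable:
\[
  x^5 - 2 = 0 \quad\Longrightarrow\quad x = \sqrt[5]{2},
\]
% though some specific quintics, e.g. x^5 - x - 1 = 0, also have no solution in radicals.
```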

Comment by pattern on Diagramming "Replacing Guilt," Part 1 · 2020-08-07T14:06:29.274Z · score: 2 (1 votes) · LW · GW

This drawing is trying to emphasize the fact that even though your perceptions of the world are probably distorted, there is a difference between trying to change the world [and] deliberately skewing your model to make yourself feel better.

Comment by pattern on Raemon's Shortform · 2020-08-05T19:08:50.527Z · score: 2 (1 votes) · LW · GW

(Example: LessWrong deliberately doesn't show users the view-count of their posts. We already have the entire internet as the control group for what happens if you give people view-counts – they optimize for views, and you get clickbait. Is this patronizing? Yeah. Am I 100% confident it's the right call? No. But, I do think if you want to build a strong intellectual culture, it matters what kinds of Internet Points you give [or don't give] people, and this is at least a judgment call you need to be capable of making)

One could argue that view counts aren't view counts - they're click counts.

And people still have a metric they can optimize: the number of comments the post received.

Comment by pattern on Tools for keeping focused · 2020-08-05T15:25:00.433Z · score: 2 (1 votes) · LW · GW

Links that don't work:

https://www.lesswrong.com/posts/mXgsd5o9uuYaQKHMz/witch-actions.png

https://www.lesswrong.com/posts/mXgsd5o9uuYaQKHMz/witch-advanced.png

Found here:

I found Witch slightly unintuitive to configure, so if you’re curious, here are screenshots of my configs: “actions” tab, “advanced” tab.

Comment by pattern on My paper was signalling the whole time - Robin Hanson wins again · 2020-08-05T15:19:16.841Z · score: 3 (2 votes) · LW · GW

What was your argument in the paper?

Comment by pattern on Would AGIs parent young AGIs? · 2020-08-03T05:18:18.088Z · score: 2 (1 votes) · LW · GW

This seemed to be evidence that there are growing costs to continual self-modification in software systems that might limit this strategy.

It's an unusual case, but AlphaGo provides an example of something being removed and retrained and getting better.

 

Outside of that - perhaps. The viability of self-modifying software...I guess we'll see. For a more intuitive approach, let's imagine an AGI is a human emulation except it's immortal/doesn't die of old age. (I.e. maybe the 'software' in some sense doesn't change but the knowledge continues to accumulate and be integrated in a mind.)

1. Why would such an AI have 'children'?

2. How long do software systems last when compared to people?

Just reasoning by analogy, yes, 'mentoring' makes sense, though maybe in a different form. One person teaching everyone else in the world sounds ridiculous - with AGI, it seems conceivable. Or in a different direction, imagine if, when you forgot about something, you just asked your past self.

 

Overall, I'd say it's not a necessary thing, but for agents like us it seems useful, and so the scenario you describe seems probable, but not guaranteed.

Comment by pattern on TurnTrout's shortform feed · 2020-08-02T02:18:30.194Z · score: 2 (1 votes) · LW · GW

Would you prioritize the young from behind the veil of ignorance?

Comment by pattern on Would AGIs parent young AGIs? · 2020-08-02T02:16:25.235Z · score: 14 (4 votes) · LW · GW

Duplicates - digital copies as opposed to genetic clones - might not require new training (unless a whole/partial restart/retraining was being done).

When combined with self-modification, there could be 'evolution' without 'deaths' of 'individuals' - just continual ship of Theseus processes. (Perhaps stuff like merging as well, which is more complicated.)

Comment by pattern on Power as Easily Exploitable Opportunities · 2020-08-02T02:05:11.176Z · score: 2 (1 votes) · LW · GW

There's probably an adversarial input of strange motor commands you could issue which would essentially incapacitate all the soldiers just because they're looking at you since their brains are not secure systems. 

What?

Comment by pattern on Sunny's Shortform · 2020-08-01T21:33:53.989Z · score: 2 (1 votes) · LW · GW

it (the negative experiences) - Are *they (the negative experiences) the result of (people with a "culture" whose rules you don't understand) expecting you to read *their mind, and go along with their "culture", instead of asking you to go along with their culture?

Comment by pattern on Sunny's Shortform · 2020-07-31T19:36:43.998Z · score: 2 (1 votes) · LW · GW

I think this feeling is generated by various negative experiences I've had with people around me, who, no matter where I am, always seem to share between them one culture or another that I don't really understand the rules of. This leads to a lot of interactions where I'm being told by everyone around me that I'm being a jerk, even when I can "clearly see" that their is nothing I could have done that would have been correct in their eyes, or that what they wanted me to do was impossible or unreasonable.

Is it because they're expecting you to read their mind, and go along with their "culture", instead of asking you?

Comment by pattern on Would a halfway copied brain emulation be at risk of having different values/identity? · 2020-07-30T17:30:43.225Z · score: 3 (2 votes) · LW · GW

Trait by trait doesn't seem like a likely way to copy.

One hemisphere, then the other, almost does though.

 

I find this idea disturbing because it implies that emulating any brain (and possibly copying de novo AI as well) would inevitably result in creating and destroying multiple different personality/value sets that might count as separate people in some way. No one has ever brought this up as an ethical issue about uploads as far as I know (although I have never read "Age of Em" by Robin Hanson), and my background is not tech or neuroscience, so there is probably something I am missing .

Suppose that, as you were waking up, different parts of the brain 'came online'. In theory, it could be the same thing. (Even with the 'incomplete parts' running.)

Comment by pattern on New Paper on Herd Immunity Thresholds · 2020-07-30T16:00:08.339Z · score: 2 (1 votes) · LW · GW

Flow:

Immunity passports two months ago.

As in we should have had those two months ago?

Don’t all yell at once. My model Doesn’t think anyone was convinced. Why?

Capitalization in the middle of a sentence is an unusual form of emphasis.

 

Nitpick:

fractal

On one hand, this should be "self-similar". But the word "fractal" is commonly used this way.

 

Response:

This is a great post, and I've really appreciated this series. Thank you.

Comment by pattern on [AN #110]: Learning features from human feedback to enable reward learning · 2020-07-29T18:07:14.361Z · score: 2 (1 votes) · LW · GW

(it seems rather unusual to imagine the agent overriding a human, I’d be surprised if that was how we ended up building our AI systems).

It might be workable in Minecraft, simulations*, or with a robot in a safe environment.

*Like the ones involved in making the one-handed Rubik's Cube solver (OpenAI).

Comment by pattern on Open & Welcome Thread - July 2020 · 2020-07-28T14:57:09.215Z · score: 2 (1 votes) · LW · GW

It has more order than other tags though. Time is important for sorting so that the most recent one is clearly available. It's a stack.

Comment by pattern on More Right · 2020-07-27T14:32:51.675Z · score: 6 (3 votes) · LW · GW

(Not the commenter above.)

 

Getting good at something isn't just about doing it right when you do it, it's also about doing it for practice/looking for opportunities.

Sidestep-humility:

Don't be foolish/ignorant.

Positive humility:

Seek out knowledge.

 

In what way is positive humility analogous to ignoring a blade while it's sheathed/merely in motion?

Not having/using 'positive humility' (while having the other kind) is like ignoring a blade and trying to cut vegetables up with your hands.

Comment by pattern on More Right · 2020-07-27T14:28:13.379Z · score: 2 (1 votes) · LW · GW

I was asking a different question - beyond 'religion' and 'life will get boring', what other silly things do people believe (that are incorrect), or is that it?

Comment by pattern on Open & Welcome Thread - July 2020 · 2020-07-27T13:58:58.261Z · score: 2 (1 votes) · LW · GW

What was weird about that sequence is that it was less like something that needed an author, and more like something that needed a 'create new post in this sequence' button that anyone could click.

(An automatic posting feature keyed into time sounds kind of niche, absent auto-reruns, or scheduled posting.)

Comment by pattern on Become a person who Actually Does Things · 2020-07-27T13:53:50.720Z · score: 2 (1 votes) · LW · GW

I think it's an overly strong response to:

Now I’ve got the unnecessarily provocative opening line out of the way,

Comment by pattern on The Dark Miracle of Optics · 2020-07-27T00:08:12.302Z · score: 6 (2 votes) · LW · GW

I did. It's great to see this all in one place - it connects a lot of dots, and it's a long, good read.

 

After checking both I discovered

[7] doesn't appear on either.

 

Have you been doing this for a long time?

Comment by pattern on The Dark Miracle of Optics · 2020-07-26T15:09:34.384Z · score: 2 (1 votes) · LW · GW

[5] David Lewis-Williams, The Mind in the Cave: Consciousness and the Origins of Art

[6] Thanks to romeostevensit for pointing me toward related literature. 

The end of your footnotes doesn't appear on (the version of this post on) your blog.

Comment by pattern on Lessons on AI Takeover from the conquistadors · 2020-07-26T14:31:34.872Z · score: 2 (1 votes) · LW · GW

How rapidly?

Comment by pattern on Using books to prime behavior · 2020-07-26T02:46:35.153Z · score: 3 (2 votes) · LW · GW

What books?

Comment by pattern on Reveal Culture · 2020-07-25T14:51:31.357Z · score: 0 (0 votes) · LW · GW

So I’m going to use Majuscule Singular to talk about the [P]latforms and lowercase (usually plural) to talk about the cultures themselves. I think this is just good thinking practice.

Comment by pattern on Construct a portfolio to profit from AI progress. · 2020-07-25T14:16:46.145Z · score: 2 (1 votes) · LW · GW

Such as?

Comment by pattern on [Meta] anonymous merit or public status · 2020-07-25T04:28:45.209Z · score: 4 (2 votes) · LW · GW

If authors’ posts get a similar number of upvotes compared to when their names were public, then that’s a sign that every post is evaluated independently.

Controlling for the number of views seems important.