Open & Welcome Thread - December 2020 2020-12-01T17:03:48.263Z
Pattern's Shortform Feed 2019-05-30T21:21:23.726Z


Comment by Pattern on Tracey Davis and the Prisoner of Azkaban - Part 1 · 2021-06-18T01:14:55.141Z · LW · GW
You respect J.K. Rowling's copyright. Harry Potter fanfiction must remain non-commercial, especially in the strict sense of traditional print publishing.

Isn't there an end date on that?

Comment by Pattern on Tracey Davis and the Prisoner of Azkaban - Part 2 · 2021-06-18T00:36:50.957Z · LW · GW
"What cities have Muggles destroyed with nuclear missiles?" said Tracey.
"None, but—" said Malfoy.

When does this take place?

Comment by Pattern on Covid 6/17: One Last Scare · 2021-06-18T00:34:31.743Z · LW · GW
Supporting organizations such as MIRI

Is there a longer list somewhere?

Comment by Pattern on Covid 6/17: One Last Scare · 2021-06-18T00:27:33.258Z · LW · GW

New Apocalyptic AI Theory (That is Too Specific):

AI will improve gain of function research greatly. Unfortunately, an enhanced virus will breach containment, and wipe out humanity.

Comment by Pattern on The Apprentice Thread · 2021-06-18T00:17:04.948Z · LW · GW

If it said Aikido Sports Substack, it'd be more clear. The word Aikido by itself already refers to something.

Comment by Pattern on The Apprentice Thread · 2021-06-18T00:14:25.537Z · LW · GW

Thread for comments on the article that aren't of the forms:



Comment by Pattern on bvbvbvbvbvbvbvbvbvbvbv's Shortform · 2021-06-14T15:49:05.620Z · LW · GW
i.e. it's one way to find out how much you're privileged

You described using it for 'bubble evaluation'. I've also heard of stuff like that to measure bias.

any way to quantify (even naively like my system) this kind of thing

Which thing, and what kind of thing?

Comment by Pattern on What other problems would a successful AI safety algorithm solve? · 2021-06-14T15:45:47.100Z · LW · GW
reverse engineering the entire human mind from scratch!

That might not be necessary for AGI, though it does seem to be necessary for figuring out how to program values.

Comment by Pattern on Taleuntum's Shortform · 2021-06-13T00:17:47.537Z · LW · GW
Do you know of a real world example where the first intervention on the proxy raised the target value, but the second, more extreme one, did not (or vice versa)?

Here's a fictional story:

You decide to study more. Your grades go up. You like that, so you decide to study really really hard. You get burnt out. Your grades go down. (There's also an argument here that the metric - grades - isn't necessarily ideal, but that's a different thing.)*

*There might be a less extreme version involving 'you stay up late studying', and 'because you get less sleep it has less effect (memory stuff)'.

This isn't meant as an unsolvable problem - it's just that:

  • You have limits


  • You can grow

are both true.

Maybe this style of mechanism, or 'causal influence', is rare. But arguably its (biological) nature may characterize a whole domain (life). So in that area at least, it's worth taking note of.

I guess I'm saying, if you want to know if you have to be worried about Goodhart's Law in general, I think it depends. Just spend time optimizing your metric, and spend time optimizing for your metric, and see what happens. If you want more specific feedback, I think you'll probably have to be more specific.
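The studying story above can be turned into a toy model (the function and all the numbers here are made up purely for illustration): grades respond to study hours with diminishing and eventually negative returns, so a moderate intervention on the proxy helps while a more extreme one hurts.

```python
# Toy Goodhart sketch: grades as a function of study hours per day.
# The quadratic shape is an assumption chosen only to illustrate
# 'more studying helps, until burnout reverses it'.
def grades(hours):
    # Returns improve up to a point, then burnout sets in.
    return 70 + 10 * hours - hours ** 2

light = grades(2)   # moderate studying
heavy = grades(9)   # "really really hard"
peak = max(grades(h) for h in range(13))

print(light, heavy, peak)  # moderate beats extreme; the optimum is in between
```

With these made-up numbers, 2 hours/day scores 86, 9 hours/day scores 79, and the best you can do is 95 at 5 hours/day - the first push on the proxy raised the target, the second lowered it.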

Comment by Pattern on bvbvbvbvbvbvbvbvbvbvbv's Shortform · 2021-06-13T00:13:46.066Z · LW · GW

I wouldn't say there are flaws in the reasoning. Just that multiple comparisons are more likely to have issues, it's just a proxy, etc.

It's an interesting idea.

Comment by Pattern on Am I anti-social if I get vaccinated now? · 2021-06-12T18:50:15.788Z · LW · GW

Your second argument seems to imply social neutrality, rather than pro- or anti-. It's not strong enough to match the claim above (although it is following a conditional).

Comment by Pattern on Taleuntum's Shortform · 2021-06-12T18:45:11.020Z · LW · GW

If you keep increasing P, the connection might break.

Comment by Pattern on Why do patients in mental institutions get so little attention in the public discourse? · 2021-06-12T18:41:39.387Z · LW · GW

Other possibilities that spring to mind are:

  • The difficulty of them becoming your voters
  • The opportunity has been overlooked. (The market is not all knowing.)
  • It conflicts with other interests already secured.
Comment by Pattern on Why do patients in mental institutions get so little attention in the public discourse? · 2021-06-12T18:39:42.797Z · LW · GW

The question is why does the attic work so well. Why does no one talk about the attic?

Comment by Pattern on Alcohol, health, and the ruthless logic of the Asian flush · 2021-06-11T19:17:00.225Z · LW · GW

Someone dies and you get sued. (All it takes is one allergic reaction, or someone who already had asthma, and you're a murderer.)

Comment by Pattern on Alcohol, health, and the ruthless logic of the Asian flush · 2021-06-11T19:15:55.221Z · LW · GW

Do you wish you didn't have it?

Comment by Pattern on Alcohol, health, and the ruthless logic of the Asian flush · 2021-06-11T19:15:21.680Z · LW · GW

Combine it with getting entrance to a place. It doesn't have to last too long, just long enough.

Comment by Pattern on Alcohol, health, and the ruthless logic of the Asian flush · 2021-06-11T19:12:33.224Z · LW · GW

Maybe Scott has a secret identity.

Comment by Pattern on Alcohol, health, and the ruthless logic of the Asian flush · 2021-06-11T18:38:13.546Z · LW · GW

One day at work you discover a protein that crosses the blood-brain barrier and causes crippling migraine headaches if someone's attention drifts while driving.

Seems way too specific. This is going to go off under at least some other condition.


If these genes really are an adaptation, it shows how ruthless evolution can be. If you implanted a device in your kid that mildly poisoned them every time they drank, you'd be a monster. But evolution basically did that.

It doesn't make them get drunk faster?


No one cares about my freedom to rob convenience stores or burn down public buildings.

Unless you live in a tyrannical regime.


so he wanted to remove freedom from himself to overcome that.

He wanted to enjoy the beauty of the song, without the downside of the actions he'd take in response (drowning). It's like someone wanting to try heroin without getting addicted. There's a metaphor involving alcohol here. (And he's lucky that he didn't get addicted to the siren song.)


Most real-world scenarios are different:

  • We need society to enforce constraints.
  • Those constraints affect everyone to some degree, even those who don't want them.


What I'd really like is for society to criminalize all mint-chocolate flavored snacks.

Tell us more about your dystopian dictatorship, where people are free from temptation.

I still think there's arguably a fix which doesn't have problem 2 "Those constraints affect everyone to some degree, even those who don't want them." - having opt-in constraints. This might work if you can voluntarily get yourself banned from something (say for the next week), but open tables with snacks don't quite mix with this. 

Less distantly, maybe places could share info about what snacks they will have, in advance.


Say you used to be addicted but now you've quit. If you could snap your fingers and make all drugs disappear, wouldn't you do that?

No. There's a difference between all drugs and a specific drug. 'Snapping' here would literally kill people.

Interesting consequentialist question here - do drugs save (and help) more people than they kill (and destroy)?


Obviously, criminalizing cookies (or fentanyl) is bad for both responsible users and people who can't or don't want to quit. I'm just trying to point out that there is a tradeoff. Society has decided that tradeoff in favor of responsible Twinkie users and against responsible fentanyl users.

Are there responsible cookie users? Or do we just resist the urge to buy it, but give in when it's available as a free snack? You want to not have 'mint chocolate' options - you want them banned. You want to stop, but you're having trouble doing so. Are you addicted to mint chocolate sweets? Are we addicted to cookies?


Sometimes we can give people the chance to "Odysseus" themselves without intruding too much on the freedom of others. An example is gambling. Some locations allow people to "self-exclude" from gambling, after which casinos won't let you play for a time period of your choice. This isn't perfect, since now responsible gamblers have their ID checked, and addicts can still cross state lines or play the lotto or whatever.

This is perfect. It's perfect for you, and your particular style of irresponsible mint chocolate consumption.


We can informally picture the different regimes like so:

You're still distinguishing freedom and constraints. From your perspective isn't there just a line, instead of two dimensions?


Roughly 10% of people in the US are raging alcoholics. Could we offer them the chance to self-exclude from alcohol?

Unfortunately, it seems very difficult.

We're back at ignoring the simpler policy that would work for someone like you -  i.e., I want to not buy it, and would opt in to 'not having the option to buy it'.


quantum stem-cell



Some studies show great results for people who are married but not for single people.

Maybe friends aren't the weak link you made them out to be.

Comment by Pattern on Is ("Chemical Imbalance" => Depression) an example of fake causality? · 2021-06-11T06:26:37.536Z · LW · GW
To what extent would said research be more difficult to do without a working hypothesis?

You would have to poke around, with no idea what you're looking for.

By what sort of process does the existence of a working hypothesis enable research?

The working hypothesis says you should try poking around over there, which narrows things down a little bit, but not very much.

To the extent that a working hypothesis is used in public communication with non-scientists about a given topic, why is it so?

People like having an explanation. Even if it tells you very little indeed.

Something more specific - I think head trauma is related to depression. If this involves a 'chemical imbalance' then maybe that means something was damaged...relating to happiness? (There's also some theories about top down versus bottom up processing which didn't really clear things up for me, but might offer a possible explanation.)

Comment by Pattern on The Generalized Product Rule · 2021-06-11T01:17:42.221Z · LW · GW

Is this just 'expected value follows some of the same rules as probability' or is there more to it?

Comment by Pattern on What are some important insights you would give to a younger version of yourself? · 2021-06-10T22:06:11.429Z · LW · GW

Is there a specific kind of math you find really useful?

Comment by Pattern on ChristianKl's Shortform · 2021-06-10T21:52:53.686Z · LW · GW

Do the transposons ever have positive benefits?

Why is your population all connected?

Comment by Pattern on How do you keep track of your own learning? · 2021-06-10T21:50:13.908Z · LW · GW
It can't surveil your activities and see how much you've been studying.

It tries.

Comment by Pattern on Game-theoretic Alignment in terms of Attainable Utility · 2021-06-10T21:43:01.236Z · LW · GW

That moment when the AI takes a treacherous turn

because it wasn't aligned up to affine transformations.

Comment by Pattern on Covid 6/10: Somebody Else’s Problem · 2021-06-10T17:31:34.974Z · LW · GW

One of your links is broken:

Probably broken by twitter though, so...

Also, at this point I have zero faith that if we decided on reasonable precautions that were actually reasonable if followed, that those procedures would get followed, even by those who said they were following them. There would also be those who saw this as permission to do the research without even saying they would use the precautions. Either you ban this, or you don’t.

1984 style solution: the research is carried out and live-streamed, thus making 'are procedures being followed' a question answerable by examining the footage.

Arguably public research should be public info anyway.

To be clear, ‘some sufficiently strong level of precautions’ is something like ‘do it in Antarctica and the quarantine for leaving a 100-mile radius around the lab is a year or more,’ not ‘do it in China next to a city but have additional protective equipment and a second observer present.’ 

I was thinking the moon/mars/or something, and have it be a one way trip, but I figured the cost would be too high for anyone to pay it.

An alternative to the quarantine approach: have a series of areas through which there are specified movement patterns (i.e. a DAG). Basically, a larger quarantined area, with supplies dropped off via drone, or funneling through the start of the chain.

Comment by Pattern on Reply to Nate Soares on Dolphins · 2021-06-10T17:01:20.268Z · LW · GW


A dictionary definition is just a convenient pointer to help people pick out "the same" natural abstraction in their own world-model. Unambiguous discrete features make for better word definitions than high-dimensional statistical regularities, even if most of the everyday inferential utility of using the word comes from fuzzy high-dimensional[ ]statistical correlates, because discrete features are more useful as a simple membership test that can function as common knowledge to solve the coordination problem of matching up the meanings in different people's heads.

and this:

And that's why phylogenetic categories are useful: because genetics are at the root of the causal graph underlying all other features of an organism, such that creatures that are genetically close to each other are more similar in general. It's easier to keep track of the underlying relatedness as if it were an "essence" (even though patterns of physical DNA aren't metaphysical essences), rather than the all of the messy high-dimensional similarities and differences of everything you might notice about an organism.


"creatures that are genetically close to each other are more similar in general." is a 'high-dimensional statistical regularity' rather than a 'unambiguous discrete feature'.

For example, water. The word "water" can be used to mean H₂O in any form (in which sense ice is a kind of water), or specifically liquid H₂O (in which sense ice is not a kind of water). If someone says "water" and you're not sure if they're using it in the ice-inclusive or the ice-exclusive sense, and ice happens to be relevant to the conversation you're having, then you might have to ask the speaker for clarification! Fortunately, this doesn't cause a whole lot of problems among people who are trying to communicate with each other and don't have an incentive to start a pointless dispute over definitions.

Water is not H₂O, though water always contains H₂O. You need water to live, but drinking pure H₂O by itself can be harmful.

Comment by Pattern on Bayeswatch 3: A Study in Scarlet · 2021-06-09T01:37:47.770Z · LW · GW
"Weather is subject to the butterfly effect," said Vi.

The interesting question is:

  • would the red paint make the change?
  • is the desired change made by the satellite and missile launched in response to the red paint job?
  • or is it tired of making incorrect predictions, and ensuring its own destruction to that end?
Comment by Pattern on Qria's Shortform · 2021-06-08T14:34:30.838Z · LW · GW

Two versions of a goal:

World Peace

Preventing a war you think is going to happen

The 2nd may have a (close) deadline; the 1st might have a distant deadline, like 'the sun burns out', or something closer, like 'before you die', or 'an AGI revolution (like the industrial revolution) starts' (assuming you think AGI will happen before the sun burns out).

Comment by Pattern on Five Whys · 2021-06-08T14:20:25.153Z · LW · GW

Why aren’t you exercising?

  • Because it’s difficult to stop mindlessly browsing the web in the evening to start exercising.
    • Possible solution:

Maybe I should get up early and exercise.

Comment by Pattern on Often, enemies really are innately evil. · 2021-06-08T14:13:39.525Z · LW · GW

TL;DR: I was talking about selection bias from you still being alive (I assume).

My point was that, given that the protagonist of Worm almost died, probabilistically, most people won't have experienced that level of bullying - unless we include dead people in 'people who have experienced it', because there's a selection effect from being alive. Conditioning on survival* probabilistically selects against more extreme torture, and towards none at all. At the limit, no one survives, and thus everyone who is alive has experienced such things with probability zero.

*For more exact numbers, look at the SSC link, and see if they investigate at a finer level than 'was or wasn't bullied'. Alternatively, just review the statistics and compare the rates of survival implied by this:

"In fact, the frequently bullied kids had nearly twice as much psychiatric disease, were twice as likely to attempt suicide, were twice as likely to drop out of high school, and even had double the unemployment rate. Worse physical health, worse cognitive function, less likely to get married, et cetera, et cetera."
Comment by Pattern on Often, enemies really are innately evil. · 2021-06-07T18:37:17.326Z · LW · GW
No bullying I or anyone else I know has experienced was that bad, but the point is, bullies can go far beyond name-calling or even hitting.

Selection bias much?

Comment by Pattern on Often, enemies really are innately evil. · 2021-06-07T18:32:33.781Z · LW · GW
Don't think this study is big enough to be representative?

How big is the study?

Comment by Pattern on What to optimize for in life? · 2021-06-07T03:23:19.020Z · LW · GW

Patrick Collins might not think that is the only thing to optimize for - just one that is underrated.

Comment by Pattern on "How to Talk About Books You Haven't Read", by Pierre Bayard · 2021-06-04T23:52:25.813Z · LW · GW
So if the underlying message of this argument is “it’s ok to shoot the shit,” I agree. If it’s “sometimes stories and ideas can be conveyed by texts other than the original,” that’s trivially true. If it’s “you can make assumptions about the contents of a given book, then opine on the book itself,” that seems very wrong to me.
  • Prior + Evidence = Posteriors*
  • “you can make assumptions about the contents of a given book, then opine on [your model of] the book”
  • Is there a specific book you haven't read? Why?

*(Technically it's P(X | Evidence) = P(Evidence | X)*P(X)/P(Evidence).)
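A quick numeric illustration of that formula (all the probabilities here are invented for the example):

```python
# Bayes' rule: P(X | E) = P(E | X) * P(X) / P(E), with made-up numbers.
p_x = 0.3             # prior P(X), e.g. 'the book argues Y'
p_e_given_x = 0.8     # likelihood P(E | X): evidence if X is true
p_e_given_not_x = 0.2 # P(E | not X): evidence if X is false

# Total probability: P(E) = P(E|X)P(X) + P(E|~X)P(~X)
p_e = p_e_given_x * p_x + p_e_given_not_x * (1 - p_x)

posterior = p_e_given_x * p_x / p_e
print(round(posterior, 3))  # prior 0.3 has risen to about 0.632
```

Which is the point of the bullet list: assumptions about a book's contents are a prior, and opining on "[your model of] the book" is just working with posteriors that haven't seen much evidence yet.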

Comment by Pattern on An Intuitive Guide to Garrabrant Induction · 2021-06-04T01:23:38.581Z · LW · GW


However, even if you did know the source code, you might still be ignorant about what it would do.

The Halting Problem.

As a simple example, suppose I violate the axiom that P(Heads)+P(Not Heads)=1 by having P(Not Heads)=P(Heads)=1/3. Given my stated probabilities, I think a 2:1 bet that the coin is Heads is fair and a 2:1 bet that the coin is Not Heads is fair; this combination of bets is guaranteed to lose me $1, making me Dutch-bookable.

It's not clear why you would think that bet is fair.
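For concreteness, here is one way the arithmetic could work out (a sketch under the assumption that the speaker, believing each outcome has probability 1/3, offers 2:1 odds against both outcomes and accepts a $1 stake on each side):

```python
# Dutch book from P(Heads) = P(Not Heads) = 1/3.
# At 2:1 odds, a winning $1 bettor gets $2 profit plus their $1 stake back.
def net_for_speaker(outcome):
    stakes_in = 1 + 1  # $1 staked on Heads, $1 staked on Not Heads
    payout = 3         # $2 profit + $1 returned stake to the winning bettor
    # The loss does not depend on the outcome - that's what makes it a
    # Dutch book rather than an ordinary losing bet.
    return stakes_in - payout

print(net_for_speaker("Heads"), net_for_speaker("Not Heads"))
```

Whether either bet looks "fair" in the first place depends on which side of it the speaker is taking, which is the ambiguity the comment above is pointing at.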

Solomonoff induction is an example of an ideal empirical induction [process].

4. The astute reader may notice that Brouwer’s fixed point theorem is non-constructive. We find the fixed point by brute force searching over all rational numbers. See [5.1.2] for details. ↩︎


Solomonoff induction uses all computable worlds as its experts; however, the underlying logic (Bayesian updating) is more general than that. Instead of using all computable worlds, we can instead use all polynomials, decision trees, or functions representable by a one billion parameter neural network. Of course, these reduced forms of Solomonoff induction would not work as well, but the method of induction would remain unchanged.
Similarly, Garrabrant induction employs polynomial-time traders as its experts; however, the underlying logic of trading and markets is more general than that. Instead of using polynomial time traders, we can instead use linear-time traders, constant time traders, or traders representable by a one billion parameter neural network. Of course, these reduced forms of Garrabrant induction would not work as well, but the method of induction would remain unchanged.

Why would Garrabrant induction be better than Garrabrant induction with neural networks?

Comment by Pattern on Rogue AGI Embodies Valuable Intellectual Property · 2021-06-03T23:18:26.702Z · LW · GW


A naive story for how humanity goes extinct from AI: Alpha Inc. spends a trillion dollars to create Alice the AGI. Alice escapes from whatever oversight mechanisms were employed to ensure alignment by uploading a copy of itself onto the internet. Alice does not have to pay an alignment tax, and so outcompetes Alpha and takes over the world.
On its face, this story contains some shaky arguments. In particular, Alpha is initially going to have 100x-1,000,000x more resources than Alice. Even if Alice grows its resources faster, the alignment tax would have to be very large for Alice to end up with control of a substantial fraction of the world’s resources.

'Escapes' is vague. Alice might escape with capital (Alice itself) and other capital, like $. And what if 'the original' is deleted?


'Outcompetes' is vague. Let's say Alpha is a known entity and Alice deploys attacks - digital, legal, nuclear, whatever. Alpha may be unable to effectively strike back against a rogue with an unknown location - and perhaps multiple locations - if it's digital it can be copied.

Suppose that Alpha currently has a monopoly on the Alice-powered models, but Beta Inc. is looking to enter the market.

It's not one market. If Alice can do X and Y and Z, then it is at least the X market, the Y market, and the Z market.

In this view, the primary value the employee has is their former employer’s high-performing trading strategies; knowledge they can potentially sell to other hedge funds.

They could also start their own.

Brand loyalty/customer inertia, legal enforcement against pirated IP, and distrust of rogue AGI could all disadvantage Beta in the share of the market it captures.

This assumes it's a legal market. Instead Alice could...breach systems and upload viruses that encrypt your data, put it on the internet, delete it*, and then serve as part of a botnet. Alice then:

  • has your data
  • can sell it back to you (or not)

*This might make things more detectable, so usefulness is based on the amount of time involved.

In these worlds, relevant actors see AGI coming, correctly predict its economic value, and start investing accordingly. This rough efficiency claim implies AI researchers and hardware are priced such that one can potentially get 3x returns on investment (ROI) from training a powerful model, but not 30x.[1] Since most economic activity will rapidly involve the production and use of AGI, early-AGI will attract huge investments, implying the Alice-powered model market will be a moderate fraction of the world’s wealth. The value of Alice’s embodied IP, being tied to the value of that market, will thus be similarly massive.

This assumes there's a FOOM, or

Rogue [artificial general super-intelligence] has access to its embodied IP.
Comment by Pattern on Selection Has A Quality Ceiling · 2021-06-03T14:37:08.146Z · LW · GW


Combine searching and training to make the task not impossible. Use/make groups that have more skills than exist in an individual (yet). Do we 'basically understand' paradigm changes/interdisciplinary efforts? If you need a test you don't have, maybe you should make that test. Pay attention to growth - if you want someone (or a group) better than the best in the world, you need someone who is/can grow past that point. Maybe you'll have to create a team that's better than the best (that currently exist) in the world - possibly people who are currently working in different fields.

1. Hybrid: searching and training

I also sometimes want more-than-one bit of search in just one skill. For instance, if I want someone in the top 1/32 of writing skill, then that’s 5 bits of search.

You could also search for a few bits, and try training the rest.

2. Change the constraints to make the problem solvable (use groups instead of individuals)

There are ways around that: skills are not independent, and sometimes I can make do with someone who has most of the skills. But the basic picture still holds: as I raise my bar, selection becomes exponentially more difficult.

Sounds like figuring out teams might be the way to go here.

3. Are interdisciplinary or paradigm changing project 'problems-we-basically-understand'?

Selection breaks down when we need people with rare skills, and especially when we need people with many independent skills - exactly the sort of people we’re likely to need for problems-we-basically-don’t-understand.

This might also be an issue if you combine a bunch of 'things we understand' into one project, or want to make major change, like (maybe) semiconductor lithography.

4. Can you build what you don't have?

And if we have a test, then we could just forget about training and instead use the test to select.

Maybe you have to develop one, and afterwards you could use it, but now you have people who are trained.

5. Asymptotic growth

But this technique puts a cap on “how good” we can select for - we can’t ask for someone better than the best in the world.

Unless you get someone who will get better over time AND they're (among) the best in the world.

6. Select for/Build a team.

But if we want top-level collaborators in many skills, then we just have to figure out how to do it. Selection does not scale that way.

Mentioned this in 2, though it seems like a different thing than the rest of the post - which is about getting one person with a lot of strong/rare traits, instead of people (from different fields?) who can work together to the same or better effect. (If you want a lot of stuff done, arguably that is a fundamental cap, and larger groups will be needed once you select too hard for that - though how this plays into automation/tools might matter a lot, depending on the area.)

Comment by Pattern on The Cost of Convenience.... · 2021-06-02T16:54:38.514Z · LW · GW
In this piece, I argue that by making a convenient world, we have made less meaning in the world.

Is it also convenient relative to other goals like 'having (desired) inconvenience'?

Comment by Pattern on Networks of Trust vs Markets · 2021-06-02T16:50:29.238Z · LW · GW
This post could be read as an introduction to a (hypothetical) sequence about using and scaling networks of trust. If there is interest, I might write another post detailing my observations so far. Any thoughts?

I'd be interested in that.

Comment by Pattern on Open and Welcome Thread - May 2021 · 2021-06-02T02:12:55.271Z · LW · GW

That's a great post by the way. I loved it.

Comment by Pattern on Networks of Trust vs Markets · 2021-06-02T02:08:05.952Z · LW · GW
That immediately raised two questions.
1. How can I find more hippies?
2. Why are markets so expensive?
Let's look at the second one.

Based on the name of this piece, I'm not surprised you went there, but the first question sounds like it might change your life.

Comment by Pattern on Power dynamics as a blind spot or blurry spot in our collective world-modeling, especially around AI · 2021-06-02T02:01:26.241Z · LW · GW
any good graduate education in mathematics will teach you that for the purpose of understanding something confusing, it’s always best to start with the simplest non-trivial example.  

While that comment is meant as a metaphor, I'd say it's always best to start with a trivial example. Seriously, start with the number of dimensions d, and turn it all the way down to 0* and 1, solve those cases, and draw a line all the way to where you started, and check if it's right.

Reflective Oracles (fallenstein2015reflective) are another case of this, but in a highly technical context that probably (according-to-me: unfortunately) didn’t have much effect on broader rationality-community-discourse. 

I haven't seen them mentioned in years.

*If zero is impossible, then do 1 and 2 instead.

Comment by Pattern on Open and Welcome Thread - May 2021 · 2021-06-02T01:53:41.882Z · LW · GW

Why was the health tag deleted?

Comment by Pattern on Zen and Rationality: Continuous Practice · 2021-06-02T01:47:57.704Z · LW · GW
Scott Alexander wrote that rationality is a habit to be cultivated. As such, cultivation of that habit requires ongoing work, which he captured with the phrase "constant vigilance".

I thought that showed up in the sequences first, though that might have just been methods?

Comment by Pattern on Forecasting Newsletter: May 2021 · 2021-06-02T01:45:49.139Z · LW · GW
Probability theory does not extend logic (predicate calculus). In particular, freely mixing logical quantifiers (∀, ∃) and probability statements gets messy fairly quickly, and the tools to disambiguate their meaning may not be found solely in probability theory (but perhaps in statistical inference or in the study of causality.)

The original article made it sound like that was an area of unfinished research (at the time it was written). If that's been solved, I imagine the original writer might want to know about it.

Comment by Pattern on For Better Commenting, Take an Oath of Reply. · 2021-06-02T01:39:02.391Z · LW · GW
Committing to reply to any comment seems like [a bad idea].

Then don't. It could be 'at least one', or 'First', or something. There could also be something like 'if no one posts any comments on this, then (after a week) I will'.

There's also the option of including a 'unless I think you're a troll' clause.

I also want to give my commenters a chance to talk to each other without me interrupting.

The oath could be conditional on being invoked?

Like, 'I will respond to the first 5 questions

a) about this piece

b) directed at me*'

*(It's easier to do this if there's a way to be specifically notified, like an @ option).

Comment by Pattern on For Better Commenting, Take an Oath of Reply. · 2021-06-02T01:35:23.954Z · LW · GW
For this post, my Oath of Reply is to respond to top-level comments at least once through August 2021.

Top level comments on what? This post?

Comment by Pattern on Testing The Natural Abstraction Hypothesis: Project Intro · 2021-05-30T15:52:04.366Z · LW · GW
The natural abstraction hypothesis can be split into three sub-claims, two empirical, one mathematical:

The third one:

Convergence: a wide variety of cognitive architectures learn and use approximately-the-same summaries.

Couldn't this be operationalized as empirical if a wide variety...learn and give approximately the same predictions and recommendations for action (if you want this, do this), i.e. causal predictions?

Human-Compatibility: These summaries are the abstractions used by humans in day-to-day thought/language.

This seems contingent on 'the human summaries are correct' and 'natural abstraction summaries are correct'; then claiming this happens is just making a claim about a particular type of convergence. (Modulo the possibility that:

"human recommendations (may)/do not describe the system, and (may) instead focus on 'what you should do' which requires guesses about factors like 'capabilities or resources'.)"


Along the way, it should be possible to prove theorems on what abstractions will be learned in at least some cases. Experiments should then [mostly] probe cases not handled by those theorems, enabling more general models and theorems, eventually leading to a unified theory.

I say 'mostly' because probing cases believed to be handled may reveal failure.

Then, the ultimate test of the natural abstraction hypothesis would just be a matter of pointing the abstraction-thermometer at the real world, and seeing if it spits out human-recognizable abstract objects/concepts.

It's interesting that this doesn't involve 'learners' communicating, to see what sort of language they'll develop. But this (described above) seems more straightforward.

It would imply that a wide range of architectures will reliably learn similar high-level concepts from the physical world, that those high-level concepts are exactly the objects/categories/concepts which humans care about (i.e. inputs to human values), and that we can precisely specify those concepts.

It seems good that the program described involves testing a variety, then seeing how they turn out (concerning object details, if not values), rather than attempting to design understandable architectures, if one wants to avoid the risk of an 'ontological turn' whereby 'an AI' develops a way of seeing the world that doesn't line up after it 'goes big'. (On the other hand, if understanding global systems requires learning concepts we haven't learned yet, then, without learning those concepts, we might not be able to understand maps produced by (natural abstraction) learners without said learning. This property - something can't be understood without certain knowledge or concepts - might be called 'info-locked maps' or 'conceptual irreducibility'. Though it's just a hypothesis for now.)

Comment by Pattern on A.D&D.Sci May 2021 Evaluation and Ruleset · 2021-05-29T20:45:25.468Z · LW · GW
In general, one would define cooperation in games as strategies that lead to better overall gains, and ignore effort involved in thinking up the strategy.

You should change your username to 'one' then.*

Imagine a game where the 'optimal strategy' is more difficult to calculate than the optimal strategy in chess. Or, suppose you're playing a chess game. You know how to calculate the optimal strategy. Unfortunately, it will take 10 years to calculate on your supercomputer, and you can't take 10 years to make the first move. To neglect time as a resource is to neglect that 'the optimal strategy' must be executed after it is formulated, not before.

The rules did not explicitly forbid coordination, even by non-Lesswrongers, so you could have recruited a horde of acquaintances to spam 1-bids. (That might have been against the spirit of the rules, but you could have asked abstractapplic about it first, I guess.)

Do you want to make a bet concerning abstractapplic's response to this question?

*I expect Neo hasn't been taken yet.