Posts

Open & Welcome Thread - December 2020 2020-12-01T17:03:48.263Z
Pattern's Shortform Feed 2019-05-30T21:21:23.726Z

Comments

Comment by pattern on D&D.Sci II Evaluation and Ruleset · 2021-01-17T23:51:08.722Z · LW · GW
Wakalix did not read the instructions which came with his Thaumometer, and does not realize it needs to be calibrated based on the size and color of an item. Unfortunately, your lack of magical ability prevents you from adjusting it yourself.

This wouldn't have helped with solving the problem by itself, but sending this friendly advice along might have been a good idea.

Comment by pattern on Any examples of people analyzing/critiquing scientific studies or papers? · 2021-01-17T23:48:03.858Z · LW · GW
https://statmodeling.stat.columbia.edu/

Somehow it didn't render as a link in your comment.

Comment by pattern on gilch's Shortform · 2021-01-17T21:02:40.840Z · LW · GW

I suggest putting those links inside those links. For example, on the github page, changing:

Also available on PyPI.

to

Also available on PyPI.
Comment by pattern on The True Face of the Enemy · 2021-01-17T20:52:30.030Z · LW · GW
Did you hear of Hole in the Wall experiments.

No. Any links/sources you have for these things would be interesting.

Comment by pattern on How do I improve at being strategic? · 2021-01-17T20:29:10.023Z · LW · GW

I'm not sure offhand what Strategy is (as separate from tactics). That said, perhaps this Sequence will be helpful:

https://www.lesswrong.com/s/qRxTKm7DAftSuTGvj

Hammertime (sequence name)

Thirty days of instrumental rationality practice. (sequence description)


Your question is a good one that I think gets asked periodically. There might be more/better answers on similar questions. If anyone knows what the related tags are, that might help.

Comment by pattern on Why Productivity Systems Don't Stick · 2021-01-17T20:24:44.407Z · LW · GW
human's

humans

(Ignoring synonyms like 'people' in favor of the smallest change.)


another 100 tweets

Is that how you drafted this? (Or is it a mindset?)

Comment by pattern on Wuwei · 2021-01-17T20:19:23.581Z · LW · GW
If keep[ing] away from time-wasting activities then I do worthwhile things by default.
Comment by pattern on Will we witness the compassion of a nation? · 2021-01-15T21:41:31.912Z · LW · GW

With that explanation, it makes sense. So you could link to your comment above, or perhaps something like:

"Fear can only grow alongside love. (We fear losing what we love, after all.)"

Comment by pattern on Eli's shortform feed · 2021-01-15T21:36:51.011Z · LW · GW

It sounds like a tagline for a blog.

Comment by pattern on RationalWiki on face masks · 2021-01-15T06:47:24.412Z · LW · GW

It’s not ok to shout “fire” in a theatre,

Unless there is a fire.

Unless there's another reason,

that isn't on this list.

Comment by pattern on RationalWiki on face masks · 2021-01-15T06:40:40.250Z · LW · GW
Thinking about freedom of speech and the latest "purges" on social networks, my thoughts are like this: I prefer freedom of speech even for people like homeopaths and anti-vaxers, not because I consider their opinions to be inherently valuable, but because a decision algorithm that would ban them, would probably also have banned Ignác Semmelweis two centuries ago.
Then I thought again and realized I don't actually need such an old example. A decision algorithm that would today ban people who say "COVID-19 is just a flu" would have one year ago banned people who advised wearing face masks, wouldn't it?

You make claims about decision algorithms in general which:

a) only apply to a specific decision algorithm (such as "Rational"Wiki's, to the extent that there is such a thing) or

b) only apply to a class of decision algorithms ('trust authority' + [some definition of authority]).

Comment by pattern on Will we witness the compassion of a nation? · 2021-01-14T06:14:25.855Z · LW · GW
Fear can only grow alongside love.

This part didn't sound quite right.


Great piece, btw. There were parts I disagreed with, but it was interesting to read your perspective.

Comment by pattern on Lessons from “The Book of My Life” · 2021-01-06T23:31:57.694Z · LW · GW
[4] I looked up a commentary on Cardano’s probability research to see if it’s actually plausible that he thought that calculating probabilities in games of dice was intractable. Apparently he drew a distinction between “chance” and “luck,” claimed they were both at play in dice games, and suggested that one cannot have “rational knowledge” of luck. This seems like a really interesting mistake, which might be intertwined his supernatural/non-mechanistic view of the world.

It's not clear that he's wrong, or what he's wrong about. What did he call "luck", and what did he call "chance"?


To be charitable to Cardano:

1. If you buy a lottery ticket you probably won't win.

2. But someone will win.

3. In other words, 'chance' is your odds beforehand. If being 'lucky' means being the unlikely case, then to say 'one cannot have rational knowledge of luck' is to say 'you can't (rationally) know you're going to win the lottery; the odds are against you.' It is precisely that understanding which conveys why buying the lottery ticket is a fool's errand: because you don't know you will be lucky. You only know you 'have a chance'.
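The chance/luck arithmetic in points 1-3 can be sketched numerically (the lottery numbers here are illustrative, not from any real lottery):

```python
# "Chance": your odds before the draw.
p_win = 1 / 1_000_000
tickets_sold = 2_000_000

# The probability that *you* win is tiny...
print(p_win)  # → 1e-06

# ...but the probability that *someone* wins is high,
# which is why "someone will win" and "you won't win"
# are both rational expectations.
p_someone_wins = 1 - (1 - p_win) ** tickets_sold
print(round(p_someone_wins, 3))  # → 0.865
```

So point 1 and point 2 are compatible: knowing the chance is exactly what tells you not to count on being the lucky one.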


I also think that examining past beliefs about progress can help to inform present-day debates. If historical people have tended to severely underestimate opportunities for future progress, then we should be wary of making the same mistake. We should, for example, feel some reflexive skepticism toward the predictions of growth pessimists like Robert Gordon, who worry that most really important inventions have already been developed. Like Cardano reacting to the inventions of the 1400s, they look out at electricity and plumbing and the internet and ask: “What lack we yet unless it be the taking of Heaven by storm?”

You have outlined that, in context, that phrase was pessimistic. But today it sounds very different. It sounds like someone saying a) "We have everything we need to start colonizing the stars," and b) "that's our next goal."

Comment by pattern on G Gordon Worley III's Shortform · 2021-01-06T22:42:16.825Z · LW · GW

Which feature?

Comment by pattern on G Gordon Worley III's Shortform · 2021-01-06T03:48:58.605Z · LW · GW

You have reinvented Google Docs.

A similar effect could be achieved by having a sequence which...all appears on one page. (With the comments.)

Comment by pattern on Open & Welcome Thread - December 2020 · 2021-01-05T00:00:10.710Z · LW · GW

There isn't a way to search for posts with "Predictions" in the title?

Comment by pattern on Great minds might not think alike · 2020-12-29T18:26:09.146Z · LW · GW

Alike minds think they think great.

Alike minds think alike minds think great.

(Systematically) Overestimating the effectiveness of similarity*


*This one points towards possibilities like

1. people aren't evaluating 'how good/effective is (someone else)' but 'how well would I work with them', or

2. something about the way 'the value of similar contributions' is valued.

These seem testable.

Comment by pattern on Simultaneous Randomized Chess · 2020-12-29T18:15:33.317Z · LW · GW

2 is an exception to 5.

6. To capture en passant you must attempt the capture simultaneously.

i.e., you must predict that the other player will move their pawn forward, and simultaneously eliminate that pawn via en passant.


because pawns [are] very powerful.
Comment by pattern on The map and territory of NFT art · 2020-12-29T17:14:39.554Z · LW · GW
The kicker here is that we’ll be just as able to derive meaning from owning the “original”; it’ll be satisfying in the same way that hanging the Mona Lisa in your living room would.

Unless you value having the original. Imagine a private collector and the head of an art gallery, both happy they have the Mona Lisa. And only the thief who promised the private collector they'd switch it out for a forgery knows which is the forgery, and which is the original.


Let’s say scientists devised a way to create a perfect copy of a painting. They manufactured a machine that can read the exact configuration of atoms in DaVinci’s Mona Lisa, and create a perfect “clone” with the exact same type of atoms arranged in the exact same order. Would those clones be just as valuable as the Mona Lisa that hangs in the Louvre?
Certainly not.

This thought experiment is probably right, although it hasn't been performed.

If the value of the original is derived from it being the original - that knowledge - then if a forgery was switched out for the original successfully, it would obtain that value (or price).

Comment by pattern on adamzerner's Shortform · 2020-12-29T06:05:12.432Z · LW · GW

Suggested title: If it's not obvious, then how do we know it's true?

Comment by pattern on mike_hawke's Shortform · 2020-12-29T06:04:21.089Z · LW · GW

Where does this Breaking News section appear? Is this a horror inflicted only on those poor souls that log in?

Comment by pattern on Debate update: Obfuscated arguments problem · 2020-12-29T04:35:51.624Z · LW · GW

The footnotes aren't numbered at the bottom of the post.

Comment by pattern on Fusion and Equivocation in Korzybski's General Semantics · 2020-12-28T03:47:59.360Z · LW · GW
Much mas made of this in Eliezer's Sequences. 

"was" or "more". (I know the red underlining in the editor won't show up in the final version, so it's "mas" that is the typo (in English).)

Comment by pattern on The Twins · 2020-12-28T03:36:07.577Z · LW · GW
If one said ‘hello’, so would the other. Having a conversation would be impossible, since both would say the same things at the exact same time.

This helps explain why determinism is weird.

Comment by pattern on Containment Thread on the Motivation and Political Context for My Philosophy of Language Agenda · 2020-12-28T03:10:41.559Z · LW · GW

A)

  • Charity (see B.)
  • I didn't generalize that much.
  • Confidence (distinct from probability)*
  • Decision making heuristic (also distinct from probability)**

B)

  • I've read stuff that you've written that didn't seem bad the same way.

Reading Where to Draw the Boundaries?***

  • It's long, hard to read/understand, and seems kind of wrong. Sometimes this is because the author is bouncing between (two) things that conflict, like: 'I think I'm right about this interpretation' and 'multiple interpretations are possible'. (This confusion might be fixable by breaking things up more.)
  • Given that the post is about a specific thing, maybe it's written in a way that is really hard to read because references to that thing have been moved/altered. (I could make some of the same points just using numbers and functions. An infinite number of series**** begin with 1, 4, 9, 16, 25, then don't follow up with 36, 49, etc. And yet, upon seeing those numbers you may see a pattern, and expect the 36, and the 49. And if "our brains know what they're doing" there's a reason for that. (But beware the Law of Small Numbers.))
  • It's also like a dialogue, but without the two sides delineated; the reader doesn't get to read half the conversation, and it's really confusing because the rebuttals are confusing on their own.

The issue with removed references/abstracting politics has been mentioned before. On its own it's slightly convincing. Looking at these specific examples, it seems horribly accurate.


*Like probability, but with wide error bars.

**Do more general hypotheses 'need' more evidence, or less?

***The word "the" might be out of place in that title. (And borders are drawn on maps. And they're messy around the edges.)

****Similarly, an infinite number of functions have the properties that f(1) = 1, and f(2) = 4, and...
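The numbers-and-functions point in the footnotes can be made concrete. A minimal sketch (the specific correction polynomial is just one example of infinitely many):

```python
def f(x):
    """The pattern you "see": the squares."""
    return x * x

def g(x):
    # f plus a term that vanishes at every observed input,
    # so g agrees with f on all the data...
    return x * x + (x - 1) * (x - 2) * (x - 3) * (x - 4) * (x - 5)

observed = [1, 2, 3, 4, 5]
assert all(f(x) == g(x) for x in observed)  # both give 1, 4, 9, 16, 25

# ...and then they part ways at the sixth term:
print(f(6), g(6))  # → 36 156
```

Any polynomial multiple of (x-1)(x-2)(x-3)(x-4)(x-5) could be added instead, giving a different continuation each time; the observed data alone never pins down the "36, 49, ..." expectation.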

Comment by pattern on Allowing Exploitability in Game Theory · 2020-12-28T02:30:36.964Z · LW · GW

Your equations didn't render.

Comment by pattern on Covid 12/10: Vaccine Approval Day in America · 2020-12-10T19:38:50.034Z · LW · GW
All I Want Are Christmas

All I Want For Christmas

Comment by pattern on Mental Blinders from Working Within Systems · 2020-12-10T19:30:37.756Z · LW · GW
Success in existing systems tends to give you a big salary, which can be cut off if you step out of line, making you dependent.

Sometimes there's a preference, or outright requirement, for candidates to be in a lot of debt, sometimes combined with the high salary. (Or the company is somewhere with a very high cost of living, which gwern dubbed "golden handcuffs".*)


These existing systems will tend to have norms (implicit and explicit) which maintain the status quo. Putting up an extraordinary effort will tend to disrupt these norms. Furthermore, it won't usually be rewarded proportionately to the risks; you basically get your salary one way or the other. (This is of course not 100% true, but you get the point.)

Lazy workers don't like hard workers, because they make them have to work harder.


*After checking the source, I found he also highlighted the cost of healthcare, and student debt. This seems far more pervasive.

Comment by pattern on Containment Thread on the Motivation and Political Context for My Philosophy of Language Agenda · 2020-12-10T19:16:32.992Z · LW · GW

I appreciate this set of links being grouped together; given the similarity between them, having them in one place seems useful.

I also think that every one of those posts is probably too long - specifically, longer than they need to be. I consider this evidence in favor of 'keeping politics out of LessWrong does help with rationality'.

Comment by pattern on Mati_Roy's Shortform · 2020-12-09T20:02:33.453Z · LW · GW

This comment/post is the 3rd of 3 duplicates. (Link to main here.)

Comment by pattern on Mati_Roy's Shortform · 2020-12-09T20:02:15.323Z · LW · GW

This comment/post is the 2nd of 3 duplicates. (Link to main here.)

Comment by pattern on [AN #128]: Prioritizing research on AI existential safety based on its application to governance demands · 2020-12-09T19:56:48.386Z · LW · GW
A lot of explainability research has focused on instilling more trust in AI systems without asking how much trust would be appropriate, even though there is research showing that hiding model bias instead of truthfully revealing it can increase trust in an AI system.

So it might be better to ask "Does hiding model bias lead to better team performance?" (In what environments/over what time horizon/with what kind of players?)

I am more sceptical that the differences between inductive and deductive explanations will be the same in different contexts.

I wonder how people do without an explanation.

Comment by pattern on Mati_Roy's Shortform · 2020-12-09T19:50:13.958Z · LW · GW

This comment/post is one of 3 duplicates. (Link to main here.)

Comment by pattern on Quick Thoughts on Immoral Mazes · 2020-12-09T19:45:22.475Z · LW · GW
One hypothesis could be that although the root causes are in some ways decreasing, the damage has been done -- like someone exposed to the common cold at a party, and who gradually gets worse once they get home. America has contracted the maze disease, and it continues to fester. In other words, even if large corporations with deep hierarchies are actually less prevalent than they once were, the maze cultures in those that exist are far, far more developed.

Compare 100 mazes (mazes are widespread) with 10 mazes holding 90% of the area the 100 mazes did (big concentrated mazes). This allows them to be 10 times as deep.

Comment by pattern on The Incomprehensibility Bluff · 2020-12-08T21:44:45.682Z · LW · GW

The link is

https://www.lesswrong.com/posts/BNfL58ijGawgpkh9b/everybody-knows

Comment by pattern on Toward A Culture of Persuasion · 2020-12-08T21:42:01.405Z · LW · GW

Discussion? Changing your mind?

Comment by pattern on Open & Welcome Thread - December 2020 · 2020-12-07T00:38:02.184Z · LW · GW

Since the implementation and adoption of shortform, has Open Thread content decreased?

Comment by pattern on Pre-Hindsight Prompt: Why did 2021 NOT bring a return to normalcy? · 2020-12-07T00:29:42.366Z · LW · GW

That looks even better with Dark Reader.

Comment by pattern on The Incomprehensibility Bluff · 2020-12-07T00:23:54.978Z · LW · GW
First, complexity for its own sake. This includes using special terminology and vocabulary to express concepts that could be as easily explained with normal language.

Every field does this.


5. Finally, the confidence with which a theory is expressed can be an important cue, especially where the theory relates to generally low-confidence fields of knowledge (philosophy, psychology, economics and the social sciences being chief amongst them). A theory which is measured, qualified, and expressed with uncertainty invites questions, and forthright expressions of disagreement or lack of understanding. But such statements undermine the social dynamics buttressing the incomprehensibility bluff.  Contrastively, a confident statement of views is a cue that the author knows precisely what they are talking about.

The phrasing on this one is a little weird.

Comment by pattern on Covid 12/3: Land of Confusion · 2020-12-04T17:45:21.518Z · LW · GW

If that is indeed the LW preference - then could it be done on the import phase? (Or by readers on LW?) I like the twitter links because they're actually good, and would happily switch to reading on your blog to get the unabridged version.

More simply, would spoilering the twitter stuff work?

Like this.

Comment by pattern on Open & Welcome Thread - December 2020 · 2020-12-04T17:18:18.014Z · LW · GW

Mods, if you want to make this post frontpage, stickied, etc., feel free to do so.

Comment by pattern on Book review: WEIRDest People · 2020-12-02T00:24:16.370Z · LW · GW
P.S. I was maybe a bit misleading when

This should be a footnote.

Comment by pattern on Forecasting Newsletter: November 2020 · 2020-12-01T18:13:59.540Z · LW · GW

This link doesn't work:

Highlights

Where it should go.

Comment by pattern on Open & Welcome Thread – November 2020 · 2020-12-01T17:23:49.542Z · LW · GW

I recommend making this a question.

Comment by pattern on mike_hawke's Shortform · 2020-11-29T15:27:30.394Z · LW · GW

This posted 4 times. This was the fourth time.

Comment by pattern on mike_hawke's Shortform · 2020-11-29T15:27:11.764Z · LW · GW

This posted 4 times. This was the third time.

Comment by pattern on mike_hawke's Shortform · 2020-11-29T15:26:53.790Z · LW · GW

This posted 4 times. This was the second time.

Comment by pattern on Embedded Interactive Predictions on LessWrong · 2020-11-28T00:22:47.166Z · LW · GW

Do predictions have resolutions?

Comment by pattern on Should I do it? · 2020-11-26T15:43:25.955Z · LW · GW

I don't know when you should stop. All I'm suggesting is that you not turn it on without a time at which it is supposed to (automatically) switch off. In other words, you should stop it regularly, over and over again. This has the benefit of letting you consider the new information you have received, and decide how to respond to it. Perhaps your design will be "flawed" - and won't have the risk of going 'foom' that you think it will (without further work, by you, to revise and change it before it is capable of 'improving'). If you decide that it is risky, then the 'intervention' isn't turning it off - it's just not deciding to turn it back on (which maybe shouldn't be automatic).

Comment by pattern on Troy Macedon's Shortform · 2020-11-26T15:37:06.751Z · LW · GW

I'm saying the rules differ from how they are said - and the apparent conflict results from the difference.