Posts

Site Redesign Feedback Requested 2020-07-03T22:28:17.935Z · score: 44 (13 votes)
What's Your Cognitive Algorithm? 2020-06-18T22:16:39.104Z · score: 68 (19 votes)
Can Covid-19 spread by surface transmission? 2020-06-10T22:09:24.983Z · score: 65 (16 votes)
Quarantine Bubbles Require Directness, and Tolerance of Rudeness 2020-06-07T19:52:51.600Z · score: 40 (11 votes)
Your best future self 2020-06-06T19:10:04.069Z · score: 29 (15 votes)
What are the best tools for recording predictions? 2020-05-24T19:15:24.033Z · score: 14 (4 votes)
Reflective Complaints 2020-05-21T21:11:48.842Z · score: 36 (16 votes)
The Best Virtual Worlds for "Hanging Out" 2020-04-27T21:54:46.400Z · score: 61 (24 votes)
Tag Relevance Systems (Feedback Requested) 2020-04-23T01:29:56.165Z · score: 25 (7 votes)
Holiday Pitch: Reflecting on Covid and Connection 2020-04-22T19:50:20.326Z · score: 62 (21 votes)
What are the best online tools for meetups and meetings? 2020-03-27T22:58:01.287Z · score: 27 (9 votes)
What is the safe in-person distance for COVID-19? 2020-03-26T20:29:52.732Z · score: 34 (12 votes)
What's the upper bound of how long COVID is contagious? 2020-03-21T22:39:30.829Z · score: 27 (5 votes)
Tagging (Click Gear Icon to filter Coronavirus content) 2020-03-21T22:16:26.092Z · score: 39 (12 votes)
How does one run an organization remotely, effectively? 2020-03-20T20:26:01.379Z · score: 18 (6 votes)
If I interact with someone with nCov for an hour, how likely am I to get nCov? 2020-03-01T23:53:19.649Z · score: 41 (12 votes)
Reviewing the Review 2020-02-26T02:51:20.159Z · score: 47 (12 votes)
Slack Budget: 3 surprise problems per week 2020-02-25T21:52:16.314Z · score: 39 (18 votes)
The Relational Stance 2020-02-11T05:16:06.900Z · score: 47 (17 votes)
Long Now, and Culture vs Artifacts 2020-02-03T21:49:25.367Z · score: 25 (8 votes)
Bay Winter Solstice seating-scarcity 2020-02-01T23:09:39.563Z · score: 3 (3 votes)
How would we check if "Mathematicians are generally more Law Abiding?" 2020-01-12T20:23:05.479Z · score: 28 (5 votes)
Please Critique Things for the Review! 2020-01-11T20:59:49.312Z · score: 51 (13 votes)
Being a Robust Agent (v2) 2020-01-11T02:06:45.467Z · score: 119 (45 votes)
Clumping Solstice Singalongs in Groups of 2-4 2020-01-05T20:50:51.247Z · score: 15 (2 votes)
Meta-discussion from "Circling as Cousin to Rationality" 2020-01-03T21:38:16.387Z · score: 12 (5 votes)
Voting Phase UI: Aggregating common comments? 2019-12-31T03:48:41.024Z · score: 10 (1 votes)
What are the most exciting developments from non-Europe and/or non-Northern-Hemisphere? 2019-12-29T01:30:05.246Z · score: 14 (3 votes)
Propagating Facts into Aesthetics 2019-12-19T04:09:17.816Z · score: 85 (26 votes)
"You can't possibly succeed without [My Pet Issue]" 2019-12-19T01:12:15.502Z · score: 53 (24 votes)
Karate Kid and Realistic Expectations for Disagreement Resolution 2019-12-04T23:25:59.608Z · score: 80 (27 votes)
What are the requirements for being "citable?" 2019-11-28T21:24:56.682Z · score: 44 (11 votes)
Can you eliminate memetic scarcity, instead of fighting? 2019-11-25T02:07:58.596Z · score: 66 (22 votes)
The LessWrong 2018 Review 2019-11-21T02:50:58.262Z · score: 105 (29 votes)
Picture Frames, Window Frames and Frameworks 2019-11-03T22:09:58.181Z · score: 32 (7 votes)
Healthy Competition 2019-10-20T20:55:48.265Z · score: 57 (21 votes)
Noticing Frame Differences 2019-09-30T01:24:20.435Z · score: 143 (54 votes)
Meetups: Climbing uphill, flowing downhill, and the Uncanny Summit 2019-09-21T22:48:56.004Z · score: 27 (6 votes)
[Site Feature] Link Previews 2019-09-17T23:03:12.818Z · score: 35 (9 votes)
Modes of Petrov Day 2019-09-17T02:47:31.469Z · score: 68 (26 votes)
Are there technical/object-level fields that make sense to recruit to LessWrong? 2019-09-15T21:53:36.272Z · score: 26 (10 votes)
September Bragging Thread 2019-08-30T21:58:45.918Z · score: 52 (15 votes)
OpenPhil on "GiveWell’s Top Charities Are (Increasingly) Hard to Beat" 2019-08-24T23:28:59.705Z · score: 11 (2 votes)
LessLong Launch Party 2019-08-23T22:18:39.484Z · score: 13 (4 votes)
Do We Change Our Minds Less Often Than We Think? 2019-08-19T21:37:08.004Z · score: 21 (3 votes)
Raph Koster on Virtual Worlds vs Games (notes) 2019-08-18T19:01:53.768Z · score: 22 (11 votes)
What experiments would demonstrate "upper limits of augmented working memory?" 2019-08-15T22:09:14.492Z · score: 30 (12 votes)
Partial summary of debate with Benquo and Jessicata [pt 1] 2019-08-14T20:02:04.314Z · score: 90 (27 votes)
[Site Update] Weekly/Monthly/Yearly on All Posts 2019-08-02T00:39:54.461Z · score: 36 (8 votes)
Gathering thoughts on Distillation 2019-07-31T19:48:34.378Z · score: 36 (9 votes)

Comments

Comment by raemon on Classifying games like the Prisoner's Dilemma · 2020-07-08T00:34:45.073Z · score: 6 (3 votes) · LW · GW

Curated.

A year ago, a mathematician friend of mine commented that "as far as I can tell, nobody has published a paper that just outlines all the different types of 2x2 quadrant-game payoffs." They spent a weekend charting out the different payoff matrices, meditating on how they felt about each one and how it fit into their various game theory intuitions. But they didn't get around to publishing it, AFAICT.

This seems like a really obvious thing to do, but prior to this post I don't think anyone had written it up publicly. (If someone does know of an existing article, feel free to link it here.) But regardless, I think a good writeup of this is useful to have in the LessWrong body of knowledge.

Real life is obviously more complicated than 2x2 payoff games, but having a set of crisp formulations is helpful for orientation on more complex issues. And the fact that many people default to the Prisoner's Dilemma all the time, even when it's not really appropriate, seems like an actual problem that needed fixing.

I have some sense that the pedagogy of this post could be improved. I'd previously commented that using different symbols would be helpful for me. I have a nagging sense that there is other, more useful feedback I could give on how to articulate some of the games, but I don't have clear examples.

Those concerns are relatively minor though. Overall, thanks for the great post. :)

Comment by raemon on Editor Mini-Guide · 2020-07-08T00:15:59.707Z · score: 2 (1 votes) · LW · GW

I'm not yet sure about this, but last I checked you had to make sure on Google Drive that the image was shared fully publicly. Double-checking: have you tried that?

Comment by raemon on [Site Meta] Feature Update: More Tags! (Experimental) · 2020-07-06T22:11:02.697Z · score: 4 (2 votes) · LW · GW

Is the "aligning incentives" tag you are interested in something AI specific or should it apply to general human institutions / social systems? I could see a case for either, but that impacts what tag names we should use.

Comment by raemon on Site Redesign Feedback Requested · 2020-07-05T22:40:54.195Z · score: 3 (2 votes) · LW · GW

Jim's actually been interested in a dark mode, and better theme support in general. It's a bit tricky because we do expect to change nontrivial bits of the site around sometimes. The actual effort for initial theme support isn't too high, but then committing to keeping various themes working is a pain. (Different team members have different opinions on how to prioritize those issues)

PostsItem-isRead is a sort of hacky thing I added so I could experiment with changing the styling of read posts, with a superfluous style change so that JSS didn't ignore the class. Currently, a post's read-status only displays on its title, but I wanted to experiment with changing the background color, further up the component hierarchy.
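For concreteness, a minimal sketch of the kind of hack I mean (illustrative names only, not the actual LW source):

```typescript
// JSS-style rule for read posts. As noted above, JSS ignores a class with no
// style declarations, so a superfluous rule is included purely to force the
// .PostsItem-isRead class to be generated and thus be targetable.
const styles = {
  isRead: {
    opacity: 0.99, // superfluous on purpose; exists only so the class is emitted
  },
};

// A component higher up the hierarchy could then experiment with e.g.:
//   '& .PostsItem-isRead': { backgroundColor: '#f5f5f5' }
// to change the background color of read posts.
export default styles;
```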

PostsItem2 isn't deliberately unstable, but it'll undergo some changes in this month's redesign.

Comment by raemon on Classifying games like the Prisoner's Dilemma · 2020-07-04T22:39:07.934Z · score: 23 (10 votes) · LW · GW

I'd heard another friend discuss this idea a few months back, and thought it was a useful thing for someone to write up.

Something I found a bit difficult reading this was the arbitrariness of W, X, Y and Z (I had trouble remembering which was which). I think I'd have found it a little easier to parse the examples if they used something like XX, XY, YX, and YY. (Honestly, CC, CD, DC, DD would have been easiest to map onto my existing models, although I get that part of the point here was to break from the Cooperate/Defect concept.)
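(To illustrate, a quick sketch – my own encoding, not from the post – of how the CC/CD/DC/DD naming maps onto a few classic games via payoff ordering:)

```typescript
// Row player's payoffs for a symmetric 2x2 game, named by outcome:
// CC = both cooperate, CD = I cooperate/you defect,
// DC = I defect/you cooperate, DD = both defect.
interface Payoffs { CC: number; CD: number; DC: number; DD: number; }

function classify(p: Payoffs): string {
  if (p.DC > p.CC && p.CC > p.DD && p.DD > p.CD) return "Prisoner's Dilemma";
  if (p.CC > p.DC && p.DC >= p.DD && p.DD > p.CD) return "Stag Hunt";
  if (p.DC > p.CC && p.CC > p.CD && p.CD > p.DD) return "Chicken";
  return "Something else";
}

console.log(classify({ CC: 3, CD: 0, DC: 5, DD: 1 })); // "Prisoner's Dilemma"
```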

Comment by raemon on Site Redesign Feedback Requested · 2020-07-04T18:10:13.851Z · score: 2 (1 votes) · LW · GW

Oh! That makes sense, will fix that. (Also I think this is true for all tags and seems unnecessarily confusing, thanks for the heads up)

Comment by raemon on Site Redesign Feedback Requested · 2020-07-04T08:26:18.272Z · score: 2 (1 votes) · LW · GW

oh, I meant specifically for the Add Tag + button (I assumed you meant to the top-right of the Latest section)

Comment by raemon on Site Redesign Feedback Requested · 2020-07-04T06:46:56.431Z · score: 6 (3 votes) · LW · GW

Thanks! Great feedback.

Clicking the "+" to add a tag filter doesn't work.

This surprises me a bit – what OS/browser/setup are you using? Clicking the button should pop up a little widget where you type in a new tag, with a dropdown menu of selections as you type.

Comment by raemon on Dony's Shortform Feed · 2020-07-03T22:37:57.327Z · score: 2 (1 votes) · LW · GW

The short answer is "it turns out making use of an assistant is a surprisingly high-skill task, which requires a fair amount of initial investment to understand which sort of things are easy to outsource and which are not, and how to effectively outsource them."

Comment by raemon on Second Wave Covid Deaths? · 2020-07-01T21:00:38.562Z · score: 6 (3 votes) · LW · GW

Not really the main topic here, but I've been wanting to see graphs for better breakdowns of different parts of California. Anyone have good recommendations for that?

Comment by raemon on Matt Goldenberg's Short Form Feed · 2020-07-01T20:38:30.357Z · score: 2 (1 votes) · LW · GW

One of my worries with the talk about Simulacra Levels and how it relates to Moral Mazes is that it's not distinguishing between Kegan 2 players (who are lying and manipulating the system for their own gain), Kegan 4.5 players (who are lying and manipulating the system because they actually have no ontology to operate through except revenge and power), and Kegan 5 players (who are viewing truth and social dynamics as objects to be manipulated, because there is no truth of which tribe they're a part or what they believe about a specific thing - it's all dependent on what will generate the most meaning for them/their organization/their culture).

At the same time, it's absolutely imperative that you have systems that can find, develop, and promote Kegan 5 leaders who can create new systems and operate through all three types of rationality. Otherwise your organization's/culture's values won't be able to evolve with changing situations.

I worry that framing things as Simulacra Levels doesn't distinguish between these types of players.


This is an interesting concern. I think it's useful to distinguish these things. I'm not sure how big a concern it is for the Simulacra Levels thing to cover this case – my current worry is that the Simulacra concept is trying to do too many things. But, since it does look like Zvi is hoping to have it be a Grand Unified Theory, I agree the Grand Unified version of it should account for this sort of thing.

Comment by raemon on Possible takeaways from the coronavirus pandemic for slow AI takeoff · 2020-06-30T23:20:52.220Z · score: 8 (4 votes) · LW · GW

Curated. 

I personally agree with the OP, and have found at least the US's response to Covid-19 fairly important for modeling how the US might respond to AI. I also found it particularly interesting that the post focused on the "Slow Takeoff" scenario. I wouldn't have thought to make that specific comparison, and found it surprisingly apt.

I also think that, regardless of whether one agrees with the OP, "how humanity collectively responded to Covid-19" is still important evidence in some form about how we can expect humanity to handle other catastrophes, and worth paying attention to, and perhaps debating.

Comment by raemon on Welcome to LessWrong! · 2020-06-30T22:37:41.896Z · score: 2 (1 votes) · LW · GW

note: TAG's solution works for https://www.greaterwrong.com/, an alternate viewing portal for LessWrong, but not for LessWrong.com.

That said, I'm curious what devices you're reading it on. (some particular browsers have rendered the font particularly badly for reasons that are hard to anticipate in advance). In any case, sorry you've had a frustrating reading experience – different people prefer different fonts and it's a difficult balancing act.

Comment by raemon on A reply to Agnes Callard · 2020-06-29T16:30:24.645Z · score: 8 (4 votes) · LW · GW

Our petition should have a clause talking about how terrible it is for the NYT to bow to mobs of enraged internet elites but that it would be hypocritical of them to choose now as their moment to grow a spine. At least this gets the right ideas across.

Something in this space feels approximately right to me. (This feels supererogatory rather than obligatory, and I think it is more important to be able to defend yourself than to get all the nuances exactly right. But, it is good to look for ways to defend yourself that also improve civilizational norms on the margin)

Comment by raemon on A reply to Agnes Callard · 2020-06-29T16:25:34.931Z · score: 8 (5 votes) · LW · GW

So I do think it makes sense to have philosopher societies where the focus is on sharing information in such a way that we jointly converge on the truth (I'm not sure if this is quite the same thing you're getting at with communicative rationality.). And I think there is benefit to trying to get broader society to adopt more truthseeking styles of communication, which includes more reasoned arguments on the margin.

But this doesn't imply that it's always the right thing to do when interacting with people who don't share your truthseeking principles. (For an extreme example, I wouldn't try to give reasoned arguments to someone attacking me on the street.)

I have some sense of why communicative rationality is important to you, but not why it should be (overwhelmingly) important to me.

I think there is sometimes benefit to people standing by their principles, to get society to change around them. (I.e., you can be a hero of communicative rationality, maybe even trying to make reasoned arguments to an attacker on the street, to highlight that clear communication is a cause worth dying for.) But this is a supererogatory thing. I wouldn't want everyone who was interested in philosophy to feel like interest-in-philosophy meant giving up their ability to defend themselves, or giving up the ability to communicate in ways that other cultures understand or respect.

That would probably result in fewer people being willing to incorporate philosophy into their life.

My own conception of rationality (note: Vaniver may or may not endorse this) is to be a robust agent – someone who reliably makes good decisions in a variety of circumstances, regardless of how other agents are interacting with me and how the environment might change. This includes clear communication, but also includes knowing how to defend yourself, and credibly communicating when you will defend yourself, and how, so that people can coordinate with you.

My conception of "rationalist hero" is someone who understands when it is the right time to defend "communication via reasoned arguments", and when is the right to defend other foundational norms (via incentives or whatnot)

I think this is legitimately tricky (part of being a rationalist hero in my book is having the good judgment to know the difference, and it can be hard sometimes). But, right now it seems to me that it's more important to be incentivizing the Times to not de-anonymize people, rather than to focus on persuading them that it is wrong to do so using reasoned arguments.

Comment by raemon on A reply to Agnes Callard · 2020-06-29T15:51:23.132Z · score: 4 (2 votes) · LW · GW

Probably worth noting that folk on LessWrong may be using the word "rationality" differently than the way it sounds like you're using it. (This is fine, but it means we need to be careful that we're understanding each other correctly.)

The post What Do We Mean By Rationality is a bit old but still roughly captures what most LW-folk mean by the word:

1. Epistemic rationality: systematically improving the accuracy of your beliefs.

2. Instrumental rationality: systematically achieving your values.

The first concept is simple enough. When you open your eyes and look at the room around you, you’ll locate your laptop in relation to the table, and you’ll locate a bookcase in relation to the wall. If something goes wrong with your eyes, or your brain, then your mental model might say there’s a bookcase where no bookcase exists, and when you go over to get a book, you’ll be disappointed.

This is what it’s like to have a false belief, a map of the world that doesn’t correspond to the territory. Epistemic rationality is about building accurate maps instead. This correspondence between belief and reality is commonly called “truth,” and I’m happy to call it that.

Instrumental rationality, on the other hand, is about steering reality—sending the future where you want it to go. It’s the art of choosing actions that lead to outcomes ranked higher in your preferences. I sometimes call this “winning.”

So rationality is about forming true beliefs and making decisions that help you win.

I'm not sure what your conception of rationality is. I'm somewhat interested, but I think it might be better to just cut closer to the issue: why is it good to rely on reasoned arguments rather than petitions?

Comment by raemon on TurnTrout's shortform feed · 2020-06-29T02:34:23.704Z · score: 2 (1 votes) · LW · GW

Woo faith healing! 

(hope this works out longterm, and doesn't turn out to be secretly hurting still)

Comment by raemon on Radical Probabilism [Transcript] · 2020-06-28T18:06:40.296Z · score: 2 (1 votes) · LW · GW

A background question I've had for a while: people often use Dutch Booking as an example of a failure mode you need your rationality-theory to avoid. Dutch Booking seems like a crisp, formalizable circumstance that makes it easy to think about some problems, but I'm not sure it ever comes up for me. Most people seem to avoid it via "don't make big bets often", rather than "make sure your beliefs are rational and inexploitable."
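(For concreteness, a minimal made-up example of the exploit a Dutch book refers to:)

```typescript
// An agent whose prices for an event and its complement sum to more than $1
// can be booked for a guaranteed loss: the bookie just sells them both bets.
const pRain = 0.6;   // agent's price for a ticket paying $1 if it rains
const pNoRain = 0.6; // agent's price for a ticket paying $1 if it doesn't

const cost = pRain + pNoRain; // agent pays $1.20 up front
const payout = 1;             // exactly one ticket pays out: $1
console.log(`guaranteed loss: $${(cost - payout).toFixed(2)}`); // $0.20, rain or shine
```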

Is Dutch Book supposed to be a metaphor for something that happens more frequently? 

Comment by raemon on GPT-3 Fiction Samples · 2020-06-28T04:52:57.218Z · score: 4 (2 votes) · LW · GW

My own take (not meant to be strong evidence of anything, mostly just kinda documenting my internal updating experience)

I had already updated towards fairly short timelines (like, maybe 20% chance of AGI in 20 years?). I initially had a surge of "AAAAUGH, maybe the end times are right around the corner" with GPT-3, but I think that was mostly unwarranted. (At least, GPT-3 didn't seem like new information – it seemed roughly like what I'd have expected GPT-3 to be like – and insofar as I'm updating shorter, it seems like that means I just made a mistake last year when first evaluating GPT-2.)

I'm also interested in more of Kaj's thoughts.

Comment by raemon on Why are all these domains called from Less Wrong? · 2020-06-27T20:46:03.482Z · score: 5 (3 votes) · LW · GW

Quick note for transparency, re: LogRocket – previously, we used another service called FullStory which did indeed edit out the username. We're currently trying out LogRocket to make sure it's basically worthwhile, and haven't yet implemented various anonymization practices, but plan to.

Comment by raemon on Betting with Mandatory Post-Mortem · 2020-06-27T20:22:10.250Z · score: 8 (5 votes) · LW · GW

Curated.

This seems like a quite obvious idea in retrospect. I haven't yet thought through whether it's something you should always be doing when you're betting, but it certainly seems like a good tool to have in the rationalist-culture-toolkit.

Comment by raemon on A Personal (Interim) COVID-19 Postmortem · 2020-06-26T16:22:41.497Z · score: 4 (2 votes) · LW · GW

Thanks for this!

Paragraph with confusing wording:

In retrospect, I think it would have been better, consequentially, to push for cloth masks earlier, but current modeling and our understanding of spread make it clear that mask wearing by itself is only marginally effective.

Do you mean our present day understanding, or our understanding at the time? Do you mean that you still think masks are only marginally effective, or thought so at the time?

Comment by raemon on What is meant by Simulcra Levels? · 2020-06-25T21:30:50.979Z · score: 2 (1 votes) · LW · GW

There is some chance, for reasons actually completely unrelated to the current discussion, that I might actually try to read the original work. Would be kinda interested in book-clubbing it.

Comment by raemon on What is meant by Simulcra Levels? · 2020-06-25T21:29:51.154Z · score: 2 (1 votes) · LW · GW

Yeah, I do realize you were still aiming for a broader thing with Level 3 than the way it crystallized in my head. I think there's still some difference between how job titles were treated in the original example and (my understanding of) the somewhat broader point you were making in the Covid post.

(I.e., I think your more recent paragraph of "a more general form of indicating what things/groups you want to support/oppose or raise/lower in status, etc." still feels a bit different than the Job Titles thing. Specifically, Stage 3 in Bullshit Titles is when they're actually sort of beginning to lower in status, while other complicated stuff is going on where the shared map is breaking down. The differences between 2/3/4 in the original example felt less distinct to me than they felt in your recent post. And, to be clear, your recent post mostly felt more useful than what came before, by virtue of simplifying things into a particular wrong-but-usefully-clear map.)

So, yes, these two things look distinct and are importantly different, but I hope to do a unified theory thing in a month or two. Stay tuned.

Looking forward. (And, to be clear, I expect this to be a huge job; I'm mostly hoping we'll have gotten more clarity on it over the course of the next year, more than I'm hoping/expecting you to make concrete progress on the timescale of weeks.)

Comment by raemon on Effective children education · 2020-06-25T20:46:06.232Z · score: 3 (2 votes) · LW · GW

Minor formatting note: you don't need to enter extra spaces between paragraphs (the editor/formatting will add spacing for you). I fixed it here using Mod Powers; apologies if you actively preferred the wider spacing.

Comment by raemon on Covid 6/25: The Dam Breaks · 2020-06-25T20:16:52.066Z · score: 27 (16 votes) · LW · GW

As far as truncated numbers go, I find the graphs far easier to parse. In general if it's possible to do graphs instead of charts of numbers I'd find that more useful as a reader.

(numbers seem useful if I were actually trying to do analysis, but I'm guessing those people prefer an actual spreadsheet link)

Comment by raemon on Assessing Kurzweil predictions about 2019: the results · 2020-06-25T02:56:16.450Z · score: 11 (7 votes) · LW · GW

Curated.

I think "futurism with good epistemics" is pretty hard, and pretty important. The LessWrong zeitgeist is sort of "Post Kurzweil" – his predictions aren't the ones that we'll personally be graded on. But, I think the act of methodically looking over his predictions helps us orient on the predictions we're making. 

I think a) it offers a cautionary tale of mistakes we might be making, and b) I think the act of having a strong tradition of evaluating long-past predictions (hopefully?) helps ward off bullshit. (i.e. many pundits make predictions which skew towards 'locally sound exciting and impressive' because they don't expect to be called on it later)

It's also interesting to note how much disagreement there was over some predictions.

One question I came away with:

It's been suggested that Kurzweil's predictions for 2009 are mostly correct in 2019.

Is this well established? Is there a previous writeup that argues this, or is it just a general feel? I'd be interested in applying the same methodology to the old 2009 predictions and checking if they're true.

Comment by raemon on Betting with Mandatory Post-Mortem · 2020-06-24T22:53:58.821Z · score: 7 (5 votes) · LW · GW

Yeah, this seems great to me. 

It does seem like, a fair bit of the time, people might just say "well, I got unlucky, but my models are the same, and, I dunno, I guess I slightly adjusted the weights of my model?". The more interesting case is when you make a bet where a negative outcome should force a large update.

Comment by raemon on Preview On Hover · 2020-06-24T22:46:18.374Z · score: 2 (1 votes) · LW · GW

It seems plausible to me that having hovers Off To The Side may be better than the current thing LW does. I do find that the jefftk.com hovers are... too far off to the side. I'd prefer them if they were basically just to the right of the main column. 

(I also don't really mind them appearing in the main body – perhaps not surprising, since I helped implement the LW ones – but it'd make sense to me if other people preferred them to the side. I think I generally find it less distracting to have the preview right by the link when I've deliberately moused over it, but if I'm just scrolling quickly it can sometimes be annoying. Though this might be solved by just implementing a slight delay before they appear.)
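(A sketch of the delay idea – hypothetical code, not how LW actually implements previews:)

```typescript
// Only show the preview if the cursor rests on the link for ~300ms, so
// quickly scrolling past links doesn't trigger a flurry of popups.
function attachDelayedPreview(link: HTMLElement, show: () => void, hide: () => void): void {
  let timer: number | undefined;
  link.addEventListener('mouseenter', () => {
    timer = window.setTimeout(show, 300); // delay before the preview appears
  });
  link.addEventListener('mouseleave', () => {
    if (timer !== undefined) window.clearTimeout(timer); // cancel if we left early
    hide();
  });
}
```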

Comment by raemon on Raemon's Scratchpad · 2020-06-24T22:06:17.814Z · score: 3 (2 votes) · LW · GW

Anyone know how predictions of less than 50% are supposed to be handled by PredictionBook? I predicted a thing would happen with 30% confidence. It happened. Am I supposed to judge the prediction right or wrong?

It shows me a graph of confidence/accuracy that starts from 50%, and I'm wondering if I'm supposed to phrase predictions in such a way that I always list >50% confidence (i.e. I should have predicted that X wouldn't happen, with 70% confidence, rather than that it would, with 30%).
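(The workaround I'm describing, as a sketch – my own illustration, not PredictionBook's actual behavior:)

```typescript
// Flip any sub-50% prediction into its complement, so every recorded
// prediction has >=50% confidence and fits a calibration graph starting at 50%.
interface Prediction { statement: string; confidence: number; cameTrue: boolean; }

function normalize(p: Prediction): Prediction {
  if (p.confidence >= 0.5) return p;
  return {
    statement: `NOT (${p.statement})`,
    confidence: 1 - p.confidence,
    cameTrue: !p.cameTrue,
  };
}

// "X happens" at 30%, where X happened, becomes "NOT (X happens)" at 70%, judged wrong.
console.log(normalize({ statement: "X happens", confidence: 0.3, cameTrue: true }));
```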

Comment by raemon on [META] Building a rationalist communication system to avoid censorship · 2020-06-24T21:22:33.760Z · score: 3 (2 votes) · LW · GW

I think this is a problem, but not an insurmountable one (note that Facebook requires login to see most things)

Comment by raemon on Simulacra Levels and their Interactions · 2020-06-24T20:49:22.875Z · score: 2 (1 votes) · LW · GW

I ended up writing some more thoughts on how the concept of Simulacra Levels seem to have evolved on LessWrong, over on Chris Leong's post. It was sort of relevant to other things I brought up in this comment thread.

Comment by raemon on What is meant by Simulcra Levels? · 2020-06-24T20:48:02.508Z · score: 12 (4 votes) · LW · GW

So... I started out thinking this didn't make sense as a question to ask right now. But I've now re-read the original post and gotten a clearer sense of what goals Benquo had for the Simulacrum Levels model, and how some of the newer posts have diverged.

It seems like Simulacrum Levels were aiming to explore two related concepts:

  • How people's models/interactions diverge over time from an original concept (where that concept is gradually replaced by exaggerations, lies, and social games, which eventually bear little or no relation to the original)
  • How people relate to object level truth, as a whole, vs social reality

The first concept makes sense to call "simulacrum", and the second one I think ends up making more sense to classify in the 2x2 grid that Daniel Kokotajlo and I both suggested (and probably doesn't make sense to refer to as 'simulacrum').

Benquo's original essay uses the example of Bullshit Job Titles, wherein (paraphrased from original)

First, some people are called "managers", because they manage people.

Second, companies have started offering managerial titles to employees as a perk so that they can benefit from the desirable side effects, lessening the title's usefulness for tracking who's doing what work, but possibly increasing its correlation with some of the side effects, since the good (i.e., effective at producing the desired side effects) titles go to the people who are most skilled at playing the game. 

The system is wireheading itself with respect to titles, but in a way that comes with real resource commitments, so people who can track the map and reality separately, and play on both gameboards simultaneously, can extract things through judicious acquisition of titles.

Third, the system starts using titles to wirehead its employees. Titles like "Vice President of Sorting" are useless and played out in the industry, interviewers know to ask what you actually did (and probably just look at your body language, and maybe call around to get your reputation, or just check what parties you've been to), but maybe there's some connotative impressiveness left in the term, and you feel better getting to play the improv game as a Vice President rather than a Laborer. You're given social permission to switch your inner class affiliation and feel like a member of the managerial class. Probably mom and dad are impressed.

Fourth, some of the practices from world 3 are left, and it's almost universally understood emotionally that they don't refer to anything, but there's nothing real to contrast them with, so if you tell a story about yourself well enough, people will go along with it even though they know that all the "evidence" is meaningless.

I actually kinda liked... um, Chris Leong's Summary, making a similar point but at a somewhat broader worldview level:

Baudrillard's language seems quite religious, so I almost feel that a religious example might relate directly to his claims better. I haven't really read Baudrillard, but here's how I'd explain my current understanding:

Stage 1: People pray faithfully in public because they believe in God and follow a religion. Those who witness this prayer experience a window into the transcendent.

Stage 2: People realise that they can gain social status by praying in public, so they pretend to believe. Many people are aware of this, so witnessing an apparently sincere prayer ceases to be the same experience, as you don't know whether it is genuine or not. It still represents the transcendent to some degree, but the experience of witnessing it just isn't the same.

Stage 3: Enough people have started praying insincerely that almost everyone starts jumping on the bandwagon. Public prayer has ceased to be an indicator of religiosity or faith any more, but some particularly naive people still haven't realised the pretence. People still gain status for speaking sufficiently elegantly. People can't be too obviously fake, though, or they'll be punished either by the few still naive enough to buy into it or by those who want to keep up the pretence.

Stage 4: Praying is now seen purely as a social move which operates according to certain rules. It's no longer necessary in and of itself to convince people that you are real, but part of the game may include punishments for making certain moves. For example, if you swear during your prayer, that might be punished for being inappropriate, even though no-one cares about religion any more, because that's seen as cheating or breaking the rules of the game. However, you can be obviously fake in ways that don't violate these rules, as the spirit of the rules has been forgotten. Maybe people pray for vain things like becoming wealthy. Or they go to church one day, then post pictures of them getting smashed the next day on Facebook, which all their church friends see, but none of them care. The naive are too few to matter and if they say anything, people will make fun of them.

Chris Leong's conception is useful because the original "prayer as earnest expression of faith" thing is in fact built on falsehood, and it demonstrates how the notion of a Copy-of-a-Copy Simulacrum process applies to things other than objective truth.

These are both notably different from Zvi's most recent conception wherein Level 3 specifically means "words are incantations that tell you what team you're on." 

"What team you're on" is a specific, narrower type of Level 3. 

Stage 3 Bullshit Job Titles aren't really about what team you're on; they're about how the system has corrupted the concept of job titles, in a way that isn't really about anyone's team. (There might be other things going on in tandem with the bullshit job titles that are about what-team-you're-on, but someone calling themselves Vice President of Sorting doesn't really tell you much about their worldview or alliances – it's just The Incantation That Refers to Someone Who Sorts.)

So...

I think some high level disagreement I've had with Simulacra-as-a-concept is the way "Simulacra" and "How People Treat Object vs Social Reality" have gotten conflated. 

In particular, I think we are long past the point where the original "object level reality" got simulacra'd away for much of society, and it's not very useful to track overall. But it does make sense to track ascending simulacra levels of specific object level maps (such as job titles), which do get corrupted over time. 

"The evolution of Moral Mazes" is an interesting case where it's a domain more specific than "all of society" and less specific than "Bullshit Jobs". It does map fairly well onto both the "simulacra as general corruption of original map" and "simulacra as 'physical vs social reality' distinctions". But, I think it makes most sense to have a map of Moral Mazes that is just optimized for being a Map of Moral Mazes. 

I think there are also useful maps to build of how society overall has ebbed and flowed in how "simulacra-y it is", but the Simulacra model feels more murky than helpful to me.

Comment by raemon on SlateStarCodex deleted because NYT wants to dox Scott · 2020-06-23T23:31:00.305Z · score: 8 (4 votes) · LW · GW

I think it kinda matters how people perceive it as being said – and, well, note that someone who is friendly and on your side initially perceived it that way.

(This is not really a strong claim about strategy, it just seemed like something one should be weighing while formulating their overall strategy)

Comment by raemon on FactorialCode's Shortform · 2020-06-23T21:50:25.323Z · score: 2 (1 votes) · LW · GW

When new users post content, moderators check whether they're spammers, and whether they seem to meet the basic quality bar we want for site users. (In some cases we block accounts; in some cases we send them a message noting that their content isn't generally up to the standards of the site.)

Comment by raemon on FactorialCode's Shortform · 2020-06-23T19:25:47.941Z · score: 2 (1 votes) · LW · GW

Yup. This is already a thing we keep an eye out for with new users (I'm less likely to approve a new user if they seem primarily interested in arguing politics), and I agree it makes more sense to be on the lookout for it right now.

Comment by raemon on SlateStarCodex deleted because NYT wants to dox Scott · 2020-06-23T17:35:55.618Z · score: 5 (3 votes) · LW · GW

The Metaculus folk actually did the embedded iframe, we just implemented the use of the frame in the LW link previews.

Comment by raemon on SlateStarCodex deleted because NYT wants to dox Scott · 2020-06-23T17:25:43.904Z · score: 8 (5 votes) · LW · GW

We implemented Metaculus hoverovers a few weeks ago (I assume that's what you're asking about). We mentioned it in the open thread.

Comment by raemon on What's Your Cognitive Algorithm? · 2020-06-23T16:52:16.947Z · score: 4 (2 votes) · LW · GW

So, doublechecking my comprehension:

In my OP, my claim was basically "you probably can get human-level output out of something GPT-like by giving it longer-term rewards/punishments, and having it continuously learn" (i.e. give it an actual incentive to figure out how to fight fires in novel situations, which current GPT doesn't have).

I realize that leaves a lot of fuzziness in "well, is it really GPT if has a different architecture that continuously learns and has longterm rewards?". My guess was that it'd be fairly different from GPT architecturally, but that it wouldn't depend on architectural insights we haven't already made, it'd just be work to integrate existing insights.

Is your claim "this is insufficient – you still need working memory and the ability to model scenarios, and currently we don't know how to do that, and there are good reasons to think that throwing lots of data and better reward structures at our existing algorithms won't be enough to cause this to develop automatically via Neural Net Magic?"

Comment by raemon on What's Your Cognitive Algorithm? · 2020-06-21T17:30:33.673Z · score: 2 (1 votes) · LW · GW

Something I notice here (about myself) is that I don't currently understand enough about what's going on under-the-hood to make predictions about what sort of subsystems GPT could develop internally, and what it couldn't. (i.e. if my strength as a rationalist is the ability to be more confused by fiction than reality, well, alas)

It seems like it has to develop internal models in order to make predictions. It makes plausible sense to me that working memory is a different beast that you can't develop by having more training data thrown at you, but I don't really know what facts about GPT's architecture should constrain my beliefs about that.

(It does seem fairly understandable to me that, even if it were hypothetically possible for GPT to invent working memory, it would be an inefficient way of inventing working memory)

Comment by raemon on What's Your Cognitive Algorithm? · 2020-06-21T17:24:37.734Z · score: 4 (2 votes) · LW · GW

Thanks, this is great. May have more thoughts after thinking it over a bit.

Comment by raemon on What's Your Cognitive Algorithm? · 2020-06-20T21:46:00.652Z · score: 2 (1 votes) · LW · GW

i don't have what feels like a badness check, rather it feels like i have a thought and then maybe a linked thought is about what the consequences of it might be, and sometimes those are bad.

I think this is actually probably what's going on with me, upon further reflection.

Comment by raemon on [ongoing] Thoughts on Proportional voting methods · 2020-06-20T21:16:46.136Z · score: 2 (1 votes) · LW · GW

Ah, cool. I think you intended the post to be a draft, while sharing it with Ben Pace.

[edit: oh, but an unfortunate property of draft posts is you can't leave regular comments on them. Ben should probably make it an unlisted post]

Comment by Raemon on [deleted post] 2020-06-20T18:28:09.554Z

Fwiw, I always find linkposts easier to read when they actually just contain the whole post, and if you own the post I'd find it most convenient to copy this text over to the linkpost and then move this post to draft.

Comment by raemon on [ongoing] Thoughts on Proportional voting methods · 2020-06-20T18:23:50.490Z · score: 4 (2 votes) · LW · GW

I think the link-post url is somehow malformed. (Also I initially missed this was a linkpost, you might want to link it again at the end to be clear what my next action is supposed to be when I get to the bottom of the post)

Comment by raemon on What's Your Cognitive Algorithm? · 2020-06-20T17:07:26.512Z · score: 2 (1 votes) · LW · GW

Thanks! Will reply to some different bits separately. First, on reddit-karma training: 

I imagine the easiest thing would be to pre-pend the karma to each post, fine-tune the model, then you can generate high-karma posts by just prompting with "Karma 1000: ...".

This doesn't accomplish what I'm going for (probably). The key thing I want is to directly reward GPT disproportionately in different circumstances. As I currently understand it, every situation for GPT is identical – a bunch of previous words, one more word to predict, and a grade on that one word.

GPT never accidentally touches a burning hot stove, or gets a delicious meal, or builds up a complicated web of social rewards that they aspire to succeed at. I bet toddlers learn not to touch hot stoves very quickly even without parental supervision, faster than GPT could.

I don't want "1 karma", "10 karma" and "100 karma" to be a few different words with different associations. I want 10 karma to be 10x the reward of 1 karma, and 100 karma 10x that. (Well, maybe not literally 10x, I'd fine tune the reward structure with some fancy math)

When GPT-3 sort of struggles to figure out "I'm supposed to be doing addition or multiplication here", I want to be able to directly punish or reward it more than it usually is.
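(A sketch of the reward structure I have in mind – entirely hypothetical scaling, with the "fancy math" stubbed out:)

```typescript
// Weight each training example's loss by a function of its karma, so a
// 100-karma post teaches the model more than a 1-karma post, instead of
// karma merely being another token to condition on.
function karmaWeight(karma: number): number {
  // Linear scaling makes 10 karma worth 10x the reward of 1 karma, and
  // 100 karma 10x that; the fine-tuned "fancy math" would replace this.
  return Math.max(karma, 1);
}

function weightedLoss(perExampleLoss: number[], karma: number[]): number {
  const weights = karma.map(karmaWeight);
  const totalWeight = weights.reduce((a, b) => a + b, 0);
  const weightedSum = perExampleLoss.reduce((sum, loss, i) => sum + loss * weights[i], 0);
  return weightedSum / totalWeight;
}

console.log(weightedLoss([2.1, 1.8, 2.5], [1, 10, 100])); // high-karma posts dominate the gradient
```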

Comment by raemon on G Gordon Worley III's Shortform · 2020-06-20T16:36:57.094Z · score: 4 (2 votes) · LW · GW

I have a blog post upcoming called ‘Unconditional Love Integration Test: Hitler’

Comment by raemon on What's Your Cognitive Algorithm? · 2020-06-19T22:41:38.307Z · score: 4 (2 votes) · LW · GW

Actually, I think your comment about this a while ago was what got me started on all this. I tried looking for it when I wrote this post but couldn't find it easily. If you give me the link I'd be happy to credit you in the OP.

Comment by raemon on Memory is not about the past · 2020-06-19T20:27:47.469Z · score: 4 (2 votes) · LW · GW

Random feedback: I bounced off this post a couple times because I couldn't tell what point it was building towards (and, later, based on some comments, I guessed it was going to say "memory is not about the past, it's about the future", which did seem straightforwardly true)

I'm curious whether someone (the author, or a random bystander) could summarize any additional points it made.

Comment by raemon on Simulacra Levels and their Interactions · 2020-06-19T06:10:33.280Z · score: 2 (1 votes) · LW · GW

What sort of unfakeably costly signals?