Open & Welcome Thread - July 2022

post by Kaj_Sotala · 2022-07-01T07:47:22.885Z · LW · GW · 61 comments

If it’s worth saying, but not worth its own post, here's a place to put it.

If you are new to LessWrong, here's the place to introduce yourself. Personal stories, anecdotes, or just general comments on how you found us and what you hope to get from the site and community are invited. This is also the place to discuss feature requests and other ideas you have for the site, if you don't want to write a full top-level post.

If you want to explore the community more, I recommend reading the Library [? · GW], checking recent Curated posts [? · GW], seeing if there are any meetups in your area [? · GW], and checking out the Getting Started [? · GW] section of the LessWrong FAQ [? · GW]. If you want to orient to the content on the site, you can also check out the new Concepts section [? · GW].

The Open Thread tag is here [? · GW]. The Open Thread sequence is here [? · GW].

61 comments

Comments sorted by top scores.

comment by Kaj_Sotala · 2022-07-01T08:02:12.924Z · LW(p) · GW(p)

I asked GPT-3 to describe LessWrong. (The prompt is in bold, the rest is its output)


Item #: SCP-47000 

Object Class: Euclid 

Alias: LessWrong

Special Containment Procedures: SCP-47000 is to be contained within a standard containment cell at Site-19.

Description: SCP-47000 is an entity resembling a humanoid composed of a green, translucent material. It is approximately 1.8 meters in height. SCP-47000 is sapient, and is capable of human speech. It also possesses the ability to levitate and travel through solid objects. 

The entity has stated that it is a "rationalist", and is interested in "the pursuit of truth and knowledge". It has also claimed to have been created by an entity known as "Yudkowsky" and that its purpose is to "spread rationality". 

When interacting with Foundation personnel, SCP-47000 will often attempt to engage them in philosophical discussion. It has also been known to offer advice on various topics, including but not limited to: 

• The scientific method 

• The philosophy of science 

• The nature of reality 

• The existence of God 

• The meaning of life 

• The importance of critical thinking 

SCP-47000 has also been known to give lectures on various topics. These lectures typically last for several hours, and are often followed by a Q&A session. 

SCP-47000 has shown a deep understanding of human psychology, and is able to exploit this knowledge to its advantage. It has been known to use this knowledge to manipulate Foundation personnel into releasing it from containment, or into revealing sensitive information.

SCP-47000 is to be considered extremely dangerous, and is not to be released from containment under any circumstances.
 

Replies from: Pattern
comment by Pattern · 2022-07-09T01:57:55.758Z · LW(p) · GW(p)

It also possesses the ability to levitate and travel through solid objects. 

How is it contained?

comment by Steven Byrnes (steve2152) · 2022-07-15T02:30:35.760Z · LW(p) · GW(p)

My “better self” set karma notifications to just be once a day, so that I wouldn’t get addicted to refreshing LW.

However, it seems that my “worse self” has found a loophole in that plan, namely that I can still see whether my karma is going up on a minute-by-minute basis by looking at (1) the karma total on my user page and (2) the karma on my individual recent posts and comments.

So, I am willing to pay a fair price for a tampermonkey script (or any other method) that does the following:

  • Hide the karma total on my user page
  • Hide the karma on (only) my own comments and posts that I posted within the last 24 hours.

Anyone interested?

[Yes I have heard of GW anti-kibitzer mode [LW · GW] but I don't like it for other reasons. Yes I am aware that karma = fake internet points and this whole thing is incredibly stupid. I (exclusively) use Chrome browser on a Windows desktop, if that’s relevant.]

Replies from: ivan-vendrov, Zack_M_Davis
comment by Ivan Vendrov (ivan-vendrov) · 2022-07-17T19:38:16.448Z · LW(p) · GW(p)

My preferred adblocker, uBlock Origin, lets you right-click on any element on a page and block it, with a nice UI that lets you set the specificity and scope of the block. Takes about 10 seconds, much easier than mucking with JS yourself. I've done this to hide like & follower counts on twitter, just tried and it works great for LessWrong karma. It can't do "hide karma only for your comments within last 24 hours" but thought this might be useful for others who want to hide karma more broadly.

Replies from: steve2152
comment by Steven Byrnes (steve2152) · 2022-07-18T14:48:07.884Z · LW(p) · GW(p)

Nice!! Well, it took me 20 minutes not 10 seconds, mostly figuring out what the filters are and how they work, for the purpose of making them only apply on my user-page and not site-wide. (The trick is here, e.g. www.lesswrong.com##:matches-path(/steve2152) span.UsersProfile-userMetaInfo:nth-of-type(1).)

This isn't 100% what I wanted, but better than before, hopefully good enough.

As a bonus, now I have ad-blocking ;-)

comment by Zack_M_Davis · 2022-07-15T04:39:48.059Z · LW(p) · GW(p)

It looks like you can remove the total karma score from your user page with document.querySelector(".UsersProfile-userMetaInfo").remove();, and that you can remove the karma scores from your comments with

document.querySelectorAll('.UsersNameDisplay-userName[href="/users/steve2152"]').forEach(function(el) {
    el.closest('.CommentsItem-meta').querySelector('.OverallVoteAxis-voteScore').innerHTML = '';
})

I did this in the Firefox developer console, but it's just JavaScript and should work in Tampermonkey?
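
If it does, something like the following minimal userscript wrapper might do it (an untested sketch, not a tested solution: it reuses the selectors above, and the @match pattern plus the "re-apply once a second" approach are assumptions on my part, since LessWrong renders client-side):

// ==UserScript==
// @name         Hide my LW karma (sketch)
// @match        https://www.lesswrong.com/*
// @grant        none
// ==/UserScript==

(function () {
    'use strict';

    function hideKarma() {
        // Remove the karma total on the user page, if it's currently on screen.
        var meta = document.querySelector('.UsersProfile-userMetaInfo');
        if (meta) meta.remove();

        // Blank out the score next to each of steve2152's own comments.
        document.querySelectorAll('.UsersNameDisplay-userName[href="/users/steve2152"]').forEach(function(el) {
            var item = el.closest('.CommentsItem-meta');
            var score = item && item.querySelector('.OverallVoteAxis-voteScore');
            if (score) score.innerHTML = '';
        });
    }

    // The site loads content client-side, so re-apply as new comments render.
    setInterval(hideKarma, 1000);
})();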

comment by freemany · 2022-07-12T22:19:16.062Z · LW(p) · GW(p)

Hi! It's time to convert my lurking into an active account. I'm interested in all things related to making the long-term future go well, and I currently run Future Forum.

Replies from: Raemon
comment by Raemon · 2022-07-12T22:51:10.911Z · LW(p) · GW(p)

Welcome! 

comment by niplav · 2022-07-01T14:37:15.038Z · LW(p) · GW(p)

comment by [deleted] · 2022-07-05T02:04:15.338Z · LW(p) · GW(p)

Hi, I'm new here (or at least this is my first comment). I was reading the Sequences, but sometimes there are parts that I don't understand very well. What do I do when that happens? I would feel very stupid if I were the only one in the comments of the Sequences asking what some paragraph I don't understand means. It occurred to me to leave the article for later so that maybe it will be clearer next time, but then I risk losing interest in the Sequences when I come back. Speaking of interest, how can I sustain it for long enough to finish the Sequences? I've already made several attempts and haven't gotten very far.

Replies from: Raemon, Pattern, Leviad
comment by Raemon · 2022-07-05T02:46:58.505Z · LW(p) · GW(p)

I think leaving comments about things you're confused about is good practice.

comment by Pattern · 2022-07-09T02:01:36.322Z · LW(p) · GW(p)

I think that is a flaw of comments, relative to 'google docs'. In long documents where comments aren't tagged to the passages they reference, it can be hard to find other people asking the same question you did, even if someone did wonder about the same section. (And the difficulty of ascertaining that quickly seems unfortunate.)

comment by Drake Morrison (Leviad) · 2022-07-08T04:49:39.331Z · LW(p) · GW(p)

The Sequences are very long, but worth it. I would recommend reading the Highlights [? · GW], and then reading more of the sections that spark your curiosity. 

(I only found out about that today, and I've been lurking here for a little bit. Is there a way for the Highlights to be seen next to the Rationality: A - Z page?) 

Replies from: Raemon
comment by Raemon · 2022-07-08T05:33:54.904Z · LW(p) · GW(p)

We just curated and released the sequence highlights yesterday, so you were not missing anything. :)

Over the next few days we'll be adding more nice-to-have features to help people find and read them.

Replies from: tomcatfish, gilch
comment by Alex Vermillion (tomcatfish) · 2022-07-17T03:39:17.927Z · LW(p) · GW(p)

Aha! I was wondering how long those had been on the side of the page, unbeknownst to me! Thank you for that! I've already sent it to someone (who really liked the first few articles).

comment by gilch · 2022-07-20T16:15:31.261Z · LW(p) · GW(p)

How did you decide which posts to include?

comment by Torello · 2022-07-14T00:39:29.168Z · LW(p) · GW(p)

Link to my notes/summary of "The Dictator's Handbook".  

Probably of interest to people here thinking about the dynamics that govern political behavior in nation states, companies, etc. 

https://digitalsauna.wordpress.com/2022/07/13/the-dictators-handbook-by-bruce-bueno-de-mesquita-and-alastair-smith-2011/

There's also a deep dive LessWrong post on the topic:

https://www.lesswrong.com/posts/N6jeLwEzGpE45ucuS/building-blocks-of-politics-an-overview-of-selectorate [LW · GW]

comment by Drake Morrison (Leviad) · 2022-07-08T06:15:02.796Z · LW(p) · GW(p)

Hello! I've been here lurking for a bit, but never quite introduced myself. I found myself commenting for the first time and figured I should go ahead and write up my story.

I don't quite remember how I first stumbled upon this site, but I was astonished. I skimmed a few of the front page articles and read some of the comments. I was impressed by the level of dialogue and clear thought. I thought it was interesting but I should check it out when I had some more time.

One day I found myself trying to explain something to a friend that I had read here, but I couldn't do it justice. I hadn't internalized the knowledge, it wasn't a part of me. That bothered me. I felt like I should have been able to understand better what I read, or explain as I remembered reading it.

So I decided to dig in, I wanted to understand things, to be able to explain the concepts, to know them well enough to write about them and be understood. I like reading fantasy, so I decided to start with HPMOR.

I devoured that book. I found myself stunned by how much I thought like Harry. It was like reading what I had always felt but never been able to put into words. The more I read, the more impressed I was; I had to keep reading. I finished the book, and immediately started on the Sequences. I felt like this was a great project I could only have wished for, and yet here it was.

I started trying to apply the things I learned to myself, and found it very difficult. Rationality was not as easy as reading up on how it all works; I had to actually change my mind. For me, the first great test of my rationality was religious. I had many questions about my faith for a long time. Reading the Sequences gave me the courage I needed to finally face the scariest questions. I finally had tools I could apply to the foundational questions I had.

The answers I came to were not pretty. Facing the questions had changed me. In finding answers to my questions I had lost my belief in the claims of religion. I found myself with a clarity that I hadn't thought possible. I had some troubling issues to confront, now that my religious conception of the world had fallen away.

I found myself confident in ways I had never been before. I could kind of explain where the evidence for my beliefs was, instead of having no answer at all. I have all kinds of mental models and names for concepts now that I wish I had found earlier. I had found a path that would take me where I wanted to go. I'm not very far along that path, but I found it.

Of course, I'm still learning. And I'm still not all that good at practicing my rationality. But I'm getting better, a little bit at a time. My priorities have changed. I've got money on the line now for some of my goals, thanks to Beeminder. I've been writing more, trying to get better at communicating. I can't thank enough all the people who contribute and maintain this site. It's a wonderful place of sanity in a mad world, and I have become better, and less wrong, because of it. 

comment by Libsacul · 2022-07-03T15:14:20.985Z · LW(p) · GW(p)

Hello 

Came here from a link in "The Browser" newsletter. My first impression is that I can learn a lot from this site. Thanks to all who contribute.
 

Lib Sacul

comment by Adam Zerner (adamzerner) · 2022-07-18T04:10:55.978Z · LW(p) · GW(p)

Meta: This seems like a 101-level question, so I ask it in the Open Thread rather than using the questions feature on LW.

Suppose you are designing an experiment for a blood pressure drug. I've heard stuff about how it is important to declare what metrics you are interested in up front, before collecting the data, not afterwards. I.e., you'd declare up front that you only care about measuring systolic blood pressure, diastolic blood pressure, HDL cholesterol, and LDL cholesterol. And then if you happen to see an effect on heart rate, you are supposed to ignore it.

Why is this? Surely just for social reasons, right? If we do the experiment and happen to see data on heart rate, The Way of Bayes says we are forced to update our beliefs. Why would we ignore those updated beliefs?

Maybe this is getting at the thing (I don't know what it's called) where, if you flip a coin 100 times every day for 10 years, some of those days are going to show extreme results, but that doesn't mean the coin is weighted. Since you're doing it so much, you'd expect it to happen. Or something like that. I don't think my example is quite the same thing, since it's the same coin. A better example is probably a blood test: if you test 1000 different metrics on someone, a few of them are bound to give "statistically significant results" just by chance. But this just seems like a phenomenon you need to adjust for, not a reason to ignore data.
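
To make that last point concrete, here's a rough simulation with made-up numbers (not real data): 1000 "metrics" that are each just 100 fair coin flips, counting how many happen to clear a roughly-5% two-sided significance threshold by chance:

// Rough sketch: 1000 pure-noise "metrics", each just 100 fair coin flips.
function countHeads(flips) {
    var heads = 0;
    for (var i = 0; i < flips; i++) {
        if (Math.random() < 0.5) heads++;
    }
    return heads;
}

var falsePositives = 0;
for (var metric = 0; metric < 1000; metric++) {
    var heads = countHeads(100);
    // |heads - 50| >= 10 is roughly a two-sided 5% threshold for 100 fair flips.
    if (Math.abs(heads - 50) >= 10) falsePositives++;
}
// Usually prints somewhere around 50, even though nothing real is going on.
console.log(falsePositives);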

Replies from: gwern, ChristianKl
comment by gwern · 2022-07-18T16:04:54.829Z · LW(p) · GW(p)

FWIW, I think this is well above 101-level and gets into some pretty deep issues, and sparked some pretty acrimonious debates back during the Sequences when Eliezer blogged about stopping rules & frequentism. It's related to the question "why Bayesians don't need, in theory, to randomize", which is something Andrew Gelman mentioned for years before I even began to understand what he was getting at.

Replies from: adamzerner
comment by Adam Zerner (adamzerner) · 2022-07-18T17:07:36.390Z · LW(p) · GW(p)

Oh that stuff is good to know. Thanks for those clarifications. I actually don't see how it's related to randomization though, so add that to the list of things I'm confused about. My question feels like a question of what to do with the data you got, regardless of whether you utilized randomization in order to get that data.

Replies from: gwern
comment by gwern · 2022-07-18T17:18:43.208Z · LW(p) · GW(p)

It's the same question because both come down to screening off the data-generating process. A researcher who is biased, p-hacking, or outcome-switching is like a world which generates imbalanced/confounded experimental vs 'control' groups: a Bayesian needs to model that data-generating process (like the stopping rule) in order to learn correctly from the data, while pre-registration and explicit randomization make the results independent of those, so a simple generative model is correct.

(So this is why you can get a decision-theoretic justification for Bayesians doing these things even if they are sure they are correctly modeling all confounding etc.: because it is a 'nothing up my sleeve'-esque design which allows sharing information with other agents who have nonshared priors - by committing to a randomization or pre-registration, they can simply take your data at face value and do an update, while if they had to model you as a non-randomized generating process producing arbitrarily biased data in unknown ways, the data would be uninformative and lose almost all of its possible value.)
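
(A toy illustration of the stopping-rule part of this, with made-up numbers rather than anything from the discussion above: a "researcher" who flips a fair coin, checks after every flip, and stops as soon as the running result looks extreme will report "significant"-looking data far more than 5% of the time, which is why the reported numbers can't be interpreted without modeling how they were generated:)

// Toy sketch: flip a fair coin, checking after each flip, and stop as soon as
// the running result clears a ~5% two-sided threshold.
function stopsEarlyLookingSignificant(maxFlips) {
    var heads = 0;
    for (var n = 1; n <= maxFlips; n++) {
        if (Math.random() < 0.5) heads++;
        if (n >= 10 && Math.abs(heads - n / 2) >= 1.96 * Math.sqrt(n) / 2) return true;
    }
    return false;
}

var hits = 0;
for (var trial = 0; trial < 10000; trial++) {
    if (stopsEarlyLookingSignificant(100)) hits++;
}
// Prints well above 0.05: far more than 5% of fair coins get reported as "significant",
// so the stopping rule is part of the data-generating process you have to model.
console.log(hits / 10000);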

comment by ChristianKl · 2022-07-18T08:01:13.191Z · LW(p) · GW(p)

Bayes has nothing to do with the concept of statistical significance. Statistical significance is a concept out of frequentist statistics. A concept that comes with a lot of problems.

Nobody really argues that you should ignore it. If you wanted drug approval, you would likely even have to list it as a potential side effect. That's why the increased lightning-strike risk of the Moderna vaccine was disclosed. It's just that your study doesn't provide good evidence for the effect existing. If you want that evidence, you can run another study to look for it.

comment by myutin · 2022-07-19T07:34:47.620Z · LW(p) · GW(p)

Hey, I'm new here. I'm looking for help on dealing with akrasia. My most common failure mode is as follows: when I have many different tasks to do, I'm not able to start any one of them.

I'm planning on working through the Hammertime sequence: I've asked for 9 days off work, for a total of 13 days free. Will this be achievable/helpful? What other resources are available?

Specs:
DC area. Have read MoR, Sig. Digits, Inadequate Equilibria, and half of the Sequences. Heavy background in Math/CS/Physics.

comment by Flaglandbase · 2022-07-01T22:06:39.009Z · LW(p) · GW(p)

I believe it should be possible at every LessWrong post to make "low quality" comments that would be automatically hidden at the bottom of each comment section, underneath the "serious" comments, so you would have to click on them to make them visible. Such comments would automatically be given -100 points, but in a way that doesn't count against the poster's account karma. The only requirement would be that the commenter genuinely believes they're making a true statement. Replies to such comments would be similarly hidden. Also, certain types of "unacceptable" speech could be banned by the site. This would stimulate out-of-the-box discussion and brainstorming.

Replies from: MondSemmel
comment by MondSemmel · 2022-07-02T00:03:53.692Z · LW(p) · GW(p)

Also certain types of "unacceptable" speech could be banned by the site. This would stimulate out-of-the-box discussion and brainstorming.

By which mechanism do you expect to improve discussion by introducing censorship?

Replies from: Flaglandbase
comment by Flaglandbase · 2022-07-02T01:59:17.644Z · LW(p) · GW(p)

I'm completely opposed to any type of censorship whatsoever, but this site might have two restrictions:

  • Descriptions of disruptive or dangerous new technology that might threaten mankind
  • Politically or socially controversial speech considered beyond the pale by the majority of members or administrators

comment by mikbp · 2022-07-01T11:48:07.364Z · LW(p) · GW(p)

I know there are posts on LW that mention a behaviour and/or productivity tip of the form "if/when X happens, do Y". I don't know what this is called, so I am not able to find any. Could anybody point me in the right direction, please?

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2022-07-01T12:24:01.338Z · LW(p) · GW(p)

Trigger-Action Planning [? · GW] ("implementation intentions" in the academic literature)

Replies from: mikbp
comment by mikbp · 2022-07-01T12:36:51.156Z · LW(p) · GW(p)

Awesome, thanks Kaj!

Replies from: MondSemmel
comment by MondSemmel · 2022-07-04T10:48:52.542Z · LW(p) · GW(p)

Note that Duncan just posted [LW · GW] the relevant chapter from the CFAR Handbook as a standalone LW essay.

Replies from: mikbp
comment by mikbp · 2022-07-04T11:52:15.715Z · LW(p) · GW(p)

Cool, thanks. I'll read it!

comment by Oscar_Cunningham · 2022-07-01T09:24:55.312Z · LW(p) · GW(p)

Is there a way to alter the structure of a futarchy to make it follow a decision theory other than EDT?

comment by MondSemmel · 2022-07-05T14:25:54.480Z · LW(p) · GW(p)

I've found a new website bug: copying & pasting bullet points from LW essays into the comment field fails with weird behavior. I've created a corresponding GitHub issue.

comment by sairjy · 2022-07-23T09:21:24.023Z · LW(p) · GW(p)

I am trying to improve my forecasting skills, and I was looking for a tool that would let me design a graph/network where I could place a statement as a node with an attached probability (confidence level), and then link the nodes so that the joint or disjoint probabilities are computed automatically.

It seems such a tool could be quite useful for a forecast with many inputs.

I am not sure whether Bayesian networks or influence graphs are what I am looking for, or whether they could be used for this purpose. Either way, I haven't found a particularly user-friendly tool for either of them.
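
To show the kind of calculation I mean (hand-rolled, with made-up numbers and node names, rather than any particular tool):

// Two linked "nodes" with made-up probabilities:
// A = "supplier ships on time", B = "product launches this quarter".
var pA = 0.7;            // P(A)
var pBgivenA = 0.8;      // P(B | A)
var pBgivenNotA = 0.2;   // P(B | not A)

var pAandB = pA * pBgivenA;                          // joint: 0.56
var pB = pA * pBgivenA + (1 - pA) * pBgivenNotA;     // marginal: 0.62

console.log(pAandB, pB);

A tool would just automate this kind of chaining across many more nodes.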

comment by Alex Vermillion (tomcatfish) · 2022-07-17T03:37:50.262Z · LW(p) · GW(p)

What's up with editing groups? One of the organizers and I get an error like

app.operation_not_allowed Error submitting form: Localgroup.update

If this is intentional, can the error message show that better? I have no idea if I'm disallowed from taking this action or if there is a bug!

Replies from: tomcatfish
comment by Alex Vermillion (tomcatfish) · 2022-07-26T16:23:56.750Z · LW(p) · GW(p)

This is a real question! We do not know how to edit our group!

comment by Lost Futures (aeviternity1) · 2022-07-10T23:09:20.211Z · LW(p) · GW(p)

Anyone else shown DALL-E 2 to others and gotten surprisingly muted responses? I've noticed some people react to seeing its work with a lot less fascination than I'd expect for a technology with the power to revolutionize art. I stumbled on a post in the dalle2 subreddit describing a similar anecdote, so maybe there's something to this.

comment by isle9 · 2022-07-09T13:22:45.161Z · LW(p) · GW(p)

Hello, is there a reason why we can't bookmark comments?

Also, what is the point of bookmarks supposed to be? Are they meant as a read-it-later tool, or for favoriting things?

Replies from: gilch, Raemon, habryka4
comment by gilch · 2022-07-20T16:13:20.447Z · LW(p) · GW(p)

Normal browser bookmarks do work. Use the link icon between the date and karma to get the URL for one.

comment by Raemon · 2022-07-11T17:40:08.386Z · LW(p) · GW(p)

There's not a particular reason you can't, except for "we haven't prioritized it", and "it's not obviously worth the UI-complexity-creep". 

Replies from: tomcatfish
comment by Alex Vermillion (tomcatfish) · 2022-07-17T03:42:19.612Z · LW(p) · GW(p)

I'll note that there are some really neat comments on here, and that the button could be hidden in our 3-dots on the top left of comments (though I've never really ached to bookmark a comment before)

comment by habryka (habryka4) · 2022-07-11T00:37:22.597Z · LW(p) · GW(p)

Different people use it for different purposes (just like real bookmarks!). I think the most common use-case is to mark something as to be read for later.

comment by mikbp · 2022-07-27T18:00:33.195Z · LW(p) · GW(p)

Some days (weeks?) ago I saw a tweet from somebody saying something like: "What would you guys think of paying AI developers to change careers (to slow AI development)?" I am now not able to find it anymore.

Has anyone seen it as well and could link it to me here, please?

Thanks!

Is this kind of thinking gaining momentum?

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2022-07-28T12:17:58.992Z · LW(p) · GW(p)

This maybe?

Replies from: mikbp
comment by mikbp · 2022-07-28T17:18:41.855Z · LW(p) · GW(p)

Yes! Thanks

comment by Torello · 2022-07-24T19:02:33.924Z · LW(p) · GW(p)

My review of Four Thousand Weeks: Time Management for Mortals.  

There's a fair amount of productivity and life-hack content on LessWrong; I think this book is a good addition to that type of thinking, and it might be a useful counterpoint to existing lenses or approaches.

https://digitalsauna.wordpress.com/2022/07/24/four-thousand-weeks-by-oliver-burkeman-2021-second-review/ 

comment by Noosphere89 (sharmake-farah) · 2022-07-24T00:12:47.491Z · LW(p) · GW(p)

I notice one of the big reasons people don't view AI misalignment as a threat is that they treat the human-AI gap like the gap between humans and corporations, where existential risk is low or nonexistent.

The hardness of the alignment problem comes from the fact that even one order of magnitude of difference in intelligence is essentially the difference between humans and the smarter animals like horses, not the difference between human beings and corporations. Very rarely does the more intelligent entity not hurt or kill the less intelligent one, and power differentials this large go very badly for the less intelligent side. Thus the default outcome of misaligned AI is catastrophe or extinction.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2022-07-24T06:53:03.705Z · LW(p) · GW(p)

power differentials this large go very badly for the less intelligent side

With a bit of sympathy/compassion and cosmic wealth, this doesn't seem so inevitable. The question is the probability of settling on that bit of sympathy, and whether it sparks before or after some less well-considered disaster.

comment by costlySignal · 2022-07-17T05:34:14.798Z · LW(p) · GW(p)

Best of Econtwitter and Guzey's Best of Twitter are two Twitter newsletters I enjoy, does anyone know of similar roundup newsletters?

comment by Aaro Salosensaari (aa-m-sa) · 2022-07-14T19:19:37.048Z · LW(p) · GW(p)

Can anyone recommend good reading material on the economic calculation problem? 

Replies from: Kaj_Sotala, aa-m-sa
comment by Kaj_Sotala · 2022-07-15T13:02:25.459Z · LW(p) · GW(p)

It's been a while since I read it, and it's a blog post rather than anything formal, but I recall liking https://crookedtimber.org/2012/05/30/in-soviet-union-optimization-problem-solves-you/ .

Replies from: aa-m-sa
comment by Aaro Salosensaari (aa-m-sa) · 2022-07-17T12:59:44.804Z · LW(p) · GW(p)

Thanks. I had read it years ago, but didn't remember that he makes many more points than just the O(n^3.5 log(1/h)) scaling, and provides useful references (other than Red Plenty).

comment by Aaro Salosensaari (aa-m-sa) · 2022-07-15T12:45:19.165Z · LW(p) · GW(p)

(I initially thought it would be better not to mention the context of the question, as it might bias the responses. OTOH the context could make the marginal LW poster more interested in providing answers, so here it is:)

It came up in an argument that the difficulty of the economic calculation problem could be a difficulty for a hypothetical singleton, insomuch as a singleton agent needs a certain amount of compute relative to the economy in question. My intuition consists of two related hypotheses. First, during any transition period in which an agent participates in a global economy where most other participants are humans ("economy" could be interpreted widely to include many human transactions), can the problem of economic calculation provide some limits on how much computation would be needed for the agent to become able to manipulate or dominate the economy? (Is it enough for the agent to be marginally more capable than any other participant, or does that advantage get swamped by the sheer size of the economy if the economy is large enough?)

Secondly, if a Mises/Hayek-style answer is correct and the economic calculation problem is solved most efficiently by distributed calculation, it could imply that a single agent in charge of a number of processes on a "global economy" scale could be out-competed by a community of coordinating agents. [1]

However, I would like to read more to judge whether my intuitions are correct. Maybe all of this is already rendered moot by results I simply do not know how to find.

([1] Related but tangential: Can one give a definition of when a distributed computation is no longer a singleton but a more-or-less aligned community of individual agents? My hunch is that there could be a characterization related to the speed of communication between agents/processes within a singleton. Ultimately the speed of light is prone to mandate some limitations.)

comment by Nnotm (NNOTM) · 2022-07-08T18:53:45.301Z · LW(p) · GW(p)

I think there was a post/short story on LessWrong a few months ago about a future language model becoming an ASI because someone asked it to pretend it was an ASI agent and it correctly predicted the next tokens, or something like that. Anyone know what that post was?

Replies from: gwern
comment by gwern · 2022-07-08T19:03:59.417Z · LW(p) · GW(p)

https://www.lesswrong.com/posts/a5e9arCnbDac9Doig/it-looks-like-you-re-trying-to-take-over-the-world [LW · GW]

Replies from: NNOTM
comment by Nnotm (NNOTM) · 2022-07-08T19:14:56.880Z · LW(p) · GW(p)

Thanks, I will read that! Though just after you commented I found this in my history, which is the post I meant: https://www.lesswrong.com/posts/kpPnReyBC54KESiSn/optimality-is-the-tiger-and-agents-are-its-teeth [LW · GW]

comment by jp · 2022-07-07T12:17:52.787Z · LW(p) · GW(p)

Is there a way to hide the curated sequences from the frontpage?

Replies from: Raemon
comment by Raemon · 2022-07-07T15:01:26.347Z · LW(p) · GW(p)

Not yet but I’ll likely be adding it soon. (I’m also going to be overhauling how curated sequences work and may do that first. I might actually end up re-disabling recommended sequences for older accounts until I’ve iterated on it more, then re-enable it, this time with a dismiss button)

comment by Doron · 2022-07-04T17:41:51.715Z · LW(p) · GW(p)

Hi,

Got here through a recent "Browser" link. Looks interesting. 

Doron