Open & Welcome Thread November 2021

post by Ruby · 2021-11-01T23:43:55.006Z · LW · GW · 35 comments

If it’s worth saying, but not worth its own post, here's a place to put it.

If you are new to LessWrong, here's the place to introduce yourself. Personal stories, anecdotes, or just general comments on how you found us and what you hope to get from the site and community are invited. This is also the place to discuss feature requests and other ideas you have for the site, if you don't want to write a full top-level post.

If you want to explore the community more, I recommend reading the Library, [? · GW] checking recent Curated posts [? · GW], seeing if there are any meetups in your area [? · GW], and checking out the Getting Started [LW · GW] section of the LessWrong FAQ [LW · GW]. If you want to orient to the content on the site, you can also check out the Concepts section [? · GW].

The Open Thread tag is here [? · GW]. The Open Thread sequence is here [? · GW].

35 comments

Comments sorted by top scores.

comment by renxida · 2021-11-10T06:47:14.101Z · LW(p) · GW(p)

Hi! I'm new here. I was introduced by a friend of mine, and reading some of these blog articles makes me feel really sad about not having encountered all this earlier.

I grew up homeschooled in a rather closed-minded church environment in rural / non-coastal urban China. When I talk to some friends here in Northern Virginia, I get really jealous of what they got to read and learn growing up. I get that sensation extra strongly when I'm browsing here.

I love the articles and comments here at LessWrong, but I get this sense that I'm reading holy writing by gods on pedestals.

I would love to get some help remedying that, to see the people behind the great and awesome ideas. Shoot me an email at cedar.ren@gmail.com if you feel comfortable telling a stranger about your humanity. The things you wish you had growing up. How you became who you are today. Things like that.

Oh, and my English name is Cedar. My mum wanted me to be upright and incorruptible. Not sure how well that hope worked out, but I joke about being a Pokémon professor because I have a tree name.

Replies from: Jon Garcia
comment by Jon Garcia · 2021-11-15T00:51:18.898Z · LW(p) · GW(p)

Hi! I'm also sort of new here (only recently created an account, but have been reading sporadically for years). For most of my life, I was actually a young-earth creationist, so I know a bit about coming from a closed-minded religious environment. Ironically, I first started to read LessWrong while I was still an ardent YEC (well before LessWrong 2.0), but I didn't feel that my position was in contradiction to rational thinking. In fact, I prided myself on being able to see through the flaws in creationist arguments whose conclusions agreed with my beliefs, and on being able to grasp "evolutionists'" arguments from their perspective (but of course, being able to see the flaws in them as well). Even now, I would say that I understood evolution better back then than most non-biologists who accept it.

The only thing keeping me a YEC for so long (until the end of grad school, if you can believe it) was a very powerful prior moral obligation to maintain a biblically consistent worldview that had been thoroughly indoctrinated into me growing up. It took way more weight of evidence than it should have to convince me (1) that mutation + selection pressure is an effective way of generating diverse and viable designs, (2) that gene regulatory networks produce sufficient abstraction in biological feature space to allow evolutionary search methods to overcome the curse of dimensionality, (3) that the origin of all species from a common ancestor is mathematically possible, (4) that it is statistically inevitable over Earth history, (5) that evolution is in fact homeomorphic to reinforcement learning and thus demonstrably plausible, (6) that all possible ways of classifying species result in the same exact branching tree pattern, (7) that if God did create life, He had to have done so using an evolutionary algorithm indistinguishable in its breadth and detail from the real world, and (8) that the evidence for evolution as a matter of historical fact is irrefutable. It was after realizing all of this that I had a real crisis of faith, which led me to stumble across Eliezer's Crisis of Faith [LW · GW] article after years of not reading LessWrong. I remember that article, among many others, helped me quite a bit to sort through what I believe and why.

I'm not sure precisely why I stopped reading LessWrong back when I was a YEC, but I think it may have had something to do with me being uncomfortable with Eliezer's utter certainty in the many-worlds interpretation of quantum mechanics. Such a view would completely destroy the idea that this world is the special creation of an Omni-Max God who has carefully been steering Earth history as part of His Grand Design. Admittedly, I did consider the possibility that the quantum multiverse could be God's way of running through infinite hypothetical scenarios before creating the One True Universe with maximum expected Divine Utility. However, this didn't comfort me much, since it meant that with probability = 1, everything we have ever known and valued is just one of God's hypothetical scenarios, to be forgotten forever once this scenario plays out to heat death. I've since learned to make peace with Many Worlds QM, though.

Replies from: Ruby, lsusr
comment by Ruby · 2021-11-15T02:39:20.720Z · LW(p) · GW(p)

Welcome! That's an interesting path you've followed.

Replies from: Jon Garcia
comment by Jon Garcia · 2021-11-15T17:00:25.109Z · LW(p) · GW(p)

Thanks. I think it's important not to forget the path I've taken. It's a major part of my identity even though I no longer endorse what were once my most cherished beliefs, and I feel that it helps connect me with the greater human experience. My parents and (ironically) my training in apologetics instilled in me a thirst for truth and an alertness toward logical fallacies that took me quite far from where I started in life. I guess that a greater emphasis on overcoming confirmation bias would have accelerated my truth-seeking journey a bit more. Unfortunately and surprisingly for a certain species of story-telling social primates, the truth is not necessarily what is believed and taught by the tribe. An idea is not true just because people devote lifetimes to defending it. And an idea is not false just because they spend lifetimes mocking it.

The one thing that held me back the most, I think, is my rather strong deontological instinct. I always saw it as my moral duty to apply the full force of my rational mind to defending the Revealed Truth. I was willing to apply good epistemology to modify my beliefs arbitrarily far, as long as it did not violate the moral constraint that my worldview remain consistent with the holistic biblical narrative. Sometimes that meant radically rethinking religious doctrines in light of science (or conflicting scriptures), but more often it pushed me to rationalize scientific evidence to fit with my core beliefs.

I always recognized that all things that are true are necessarily mutually consistent, that we all inhabit a single self-consistent Reality, and that the Truth must be the minimum-energy harmonization of all existing facts. However, it wasn't until I was willing to let go of the moral duty to retain the biblical narrative in my set of brute facts that the free energy of my worldview dropped dramatically. It was like a thousand high-tension cables binding all my beliefs to a single (misplaced) epistemological hub were all released at once. Suddenly, everything else in my worldview began to fall into place as all lines of evidence I had already accumulated pulled things into a much lower-energy configuration.

It's funny how a single powerful prior or a single moral obligation can skew everything else. I wish it were a more widely held virtue to deeply scrutinize one's most cherished beliefs and to reject them if necessary. Oh well. Maybe in the next million years if we can set up the social selection pressures right.

comment by lsusr · 2021-11-15T11:10:36.118Z · LW(p) · GW(p)

Welcome!

[T]he many-worlds interpretation of quantum mechanics. Such a view would completely destroy the idea that this world is the special creation of an Omni-Max God who has carefully been steering Earth history as part of His Grand Design.

One planet. A hundred billion souls. Four thousand years. Such small ambitions for an ultimate being of infinite power like Vishnu, Shiva or Yahweh. It seems more appropriately scoped for a minor deity.

Replies from: Jon Garcia
comment by Jon Garcia · 2021-11-15T17:18:22.063Z · LW(p) · GW(p)

Well, at the time I had assumed that Earth history was a special case, a small stage temporarily under quarantine from the rest of the universe where the problem of evil could play itself out. I hoped that God had created the rest of the universe to contain innumerable inhabited worlds, all of which would learn the lesson of just how good the Creator's system of justice is after contrasting against a world that He had allowed to take matters into its own hands. However, now that I'm out of that mindset, I realize that even a small Type-I ASI could easily do a much better job instilling such a lesson into all sentient minds than Yahweh has purportedly done (i.e., without all the blood sacrifices and genocides).

comment by Mitchell_Porter · 2021-11-11T01:06:46.155Z · LW(p) · GW(p)

Does anyone have an informed comment about the use of gauge theory in decision theory? 

Eric Weinstein just gave a controversial econophysics talk [edit: link added] at the University of Chicago about "geometric marginalism", which uses geometric techniques from Yang-Mills theory to model changing preferences. 

If this can be used in economics, it can probably be used in decision theory in general, and I see at least one example of a physicist doing this ("decision process theory"), but I don't know how it compares to conventional approaches. 

Replies from: Mitchell_Porter
comment by Mitchell_Porter · 2021-12-09T16:39:03.219Z · LW(p) · GW(p)

arXiv now carries a "response to economics as gauge theory", by a physicist turned ML researcher known for co-authoring a critique of Weinstein's unified theory of physics earlier this year. As with the physics critique, the commentary seems pretty basic and hardly the final word on anything, but it's notable because the critic works at DeepMind (he's presenting a paper at NeurIPS this afternoon).

comment by philip_b (crabman) · 2021-11-02T10:17:13.278Z · LW(p) · GW(p)

I am looking for software which will help me learn to sing a song I want to sing. I have extracted its vocal track as an mp3 file. I want to sing it at the same frequencies it's sung in the vocal track, or perhaps one or two octaves lower. I want a program which will help me do that. I imagine the program would load the mp3 file, then I would tell it that I want to sing one octave lower, and it would, in real time, draw me a graph of the frequencies I am singing together with the frequencies in the mp3 file. I imagine something like this: the frequencies I am supposed to sing in red, my actual frequencies in yellow. Operating systems suitable for me are Linux, iPadOS, and Android. If, instead of loading an mp3 file into the program, I have to transcribe the music into sheet music myself, that's worse but perhaps still OK. If you don't know of such a program but know something else I can use to achieve my goal, do tell.
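(As a rough illustration of the offline version of what's being asked for — not a recommendation of any particular program — here is a minimal sketch in Python, assuming the librosa and matplotlib packages are available and that an mp3 decoder such as ffmpeg is installed. The file names "vocal.mp3" and "my_take.wav" are placeholders for the extracted vocal track and a recording of one's own attempt; it plots the two pitch tracks after the fact rather than in real time.)

```python
# A rough offline sketch (not real-time), assuming librosa and matplotlib are
# installed and that an mp3 decoder (e.g. ffmpeg) is available to librosa.
# "vocal.mp3" and "my_take.wav" are placeholder file names.
import librosa
import matplotlib.pyplot as plt

def pitch_track(path):
    """Estimate the fundamental frequency (f0) of a monophonic vocal recording."""
    y, sr = librosa.load(path, sr=None)
    f0, voiced_flag, voiced_prob = librosa.pyin(
        y,
        fmin=librosa.note_to_hz("C2"),
        fmax=librosa.note_to_hz("C6"),
        sr=sr,
    )
    times = librosa.times_like(f0, sr=sr)  # frame times in seconds
    return times, f0  # f0 is NaN where no pitch was detected

ref_times, ref_f0 = pitch_track("vocal.mp3")    # the extracted vocal track
own_times, own_f0 = pitch_track("my_take.wav")  # a recording of your attempt

# Shift the reference down one octave (halve the frequency) and overlay both.
plt.plot(ref_times, ref_f0 / 2, color="red", label="target (one octave down)")
plt.plot(own_times, own_f0, color="gold", label="your take")
plt.xlabel("time (s)")
plt.ylabel("frequency (Hz)")
plt.legend()
plt.show()
```

Comparing a recording after each take loses the real-time feedback, but it sidesteps the latency issues that make a live pitch display tricky.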

Replies from: gilch, Charlie Steiner, NicholasKross, Bezzi
comment by gilch · 2021-11-08T05:10:06.675Z · LW(p) · GW(p)

You're probably looking for UltraStar Deluxe. You may have to transcribe your song with the editor, but many songs have already been transcribed and are available online. Depending on the quality of your .mp3, you may still have to sync the transcription to it with the editor.

comment by Charlie Steiner · 2021-11-02T19:02:57.404Z · LW(p) · GW(p)

I have never heard of such a program, sorry. Looks like real-time pitch monitors exist, but a cursory search didn't yield anything that will save a second track and let you compare and restart back to a selected point.

I think most singers would solve this problem by doing the pitch comparison in auditory format rather than visual / spatial. I.e. they'd sing along :)

comment by Nicholas / Heather Kross (NicholasKross) · 2021-11-07T21:28:21.114Z · LW(p) · GW(p)

The closest thing I can think of is SmartMusic, but I have no clue whether or how well it supports vocals.

comment by Bezzi · 2021-11-02T14:42:10.490Z · LW(p) · GW(p)

I don't understand why you explicitly want to visualize the frequencies if you're already able to read (and write) sheet music. I mean, learning a tune from the frequency graph seems quite awkward compared to learning from the score... a fairly well-trained musician should be able to sing straight from the score without ever having heard the original (I can, at least for tonal music). If you can write the sheet music yourself, or if you have a suitable MIDI file, the simplest thing you can do is to get MuseScore and repeatedly play back the song.

Replies from: crabman
comment by philip_b (crabman) · 2021-11-02T15:33:07.037Z · LW(p) · GW(p)

I haven't had any practice reading sheet music in about 3 years, and even then I was a novice and quite slow at it. And I've never tried transcribing music into sheet form, but I think I can, especially given a tuner, although it'll take a while.

Replies from: Bezzi
comment by Bezzi · 2021-11-02T19:16:13.766Z · LW(p) · GW(p)

In this case I recommend trying to automate the transcription process. You could use something like piano2notes (yet another AI browser tool) to get the MIDI/transcription directly from the mp3 file.

I've never used such services before today, but I've just checked the quality with a sample mp3 tune written on the fly in MuseScore, and it seems quite good (at least for tonal music with very recognizable melodies). Here's the result:

[Image: original score above, automated transcription below]

There's also a free service called Soundslice that claims to be optimized for use cases like yours, but I didn't check it.

comment by MondSemmel · 2021-11-04T15:10:10.671Z · LW(p) · GW(p)

To those who haven't seen it and would care about such a thing: there's currently a short-term donation-matching event by the site Every.org. (That is, the event was supposed to run for all of November, but the funds have mostly been depleted already.) Here [EA · GW] is the corresponding thread on the EA Forum. Matching is a bit better than 1:1 and limited to $100 per charity, but here [EA(p) · GW(p)] is a subthread with a few offers for donation trades.

comment by Aaro Salosensaari (aa-m-sa) · 2021-11-18T20:26:05.782Z · LW(p) · GW(p)

The open thread is presumably the best place for low-effort questions, so here goes:

I came across this post from 2012: Thoughts on the Singularity Institute (SI) [LW · GW] by Holden Karnofsky (then-Co-Executive Director of GiveWell). Interestingly enough, some of the object-level objections (under the subtitle "Objections") Karnofsky raises[1] are similar to some points that came up in the Yudkowsky/chathamroom.com discussion and the Ngo/Yudkowsky dialogue I read the other day (or rather, read parts of, because they were quite long).

What are people's thoughts today about that post and the objections it raised? What does the 10-year (-ish, 9.5-year) retrospective look like?

Some specific questions.

Firstly, how would his arguments be responded to today? Are there any substantial novel counter-objections? (I ask because it's more fun to ask than to start reading through the Alignment Forum archives.)

Secondly, predictions. When I look at the bullet points under the subtitle "Is SI the kind of organization we want to bet on?", I think I can interpolate a prediction Karnofsky could have made: that in 2012, SI [2] neither had sufficient capability nor engaged in activities likely to achieve its stated goals ("Friendliness theory", or Friendly AGI before others), and so was not worth a GiveWell funding recommendation in 2012.

A perfect counterfactual experiment this is not, but given what people on LW today know about what SI/MIRI did achieve in the NoGiveWell!2012 timeline, was Karnofsky's call correct, incorrect, or something else? (As in, did his map of the situation in 2012 match reality better than some other map, or was it poor compared to other maps?) What inferences could be drawn, if any?

I would be curious to hear perspectives from MIRI insiders, too (edit: but not only them). And I noticed Holden Karnofsky looks active here on LW [? · GW], though I have no idea how to ping him.

[1] Tool AI; the idea that advances in tech would bring insights into AGI safety.

[2] Succeeded by MIRI, I suppose.

edit2: fixed ordering of endnotes.

Replies from: steve2152
comment by Steven Byrnes (steve2152) · 2021-11-22T13:46:46.385Z · LW(p) · GW(p)

how would his arguments be responded to today

The old rebuttals I'm familiar with are Gwern's and Eliezer's [LW · GW] and Luke's [? · GW]. Newer responses might also include things like Richard Ngo's AGI safety from first principles [? · GW] or Joe Carlsmith's report on power-seeking AIs [LW · GW]. (Risk is disjunctive; there are a lot of different ways that reality could turn out worse than Holden-2012 expected.) Obviously Holden himself changed his mind; I vaguely recall that he wrote something about why, but I can't immediately find it.

Holden Karnofsky looks active here on LW [? · GW],

I'm not sure that's accurate. His blog posts are getting cross-posted from his account, but that could also be the work of an LW administrator (with his permission).

Replies from: gwern
comment by gwern · 2021-11-22T16:23:34.851Z · LW(p) · GW(p)

I think my rebuttal still basically stands, and my predictions have been borne out, such as the prediction that the many promises that autonomous drones would never be fully autonomous would collapse within years. We apparently may have fully autonomous drones killing people now in Libya, and the US DoD has walked back its promises that humans would always authorize actions and now merely wants some principles like being 'equitable' or 'traceable'. (How very comforting. I'm glad we're building equity in our murderbots.) I'd be lying if I said I was even a little surprised that the promises didn't last a decade before collapsing under the pressures that make tool AIs want to be agent AIs.

I don't think too many people are still going around saying "ah, but what if we simply didn't let the AIs do things, just like we never let them do things with drones? problem solved!" so these days, I would emphasize more what we've learned about the very slippery and unprincipled line between tool AIs and agent AIs due to scaling and self-supervised learning, given GPT-3 etc. Agency increasingly looks like Turing-completeness or weird machines or vulnerable insecure software: the default, and difficult to keep from leaking into any system of interesting intelligence or capabilities, and not something special that needs to be hand-engineered in and which can be assumed to be absent if you didn't work hard at it.

comment by Hazard · 2021-11-13T16:08:16.821Z · LW(p) · GW(p)

I remember at some point finding a giant messy graph that was all of The Sequences and the links between posts. I can't track down the link, anyone remember this and have a lead?

Replies from: niplav
comment by niplav · 2021-11-17T13:30:39.875Z · LW(p) · GW(p)

You're probably looking for this (via the old FAQ).

comment by Yoav Ravid · 2021-11-08T06:30:15.333Z · LW(p) · GW(p)

Is there any rationalist fiction about a group of rationalists (instead of just one rationalist protagonist)?

Replies from: MondSemmel, Ruby
comment by MondSemmel · 2021-11-08T10:27:00.111Z · LW(p) · GW(p)

In general, the best place to ask such a question is probably /r/rational on Reddit.

As for specific works: I haven't read it yet, but Duncan Sabien (a former CFAR instructor) has written an epic rationalist fanfic of Animorphs, called Animorphs: The Reckoning, which features a cast of characters with their own POV chapters. Supposedly, knowledge of the base work is not required.

Replies from: Yoav Ravid
comment by Yoav Ravid · 2021-11-08T10:42:06.896Z · LW(p) · GW(p)

Thanks! I'll check out Duncan's work, and if I don't get satisfactory responses here then I'll consider venturing into the far and dark lands of Reddit :P

Edit: I got immediately hooked. I'm now at chapter 10 (9%), and I really like it. Thanks for the recommendation! 

comment by Ruby · 2021-11-08T17:46:56.956Z · LW(p) · GW(p)

A Song for Two Voices does kind of have a rationalist protagonist, but the other characters also try to be rational (though they're generally less advanced). I guess in that way it's a bit like HPMOR, though I do think the other characters here are trying a bit harder.

comment by MondSemmel · 2021-11-07T12:15:17.768Z · LW(p) · GW(p)

Continuing my ability to stumble on weird niche bugs by accident:

  1. Take a comment with bullet points nested two levels deep, like this one [LW(p) · GW(p)].
  2. Copy a bullet point nested two-deep via Ctrl+C.
  3. Paste elsewhere on Less Wrong via Ctrl+V, e.g. into a comment here. (Do not use Ctrl+Shift+V, which pastes plaintext; this does not trigger the bug.)

Expected behavior: The text is pasted into the new comment, either with the nested bullet points or without them.

Instead, here's a gif of what actually happens:

What you see there: pasting doesn't work. Instead, a weird grey box appears, which also messes with anything I subsequently type into the editor. (The grey box disappears if I click elsewhere in the editor, but I didn't record that part.)

PS: If you mods want me to post this as a GitHub issue, I can do so; or, if you prefer, you can do it yourselves. I just don't want this bug report to get lost.

Replies from: Ruby
comment by Ruby · 2021-11-07T17:10:28.507Z · LW(p) · GW(p)

Thanks for the report! Yeah, a good deal of weirdness lives in editor edge cases.

A GitHub issue would be good, but here is also fine if it's easier.

Replies from: MondSemmel
comment by MondSemmel · 2021-11-08T15:39:02.639Z · LW(p) · GW(p)

Have created a GitHub issue here.

comment by nmehndir · 2021-11-26T21:03:40.757Z · LW(p) · GW(p)

[deleted]

Replies from: ben-carew
comment by bcare18 (ben-carew) · 2021-11-27T02:49:34.119Z · LW(p) · GW(p)

Absolutely! I'm just finishing a bachelor's in physics. Email me at B78980988@gmail.com.

Replies from: nmehndir
comment by nmehndir · 2021-11-28T18:21:46.754Z · LW(p) · GW(p)

[deleted]

comment by khafra · 2021-11-25T14:19:34.158Z · LW(p) · GW(p)

I've used Eliezer's prayer to good effect, but it's a bit short. And I have considered The Sons of Martha, but it's a bit long.

Has anyone, in their rationalist readings, found something that would work as a Thanksgiving invocation of a just-right length?

Replies from: Yoav Ravid
comment by Yoav Ravid · 2021-11-25T14:22:40.949Z · LW(p) · GW(p)

Perhaps you can adapt something from Dennett's "Thank Goodness!"

comment by Amit Dubey (amit-dubey) · 2021-11-22T23:23:57.213Z · LW(p) · GW(p)

Hi, I just found this site while looking for "textbooks I should read."

I recently had this thought in my head that most popular non-fiction books or radio shows on science aren't well received by scientists working in that field, no matter if it's Guns, Germs, and Steel; Sapiens; Freakonomics; The Hidden Brain; or anything by Malcolm Gladwell. From what I understand, the more popular the book, the more likely it is to focus on an easy-to-market narrative at the expense of a factual and balanced presentation of the evidence.

This isn't very concrete, just an idea bouncing around my head as a way of introducing myself.

Replies from: gilch