Open & Welcome Thread - June 2022
post by MondSemmel · 2022-06-04T19:27:45.197Z · LW · GW · 30 comments
If it’s worth saying, but not worth its own post, here's a place to put it.
If you are new to LessWrong, here's the place to introduce yourself. Personal stories, anecdotes, or just general comments on how you found us and what you hope to get from the site and community are invited. This is also the place to discuss feature requests and other ideas you have for the site, if you don't want to write a full top-level post.
If you want to explore the community more, I recommend reading the Library [? · GW], checking recent Curated posts [? · GW], seeing if there are any meetups in your area [? · GW], and checking out the Getting Started [? · GW] section of the LessWrong FAQ [? · GW]. If you want to orient to the content on the site, you can also check out the new Concepts section [? · GW].
The Open Thread tag is here [? · GW]. The Open Thread sequence is here [? · GW].
30 comments
Comments sorted by top scores.
comment by Lost Futures (aeviternity1) · 2022-06-16T00:25:41.592Z · LW(p) · GW(p)
Georgists, mandatory parking minimum haters, and housing reform enthusiasts welcome!
I recently ran across a fascinating economics paper, Housing Constraints and Spatial Misallocation. The paper contends that restrictive housing regulations depressed American economic growth by an eye-watering 36% between 1964 and 2009.
That's a shockingly high figure, but I found the arguments rather compelling. The paper itself now boasts over 500 citations. I've searched for rebuttals but only stumbled across a post by Bryan Caplan identifying a math error in the paper that led to an understatement(!) of the true economic toll.
This paper should be of great interest to anyone curious about housing regulation and zoning reform, Georgism, perhaps even The Great Stagnation [? · GW] of total factor productivity since the 70s. (Or just anyone who likes the idea of making thousands of extra dollars annually.)
If there's interest, I'd like to write a full-length post diving deeper into this paper and examining its wider implications.
↑ comment by ChristianKl · 2022-06-17T10:48:18.737Z · LW(p) · GW(p)
I'd be interested in a full-length post delving deeper into it.
comment by Alex Vermillion (tomcatfish) · 2022-06-24T21:09:51.580Z · LW(p) · GW(p)
I've got 2 thoughts inspired by What's it like to have sex with Duncan? [LW · GW], which I'm portal-ing over from a comment over there [LW(p) · GW(p)].
- The personal/frontpage distinction [LW(p) · GW(p)] is basically meaningless unless those pages start looking more different than they do now.
- Why is the NSFW tag opt-out [LW(p) · GW(p)] instead of opt-in? This is not how basically any website works! It's a really really really small thing that could potentially burn me when I send a cool link[1] to a buddy, they try to find the rest of the series, and end up with a description of someone's sex habits!
To clarify, I really don't care about having these things on LW, and personally support having them here vs elsewhere (though I've not put a lot of thought into this yet). But please let me send posts to kids I know without the sex stuff being visible by default in a way that makes it look totally normal. (To those of you who are going to say "Sex doesn't hurt anyone, let them see it!": sure, maybe sex hurts no one, but it's your job to convince the sexually conservative parents of the bright kid I sent a link to, not to convince me.)
of which Duncan, for example, has no shortage ↩︎
↑ comment by Alex Vermillion (tomcatfish) · 2022-06-24T21:12:10.428Z · LW(p) · GW(p)
Concrete policy item #1: Make personal blogs and frontpages actually look different if more sensitive stuff is going to be moved onto personal blogs. This says something like "Hey, that thing you've heard of, 'The Sequences', is a wholly different thing from this, even though the URL looks similar."
↑ comment by Duncan Sabien (Deactivated) (Duncan_Sabien) · 2022-06-24T21:23:50.519Z · LW(p) · GW(p)
Note: the NSFW tag literally did not exist until I made the relevant post; it is not that the LW team "solved this problem wrong" so much as that they haven't yet attempted to solve it at all, because it hadn't come up.
↑ comment by Alex Vermillion (tomcatfish) · 2022-06-24T21:43:34.472Z · LW(p) · GW(p)
I figured that's likely true, given that it seems to be the only item in the tag. I should have mentioned that I'm raising these points because I think
- It's probably going to be raised anyway, and I trust myself (given my relatively low stake in this) not to blow up about it
- Subpoint: Probably not that many people have weighed in, given that I saw no discussion on the post and none here
- Subpoint: I (personally) found the post to be non-harmful on the object level, so it's easier to raise the point now rather than later, when the topic might be more argument-prone
- Now is the easiest time to change this, given that we don't have 100s of posts to sort through if we need to fix something manually
- The ""harm"" isn't yet done, so we can avert basically all of it
- [I'm forgetting one of the points I intended to type and will add it somewhere if I remember it]
↑ comment by Raemon · 2022-06-24T22:16:21.093Z · LW(p) · GW(p)
I don't currently have a strongly endorsed position here, but one of my background thoughts is that I think most people/society put sex into a weird taboo bucket for reasons that don't make sense, and I don't know that I want LW to play into that.
If I'm sending an impressionable kid here, I'm much more worried about them reading frank discussions about how the world might end than a discussion of sex, and the former is much more common on LW at the moment. (And while some parents might be more freaked out about sex, I actually expect a fair number of parents to also be worried about the world-ending stuff. My parents were fairly worried about me getting into LW back in the day.)
I think there are times LW has to make tradeoffs between "just talking about things sensibly" and "caving to broader societal demands". But I think the default expectation should be "just talk about things sensibly", and if there's something particularly wrong with one particular topic (sex, or politics, or whatnot), it's the job of the people arguing we shouldn't have a discussion (or should hide it) to show that it's concretely important to hide.
I have heard several people mention that the recent sex post was offputting to them, and I think it's worth tracking that cost. But right now there's been... maybe two posts about sex in 5 years? The previous one I can recall [LW · GW] offhand was actually making some important epistemic points.
It's expensive to build new features, so I don't think it makes sense to prioritize this that much until it's become a more common phenomenon.
↑ comment by Alex Vermillion (tomcatfish) · 2022-06-25T03:14:12.211Z · LW(p) · GW(p)
I'll note that I had no issue with the post you linked, or this one [LW · GW], both of which use an example which is just sex-flavored and therefore (in my opinion) absolutely harmless. The opposition to those 2 posts actually confused me quite a bit and showed me that a lot of people are modeling vulgarity differently than I am!
Again, I totally agree that there shouldn't be anything harmful about any of these posts, but I do think there is some kind of line to draw between "we said the word 'dildo' to make a point" and "the post is literally just data about someone's sex life", and I think this is kind of the easiest time to draw that line, instead of later. However, I get what you're going for.
I don't think it's super productive to argue my case here much further beyond making sure I word it once clearly, so I'll do that and then leave it alone unless someone else cares. I think it's an error to say "society has an issue with being overly sensitive, and besides, we have stuff that's way more harmful", both because (1) we actually can still be affected by society, or succeed less at our goals by not conforming in areas where norms are well established, and (2) that's just as much an argument for putting the end-of-the-world stuff behind an opt-in too (which would probably actually make a ton of people happy?). (I'm gesturing at something similar to "proving too much" [? · GW] here.)
↑ comment by Alex Vermillion (tomcatfish) · 2022-06-24T21:13:43.833Z · LW(p) · GW(p)
Concrete policy item #2: Make the darned NSFW tag opt-in, and maybe even make it a scary color (or have a very noticeable indicator), so I[1] can see at a glance "This article is not for you, kid, unless you're safe from repercussions of reading it, so don't do it until then".
A hypothetical kid who has been sent a link to a LW article by their cool relative Alex ↩︎
comment by Lost Futures (aeviternity1) · 2022-06-16T00:46:39.285Z · LW(p) · GW(p)
Found an obscure quote by Christiaan Huygens predicting the industrial revolution a century before its inception and predicting the airplane over two hundred years before its invention:
The violent action of the powder is by this discovery restricted to a movement which limits itself as does that of a great weight. And not only can it serve all purposes to which weight is applied, but also in most cases where man or animal power is needed, such as that it could be applied to raise great stones for building, to erect obelisks, to raise water for fountains or to work mills to grind grain .... It can also be used as a very powerful projector of such a nature that it would be possible by this means to construct weapons which would discharge cannon balls, great arrows, and bomb shells .... And, unlike the artillery of today these engines would be easy to transport, because in this discovery lightness is combined with power.
This last characteristic is very important, and by this means permits the discovery of new kinds of vehicles on land and water.
And although it may sound contradictory, it seems not impossible to devise some vehicle to move through the air ....
While ultimately land, water, and air vehicles wouldn't be powered by Huygens's gunpowder engine, it remains a remarkably prescient forecast. It should also give AI researchers and other futurists some hope in their ability to predict the next technological revolution.
comment by markhamalainen · 2022-06-06T00:05:01.098Z · LW(p) · GW(p)
Hi folks, I've started an organization called "LessDeath" (http://www.lessdeath.org) with the purpose of supporting the longevity technology field's growth and effectiveness. The name is an obvious reference to LessWrong so I thought I should introduce myself and explain why I chose that.
From your FAQ, "LessWrong is a place to 1) develop and train rationality, and 2) apply one’s rationality to real-world problems." LessDeath will be training people on the specific domain of aging biology and longevity technology, and then helping connect them with real world projects to contribute to.
I also see LessDeath as similar to the Effective Altruist organization 80000hours, but again focused specifically on longevity rather than on a broader mandate. And EA has an obvious lineage connection to Rationalism.
Lastly, I'm a big fan of HPMOR...
I'd appreciate feedback and involvement from folks in the Rationalist community. Also, LessDeath is having its first event this summer and perhaps some of you would like to apply to attend - check out the website if so!
Thanks for your time,
Mark Hamalainen
↑ comment by niplav · 2022-06-11T14:46:33.760Z · LW(p) · GW(p)
Have you tried talking to Mati Roy? They started LessDead.com, focused on life extension, especially through preservation.
comment by Aleksi Liimatainen (aleksi-liimatainen) · 2022-06-20T14:22:46.468Z · LW(p) · GW(p)
I think we have an elephant in the room. As I outlined in a recent post [LW · GW], networks of agents may do Hebbian learning as inevitably as two and two makes four. If this is the case, there are some implications.
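As a toy sketch of the mechanism (an illustration of the general idea with made-up numbers, not anything from the linked post): a Hebbian rule just strengthens the ties between agents that are repeatedly active together, and the network ends up encoding structure that nobody explicitly designed.

```python
import numpy as np

rng = np.random.default_rng(0)

n_agents = 6
groups = np.array([0, 0, 0, 1, 1, 1])  # two informal communities

# Connection strength from agent i to agent j (say, how much weight i gives
# to j's judgement). Start uniform, with no self-connections.
weights = np.full((n_agents, n_agents), 0.1)
np.fill_diagonal(weights, 0.0)

lr, decay = 0.05, 0.995

for step in range(2000):
    # On each "event", one community happens to be engaged together,
    # though not every member shows up.
    active_group = rng.integers(2)
    activity = (groups == active_group) & (rng.random(n_agents) < 0.9)

    # Hebbian update: ties between co-active agents strengthen; all ties decay.
    update = np.outer(activity, activity).astype(float)
    np.fill_diagonal(update, 0.0)
    weights = decay * (weights + lr * update)

# Within-group ties end up much stronger than between-group ties,
# even though nobody designed that structure into the network.
print(weights.round(2))
```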
If a significant fraction of human optimization power comes from Hebbian learning in social networks, then the optimal organizational structure is one that permits such learning. Institutional arrangements with rigid formal structure are doomed to incompetence.
If the learning-network nature of civilization is a major contributor to human progress, we may need to revise our models of human intelligence and strategies for getting the most out of it.
Given the existence of previously understudied large-scale learning networks, it's possible that there already exist agentic entities of unknown capability and alignment status. This may have implications for the tactical context of alignment research and priorities for research direction.
If agents naturally form learning networks, the creation and proliferation of AIs whose capabilities don't seem dangerous in isolation may have disproportionate higher-order effects due to the creation of novel large-scale networks or modification of existing ones.
It seems to me that the above may constitute reason to raise an alarm at least locally. Does it? If so, what steps should be taken?
↑ comment by MSRayne · 2022-06-20T14:42:09.708Z · LW(p) · GW(p)
For many years I've had the suspicion that complex organizations like religions, governments, ideologies, corporations, really any group of coordinating people, constitute higher-level meta-agents with interests distinct from those of their members, a suspicion which only became more certain when I read the stuff about immoral mazes etc. here. I had similar ideas about ecology, that in some sense "Gaia" is an intelligent-ish being with organisms as its neurons. (Of course, I used to be a New Ager, so these intuitions were rooted in woo, but as I became more rational I realized that they could be true without invoking anything supernatural.) But I've never been able to make these intuitions rigorous. It's exciting to see that, as mentioned in your post, some recent research is going in that direction.
The way I see it, humans haven't ever been the only intelligent agents on the planet, even ignoring the other sapient species like chimps and dolphins. Our own memes self-organize into subagents, and then into egregores (autonomous social constructs independent of their members), and those are what run the world. Humans are just the wetware on which they run, like distributed AIs.
↑ comment by TAG · 2022-06-20T15:57:27.261Z · LW(p) · GW(p)
They're called egregores.
↑ comment by Aleksi Liimatainen (aleksi-liimatainen) · 2022-06-20T16:08:17.445Z · LW(p) · GW(p)
Kinda valid but I personally prefer to avoid "egregore" as a term. Too many meanings that narrow it too much in the wrong places.
E.g. some use it specifically to refer to parasitic memeplexes that damage the agency of the host. That cuts directly against the learning-network interpretation, IMO, because independent agency seems necessary for the network to learn optimally.
↑ comment by MSRayne · 2022-06-20T16:42:00.601Z · LW(p) · GW(p)
In chaos magick, which is where I learned the term from, egregores are just agentic memeplexes in general, iirc. That's how I've always used the term. Another perhaps better way of defining it would be distributed collective subagents.
I'm pretty sure "social constructs" in postmodernist philosophy are the same thing, but that stuff's too dense for me to bother reading. Another good term might be "hive minds", but that has unfortunate Borg connotations for most people and is an overloaded term in general.
Replies from: aleksi-liimatainen↑ comment by Aleksi Liimatainen (aleksi-liimatainen) · 2022-06-20T17:17:08.654Z · LW(p) · GW(p)
Yeah, I don't see much reason to disagree with that use of "egregore".
I'm noticing I've updated away from using references to any particular layer until I have more understanding of the causal patterning. Life, up to the planetary and down to the molecular, seems to be a messy, recursive nesting of learning networks with feedbacks and feedforwards all over the place. Too much separation/focus on any given layer seems like a good way to miss the big picture.
comment by trevor (TrevorWiesinger) · 2022-06-08T22:44:11.589Z · LW(p) · GW(p)
Where do I go for help/review with front-page-worthy drafts, especially drafts that I have high confidence will earn/deserve/warrant more than 50 upvotes and provide significant help to lots of LessWrong users?
I am writing a post about how anthropics means that general intelligence could have had a 1/1,000,000,000,000 chance of ever evolving anywhere on Earth, and yet, since we have to be intelligent to observe human evolution at all, we would still see human intelligence, and even failed offshoots like chimpanzees and simple insect brains. The possibility of that scenario would imply a >1% chance that AGI timelines are weighted toward 2060 or later, and is strong evidence against a 99% credence in AGI within the decade (more like 90%), because we might have always lived in a world where general intelligence was astronomically difficult to produce by brute force. Therefore, the "log odds of survival" framing doesn't apply, because there is more than a 1% chance that we have more than 40 years to figure it out.
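A minimal numerical sketch of the anthropic update I'm gesturing at (all the numbers below are placeholders, not claims): since observers only exist in worlds where intelligence did evolve, observing our own existence has likelihood 1 under both an "intelligence is easy" and an "intelligence is astronomically hard" hypothesis, so whatever prior weight the hard world gets survives the update and pushes some probability onto late AGI.

```python
# Two toy hypotheses about how hard general intelligence is to "brute force":
#   EASY: intelligence evolves readily; AGI this decade is very likely.
#   HARD: intelligence had a ~1e-12 chance of evolving; AGI is probably far off.
# Anthropic point: in both worlds, any observer sees intelligence having evolved,
# so that observation has likelihood 1 under each hypothesis and the prior
# passes through unchanged.

prior = {"EASY": 0.95, "HARD": 0.05}                              # placeholder prior
likelihood_we_observe_intelligence = {"EASY": 1.0, "HARD": 1.0}   # anthropic conditioning

unnorm = {h: prior[h] * likelihood_we_observe_intelligence[h] for h in prior}
z = sum(unnorm.values())
posterior = {h: p / z for h, p in unnorm.items()}

# Placeholder conditionals: P(no AGI before 2060 | hypothesis).
p_late_agi = {"EASY": 0.01, "HARD": 0.5}
p_late = sum(posterior[h] * p_late_agi[h] for h in posterior)
print(posterior, round(p_late, 3))  # even a 5% prior on HARD gives >1% on late AGI
```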
↑ comment by habryka (habryka4) · 2022-06-08T23:27:34.705Z · LW(p) · GW(p)
We have an editor on-staff to handle exactly this. Just press the "Get Feedback" button in the editor (it should be visible to everyone above 100 karma).
comment by Rafael Harth (sil-ver) · 2022-06-25T19:03:59.118Z · LW(p) · GW(p)
Anyone know what's up with the betting markets? They give Trump 40% to win the presidency but only 33% to win the primary, and I also don't get the jumps in his graph.
↑ comment by Zach Stein-Perlman · 2022-06-26T05:40:31.619Z · LW(p) · GW(p)
Looks like it's because the presidency race is dominated in volume by the FTX market, which is much more bullish on Trump than other markets, while the primary races don't have corresponding FTX markets. I would look at PredictIt or something rather than Election Betting Odds for inter-question consistency.
↑ comment by Rafael Harth (sil-ver) · 2022-06-26T08:45:43.454Z · LW(p) · GW(p)
Thanks! That much difference between markets is a pretty devastating sign for accuracy, though.
comment by Oscar_Cunningham · 2022-06-18T17:55:12.181Z · LW(p) · GW(p)
What's the term for statistical problems that are like exploration-exploitation, but without the exploitation? I tried searching for 'exploration' but that wasn't it.
In particular, suppose I have a bunch of machines which each succeed or fail independently with a probability that is fixed separately for each machine. And suppose I can pick machines to sample to see if they succeed or fail. How do I sample them if I want to become 99% certain that I've found the best machine, while using the fewest samples?
The difference with exploration-exploitation is that this is just a trial period, and I don't care about how many successes I get during this testing. So I want something like Thompson sampling, but for my purposes Thompson sampling oversamples the machine it currently thinks is best because it values getting successes rather than ruling out the second-best options.
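For reference, this setting is usually called "pure exploration" or "best-arm identification" in the bandit literature, and the usual Thompson-style variant is "top-two Thompson sampling", which deliberately spends some pulls on a challenger arm rather than the current favourite. Below is a minimal sketch, assuming Bernoulli machines and a Monte-Carlo stopping rule; the function name and parameter choices are illustrative, not anything specified in the thread.

```python
import numpy as np

rng = np.random.default_rng(0)

def find_best_machine(true_ps, confidence=0.99, beta=0.5, max_pulls=100_000):
    """Top-two Thompson sampling for pure exploration (best-arm identification).

    Pulls Bernoulli machines until the posterior probability that some single
    machine is the best one exceeds `confidence`. Returns (machine index, pulls).
    """
    k = len(true_ps)
    successes = np.zeros(k)
    failures = np.zeros(k)

    for t in range(1, max_pulls + 1):
        # Thompson draw from each machine's Beta posterior to pick a "leader".
        draws = rng.beta(successes + 1, failures + 1)
        arm = int(np.argmax(draws))
        # With probability 1 - beta, pull a "challenger" instead of the leader,
        # so the current favourite isn't oversampled.
        if rng.random() > beta:
            redraw = rng.beta(successes + 1, failures + 1)
            redraw[arm] = -np.inf
            arm = int(np.argmax(redraw))

        # Pull the chosen machine and record the outcome.
        if rng.random() < true_ps[arm]:
            successes[arm] += 1
        else:
            failures[arm] += 1

        # Periodically estimate P(machine i is best) by Monte Carlo over posteriors.
        if t % 100 == 0:
            samples = rng.beta(successes + 1, failures + 1, size=(2000, k))
            p_best = np.bincount(samples.argmax(axis=1), minlength=k) / 2000
            if p_best.max() >= confidence:
                return int(p_best.argmax()), t

    return int(np.argmax(successes / np.maximum(successes + failures, 1))), max_pulls

# Example: three machines, best one at 0.6.
print(find_best_machine([0.55, 0.6, 0.4]))
```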
↑ comment by MondSemmel · 2022-06-18T18:21:25.876Z · LW(p) · GW(p)
From Algorithms to Live By, I vaguely recall the multi-armed bandit problem. Maybe that's what you're looking for? Or is that still too closely tied to the explore-exploit paradigm?
↑ comment by Oscar_Cunningham · 2022-06-30T14:40:44.541Z · LW(p) · GW(p)
I got a good answer here: https://stats.stackexchange.com/q/579642/5751
↑ comment by Oscar_Cunningham · 2022-06-18T18:36:38.387Z · LW(p) · GW(p)
Or is that still too closely tied to the explore-exploit paradigm?
Right. The setup for my problem is the same as the 'bernoulli bandit', but I only care about the information and not the reward. All I see on that page is about exploration-exploitation.
comment by trevor (TrevorWiesinger) · 2022-06-05T21:04:48.319Z · LW(p) · GW(p)
Any idea what's going on with the judging of the AI Safety Arguments Competition [? · GW]? It's been more than a week now since it closed, and I haven't heard anything about it or the follow-up contest.
If there's any need for help with the judging, I could help (especially if there are ways to work around the issue that I was the most active participant, so I have an absurdly massive conflict of interest). Like, I could judge other people's entries, but only in a positive way that makes them more competitive, such as giving ignored entries a second look.
comment by Aleksi Liimatainen (aleksi-liimatainen) · 2022-06-24T08:45:05.998Z · LW(p) · GW(p)
If AI alignment were downstream of civilization alignment, how could we tell? How would the world look different if it were/were not?
If AI alignment is downstream of civilization alignment, how would we pivot? I'd expect at least some generalizability between AI and non-AI alignment work and it would certainly be easier to learn from experience.