I don't have much understanding of current AI discussions, and it's possible those are somewhat better off / a less advanced case of rot.
Those same psychological reasons indicate that anything which is actual dissent will be interpreted as incivility. This has happened here and is happening as we speak. It was one of the significant causes of SBF. It's significantly responsible for the rise of woo among rationalists, though my sense is that that's started to recede (years later). It's why EA as a movement seems to be mostly useless at this point and coasting on gathered momentum (mostly in the form of people who joined early and kept their principles).
I'm aware there is a tradeoff, but being committed to truthseeking demands that we pick one side of that tradeoff, and LessWrong the website has chosen to pick the other side instead. I predicted this would go poorly years before any of the things I named above happened.
I can't claim to have predicted the specifics, and I don't get many Bayes Points for any of them, but they're all within-model. Especially EA's drift (mostly seeking PR and movement breadth). The earliest specific point where I observed this problem happening was 'Intentional Insights', where it was uncivil to observe that the man was a huckster faking community signals, and so it took several rounds of blatant hucksterism for him to finally be disavowed and forced out. If EA'd learned this lesson then, it would be much smaller, but could probably have avoided 80% of its involvement in FTX. LW-central-rationalism is not as bad, yet, but it looks on the same path to me.
I still prefer the ones I see there to what I see on LW. Lower quantity, higher value.
Currently no great alternatives exist because LW killed them. The quality of the comment section on SSC and most other rationalist blogs I was following got much worse when LW was rebooted (and killed several of them), and initially it looked like LW was an improvement, but over time the structural flaws killed it.
I still see much better comments on individual blogs - Zvi, Sarah Constantin, Elizabeth vN, etc. - than on LessWrong. Some community Discords are pretty good, though they are small walled gardens; rationalist Tumblr has, surprisingly, gotten actively better over time, even as it shrank. All of these are low volume.
It's possible in theory that the volume of good comments on LessWrong is higher than in those places. I don't know, and in practical terms don't care, because they're drowned out by junk, mostly highly-upvoted junk. I don't bother to look for good comments here at all, because the odds are sufficiently bad that it's not worthwhile. I post here only for visibility, not for good feedback, because I know I won't get it; I only noticed this post at all because of a link from a Discord.
Groupthink is not a possible future, to be clear. It's already here in a huge way, and probably not fixable. If there was a chance of reversing the trend, it ended with Said being censured and censored for being stubbornly anti-groupthink to the point of rudeness. Because he was braver or more stubborn than me and kept trying for a couple years after I gave up.
I see much more value in Lighthaven than in the rest of the activity of Lightcone.
I wish Lightcone would split into two (or even three) organizations, as I would unequivocally endorse donating to Lighthaven and recommend it to others, vs. LessWrong, where I'm not at all confident it's net positive over blogs and Substacks, and the grantmaking infrastructure and other meta, which is highly uncertain and probably highly replaceable.
All of the analysis of the impact of new LessWrong is misleading at best: it assumes that volume on LessWrong is good in itself, which I do not believe to be the case. If similar volume is being stolen from other places, e.g. people dropping away from blogs on the SSC blogroll and failing to create their own Substacks, which I think is very likely true, this is of minimal benefit to the community and likely negative benefit to the world, as LW is less visible and influential than strong existing blogs or well-written new Substacks.
That's on top of my long-standing objections to the structure of LW, which is bad for community epistemics by encouraging groupthink, in a way that standard blogs are not. If you agree with my contention there, then even a large net increase in volume would still be, in expectation, significantly negative for the community and the world. Weighted voting delenda est; post-author moderation delenda est; in order to win the war of good group epistemics we must accept losing the battles of discouraging some marginal posts from the prospect of harsh, rude, and/or ignorant feedback.
That was true this week, but the first time I attended (the 12th) I believe it wasn't: I arrived at what I think was 6:20-6:25 and found everything had already started.
Based on my prior experience running meetups, a 15m gap between 'doors open' and starting the discussion is too short. 30m is the practical minimum; I prefer 45-60m because I optimize for low barrier to entry (as a means of being welcoming).
I also find this to be a significant barrier in participating myself, as targeting a fifteen-minute window for arrival is usually beyond my planning abilities unless I have something else with a hard end time within the previous half-hour.
The amount of empty space where the audience understands what's going on and nothing new or exciting is happening is much, much higher in 60s-70s film and TV. Pacing is an art, and that art has improved drastically in the last half-century.
Standards, also, were lower, though I'm more confident in this for television. In the 90s, to get kids to be interested in a science show you needed Bill Nye. In the 60s, doing ordinary high-school science projects with no showmanship whatsoever was wildly popular because it was on television and this was inherently novel and fascinating. (This show actually existed.)
A man who is always asking 'Is what I do worth while?' and 'Am I the right person to do it?' will always be ineffective himself and a discouragement to others.
-- G.H. Hardy, A Mathematician's Apology
a belief is only really worthwhile if you could, in principle, be persuaded to believe otherwise
There's a point to be made here about why 'unconditional love' is unsatisfying to the extent the description as 'unconditional' is accurate.
...Oh, my mistake, it looked like they were posted a lot later than that and the ~skipped one made that look confirmed. Usually-a-week ahead is plenty of time and I'm sorry I said anything.
Could you please announce these further in advance? Especially given the reading required beforehand it's inconvenient and honestly seems a little inconsiderate.
That's a fascinating approach to characterization. What do you do, have the actors all read the appendix before they start rehearsals?
This is apparently from a play, Man and Superman, which I have never previously heard of, let alone read or seen. I suspect that, much like Oscar Wilde's plays, it is at least as much a vehicle for witty epigrams as it is an actual performance or plot.
The reasonable man adapts himself to the world: the unreasonable one persists in trying to adapt the world to himself. Therefore all progress depends on the unreasonable man.
-- George Bernard Shaw, epigram
(Inspired by part of Superintelligences will not spare Earth sunlight)
For as in absolute governments the King is law, so in free countries the law ought to be King; and there ought to be no other. But lest any ill use should afterwards arise, let the crown at the conclusion of the ceremony be demolished, and scattered among the people whose right it is.
-- Thomas Paine, Common Sense, demonstrating the Virtue of The Void
The most potent way to sacrifice your life has always been to do so one day at a time.
-- BoneyM, Divided Loyalties
I currently slightly prefer an but that's pending further thought and discussion.
missing thought in the footnotes
We knew they were experimenting with synthetic data. We didn't know they were succeeding.
Not sure whether to add these in, but a number of local Google calendars theoretically exist: https://calendar.google.com/calendar/render?cid=bayarearationality%40gmail.com&cid=f6qs8c387dhlounnbqg6lbv3b0%40group.calendar.google.com&cid=94j0drsqgj43nkekg8968b3uo4%40group.calendar.google.com&cid=8hq2d2indjps3vr64l96e9okt4%40group.calendar.google.com&cid=theberkeleyreach%40gmail.com
This includes Berkeley REACH (defunct), CFAR Public Events (defunct locally AFAIK), EA Events (superseded by Luma calendar?), LW Meetups (unknown but blank), and Rationalist/EA Social and Community Events (likewise)
Updated to reflect the new, less regular schedule (and change of weekday) since the half-year mark.
That's not what tribalism means.
I think at normal times (when it's not filled with MATS or a con) it's possible to rent coworking space at Lighthaven? I haven't actually tried myself.
Our New Orleans Rat group grows on tribalistic calls to action. “Donate to Global Health Initiatives,” “Do Art,” “Learn About AI.”
If you consider those tribalistic calls to action, I'm not sure any of you are doing evidence-based thinking in the first place. I suppose if the damage is already done, it will not make anything worse if your specific group engages in politics.
There is basically no method of engaging with politics worse than backing a national candidate. It has tiny impact even if successful, is the most aggressively tribalism-infected, and is incredibly hard to say anything novel.
If you must get involved in politics, it should be local, issue-based, and unaffiliated with LW or rationalism. It is far more effective to lobby on issues than for candidates, it is far more effective to support local candidates than national, and there is minimal upside and enormous downside to having any of your political efforts tied with the 'brand' of rationalism or LW.
The track record for attempts to turn tribalism into evidence-based thinking is very poor. The result, almost always, is to turn the evidence-based thinking into tribalism.
Permanently changed to Wednesdays, but forgot that was in the group description; now fixed. There is a Manifold-associated event, Taco Tuesdays, running in SF, and I decided I'd rather stop scheduling against it.
It would be nice to move this to a standalone website like the old Bay Rationality site. I've been considering that for months and dragging my feet about asking for funding to host it; I'd also like to contact whoever used to run it, check whether anything complicated brought it down, and maybe just yoink their codebase and update the content. I don't know who that was, though.
Whoops, fixed.
Someday the site will finish its API and document it, and I'll be able to automate this like I do everything else about posting meetups. But probably not this side of the Singularity at current rates.
Facing away from the cars approaching works better IME.
Entirely separate from concerns about the site, I think your notion of the theme for a midsummer ritual is wrong.
If you look at midsummer rituals that have memetic fitness (traditions that lasted, or in neopaganism's case that stuck weirdly quickly), most of them are sunset rituals. Things that happen at night on the shortest nights of the year, and dwell on themes of darkness. Ghost stories, things like that.
Assuming, as I think we clearly should, that that's not a coincidence, a ritual that resonates for summer solstice should be aimed in a similar direction. It might have themes of fragility, or of near-misses personal and collective, mixed with recognition of things being good, of civilizational achievements or personal ones. (If at some point we invent the rationalist bar mitzvah it should probably be at midsummer, I feel, but I'm not sure why I think that given what I just said.)
The themes you mention of storing up energy for the winter, celebrating human accomplishment, etc., seem to me, based on my survey of existing rituals and holidays, much more appropriate for the Fall Equinox, the time of year where food is gathered and the cold days are encroaching. Competitions and skillshares, particularly, are my suggestions there, though the whole summer solstice that's developed the last few years would port across without changes other than dropping the amorphous sunset ritual.
I heard about this being planned earlier this year, and after about five minutes with Google Maps I concluded that it was an unsalvageably terrible idea. Unsalvageable because the core problem is Angel Island.
It takes a minimum of 75 minutes from central SF, or 2h from the East Bay, to travel each way. And that's if the ferry schedule is convenient, which it will not be; the ferries run far too infrequently to attend conveniently. For the many who don't drive, it's technically public-transit accessible, but double those times.
I have quibbles with the details (you're giving up the sunset!) but they are mostly unimportant compared to the central problem that it is wildly inaccessible. If you go through with this plan next year, I'd estimate a maximum 'swolestice' attendance of 140, and I'd put the over/under at 80. This would mostly just be an event for the campers. Probably a pretty cool event for them, don't get me wrong, but it would be abandoning everyone else.
Rescheduled - skipped it on the 9th for the eclipse, and couldn't do the original plan for the 23rd (park bocce, probably coming in a future month)
But you can't change it for anyone else's view, which is the important thing.
Isn't this post describing the replication attempt?
You should try doing the next version as an adversarial collaboration.
Clarification:
"Steam" is one possible opposite of Slack. I sketch a speculative view of steam as a third 'cognitive currency' almost like probability and utility.
Are 'probability' and 'utility' meant to be the other two cognitive currencies? Or is it 'Slack', and if so which is the third?
This was fairly untested but went very well!
I'll do a better writeup as a Meetup In a Box later, but this is how it went:
For each set, 10m writing things down, then ?20m? discussing, then next set
List a few things that went very well this year. (3-5)
List a few things that went very badly this year. (3-5)
If you were to 80/20 your last year, which 20% gave the 80% you valued most?
If someone looked at your actions for the last year, what would they think your priorities were?
What did you intend your priorities to be?
Do you want to make any of the revealed priorities official intentions for next year? Do you want to drop any of the intended priorities which you ended up not following up on?
What habits did you pick up? What goals (revealed or intentional) did those habits serve?
What habits got in the way? What did you fail to get due to them?
What's the most important unfulfilled goal for the last year? How can you change for the next try?
What did you learn last year?
What lessons do you hope to learn this year?
What things are you curious about, that you expect to learn more about this year?
- this one might be worth writing down and storing for next year
We ended up combining sets 3 and 4 because 3 sets is the right amount. I had a whiteboard and wrote short versions of the questions on the whiteboard as a reminder everyone could look at, and later on emailed everyone the questions so they could refer to the list. Doing at least one of those things is probably important.
Is there a graph of solar efficiency (fraction of energy kept in light -> electricity conversion) for solar tech that's deployed at scale? https://www.nrel.gov/pv/cell-efficiency.html exists for research models but I'm unsure of any for industrial-scale.
No, I said what I meant. And not just what I meant, but what many other people reading but not commenting here are saying; rather than count I'll simply say 'at least a dozen'. This response, like all her other responses, is making her sound more and more like a grifter, not an honest dealer, with every statement made. The fact that when called to defend her actions she can't manage anything that resembles honest argument more than it does dishonest persuasion is a serious flaw; if it doesn't indicate that she has something to hide, it indicates that she is incapable of being a 'good citizen' even when she's in the right.
My primary update from every comment Kat makes is that this is a situation that calls for Conflict Theory, not Mistake Theory.
Rescheduled to the end of the month because I am sick again. Guess maybe I should have worn a mask to the airport in travel season.
It's amazing how everything you say trying to defend yourself makes you sound even more like a grifter.
Six weeks, once, with significant counterpressure exerted against her doing so, is confirmation of the original claim, not counterevidence.
This post seems wildly over-charitable toward Nonlinear and their claims. Several things you note as refuted by Nonlinear aren't, e.g. "they were not able to live apart from the family unit while they worked with them", which, even granting that Nonlinear's reply is accurate (uncertain), is still true, and obviously and unambiguously so.
Also, you fail to acknowledge that basically everything about Nonlinear's replies indicates an utterly toxic and abusive work environment and a staff of people who are seriously disconnected from reality and consumed in high-simulacra-level nonsense. The attempt to refute the claims made against them managed to be far more damning than the claims themselves. And the claims weren't minimal, either!
Dodging questions like this and living in the world where they go well is something you can do approximately once in your life before you stop living in reality and are in an entirely-imaginary dream world. Twice if you're lucky and neither of the hypotheticals were particularly certain.
A number of Manifold markets under https://manifold.markets/browse?topic=pandemic, looks like most are trading around 10% chance of anything happening outside China.
Possible new pandemic? China's concealing evidence again, looks like the smart money is against 'new virus' but thinks it's drug-resistant pneumonia, specifically resistant to the drugs that are safe for small children.
https://foreignpolicy.com/2023/11/28/chinese-hospitals-pandemic-outbreak-pneumonia/
The LessWrong user who acted as a sounding board over lunch is welcome to be credited if they want to be, or may wish to avoid association with this catastrophe waiting to happen.
I don't think I added anything but encouragement, but that was me. TBH, if it's a catastrophe, that's an interesting result in itself. I wonder if it happens every time.
Updated to reflect the new, more regular schedule starting beginning of the year
Interesting. Strikes me as the logical extension of Choices are Bad in some senses.
Censorship always prevents debates. The number of things which are explicitly banned from discussion may technically be small, but the chilling effect is huge. And the fact that ideas and symbols are banned is - correctly! - taken as evidence that they can't be beaten by argument, that people are afraid of the ideas. Also, naturally, the opposite side never has to practice their arguments, so they look like weak debaters because they are.