Posts

How to Edit an Essay into a Solstice Speech? 2024-12-15T04:30:50.545Z
North Oakland: Deep Questions, December 18th 2024-12-05T17:31:11.164Z
North Oakland: Year In Review, January 8th 2024-12-05T17:28:12.681Z
North Oakland: Short Talks, December 11th 2024-12-04T20:26:30.732Z
North Oakland: Projects, December 4th 2024-11-28T00:33:51.125Z
North Oakland/Berkeley: Ballots, October 16th 2024-08-12T03:38:48.461Z
North Oakland: Shallow Questions, November 20th 2024-08-12T03:33:13.923Z
North Oakland: Shallow Questions, September 25th 2024-08-12T03:32:35.030Z
North Oakland: Reading & Discussion, November 27th 2024-08-12T03:22:40.183Z
North Oakland: Short Talks, September 18th 2024-08-12T03:20:43.556Z
North Oakland: Projects, November 6th 2024-08-12T03:16:46.519Z
North Oakland: Projects, October 2nd 2024-08-12T03:16:27.892Z
North Oakland: Projects, September 4th 2024-08-12T03:15:54.474Z
North Oakland: Deep Questions, October 23rd 2024-08-12T03:10:55.766Z
North Oakland: Deep Questions, August 28th 2024-08-12T03:10:23.997Z
North Oakland: Reading & Discussion, October 30th 2024-08-12T03:07:40.518Z
North Oakland: Reading & Discussion, September 11th 2024-08-12T03:07:09.972Z
North Oakland: Reading & Discussion, August 21st 2024-08-12T03:05:42.229Z
North Oakland: Board Games, October 9th 2024-08-12T03:00:13.013Z
North Oakland: Board Games, Wednesday August 14th 2024-08-12T02:59:29.692Z
Index of rationalist groups in the Bay Area July 2024 2024-07-26T16:32:25.337Z
North Oakland: Projects, Wednesday August 7th 2024-07-22T22:09:57.144Z
North Oakland: Group Debugging, Wednesday July 24th 2024-07-22T21:32:41.739Z
North Oakland: Reading & Discussion, Wednesday July 17th 2024-07-13T22:11:37.886Z
North Oakland: Projects, July 9th 2024-06-29T06:44:24.927Z
North Oakland: Board Games, July 2nd 2024-06-27T23:50:17.579Z
North Oakland: Projects, April 23rd 2024-04-11T05:56:04.730Z
Meetup In a Box: Year In Review 2024-02-14T01:18:28.259Z
North Oakland: Year In Review, January 30th (Rescheduled) 2024-01-25T22:28:21.794Z
North Oakland: Meta Meetup, June 25th 2024-01-10T04:15:24.025Z
North Oakland: Group Debugging, May 28th 2024-01-10T04:13:05.363Z
North Oakland: Deep Questions, March 26th 2024-01-10T03:44:28.250Z
North Oakland: Shallow Questions, February 27th 2024-01-10T03:38:59.261Z
North Oakland: Short Talks, Wednesday July 31st 2024-01-10T03:35:12.118Z
North Oakland: Short Talks, April 30th 2024-01-10T03:34:22.444Z
North Oakland: Board Games, June 4th 2024-01-10T03:32:58.205Z
North Oakland: Board Games, May 7th 2024-01-10T03:32:23.033Z
North Oakland: Board Games, April 2nd 2024-01-10T03:30:53.684Z
North Oakland: Board Games, March 5th 2024-01-10T03:30:25.818Z
North Oakland: Board Games, February 6th 2024-01-10T03:29:20.657Z
North Oakland: Board Games, January 23rd 2024-01-10T03:28:38.564Z
North Oakland: Projects, June 11th 2024-01-10T03:24:25.771Z
North Oakland: Projects, May 14th 2024-01-10T03:23:48.950Z
North Oakland: Reading & Discussion, June 18th 2024-01-10T03:22:28.449Z
North Oakland: Reading & Discussion, May 21st 2024-01-10T03:21:32.011Z
North Oakland: Reading & Discussion, April 16th 2024-01-10T03:17:15.714Z
North Oakland: Reading & Discussion, March 19th 2024-01-10T03:16:30.515Z
North Oakland: Reading & Discussion, February 20th 2024-01-10T03:15:57.041Z
North Oakland: Reading & Discussion, January 16th 2024-01-10T03:14:57.242Z
North Oakland: Projects, March 12th 2023-12-30T07:20:53.044Z

Comments

Comment by Czynski (JacobKopczynski) on (The) Lightcone is nothing without its people: LW + Lighthaven's big fundraiser · 2024-12-02T06:22:54.572Z · LW · GW

I don't have much understanding of current AI discussions, and it's possible those are somewhat better off/a less advanced case of rot.

Those same psychological reasons indicate that anything which is actual dissent will be interpreted as incivility. This has happened here and is happening as we speak. It was one of the significant causes of SBF. It's significantly responsible for the rise of woo among rationalists, though my sense is that that's started to recede (years later). It's why EA as a movement seems to be mostly useless at this point and coasting on gathered momentum (mostly in the form of people who joined early and kept their principles).

I'm aware there is a tradeoff, but being committed to truthseeking demands that we pick one side of that tradeoff, and LessWrong the website has chosen to pick the other side instead. I predicted this would go poorly years before any of the things I named above happened.

I can't claim to have predicted the specifics, so I don't get many Bayes Points for any of them, but they're all within-model. Especially EA's drift (mostly seeking PR and movement breadth). The earliest specific point where I observed this problem happening was 'Intentional Insights', where it was uncivil to observe that the man was a huckster faking community signals, and so it took several rounds of blatant hucksterism for him to finally be disavowed and forced out. If EA had learned this lesson then, it would be a much smaller movement, but it could probably have avoided 80% of its involvement in FTX. LW-central rationalism is not as bad, yet, but it looks to me to be on the same path.

Comment by Czynski (JacobKopczynski) on (The) Lightcone is nothing without its people: LW + Lighthaven's big fundraiser · 2024-12-02T06:06:48.232Z · LW · GW

I still prefer the ones I see there to what I see on LW. Lower quantity, higher value.

Comment by Czynski (JacobKopczynski) on (The) Lightcone is nothing without its people: LW + Lighthaven's big fundraiser · 2024-11-30T22:14:51.929Z · LW · GW

Currently no great alternatives exist, because LW killed them. The comment sections on SSC and most other rationalist blogs I was following got much worse when LW was rebooted (which killed several of them outright). Initially it looked like LW was an improvement, but over time the structural flaws killed it.

I still see much better comments on individual blogs - Zvi, Sarah Constantin, Elizabeth vN, etc. - than on LessWrong. Some community Discords are pretty good, though they are small walled gardens; rationalist Tumblr has, surprisingly, gotten actively better over time, even as it shrank. All of these are low volume.

It's possible in theory that the volume of good comments on LessWrong is higher than in those places. I don't know, and in practical terms don't care, because they're drowned out by junk, mostly highly-upvoted junk. I don't bother to look for good comments here at all, because the odds are sufficiently bad that it's not worthwhile. I post here only for visibility, not for good feedback, because I know I won't get it; I only noticed this post at all because of a link from a Discord.

Groupthink is not a possible future, to be clear. It's already here in a huge way, and probably not fixable. If there was a chance of reversing the trend, it ended with Said being censured and censored for being stubbornly anti-groupthink to the point of rudeness, because he was braver or more stubborn than me and kept trying for a couple of years after I gave up.

Comment by Czynski (JacobKopczynski) on (The) Lightcone is nothing without its people: LW + Lighthaven's big fundraiser · 2024-11-30T18:42:15.422Z · LW · GW

I see much more value in Lighthaven than in the rest of the activity of Lightcone.

I wish Lightcone would split into two (or even three) organizations. I would unequivocally endorse donating to Lighthaven and recommend it to others, vs. LessWrong, where I'm not at all confident it's net positive over blogs and Substacks, and the grantmaking infrastructure and other meta, which is highly uncertain and probably highly replaceable.

All of the analysis of the impact of new LessWrong is misleading at best: it assumes that volume on LessWrong is good in itself, which I do not believe to be the case. If similar volume is being stolen from other places (e.g., commenters dropping away from blogs on the SSC blogroll and failing to create their own Substacks), which I think is very likely true, this is of minimal benefit to the community and likely of negative benefit to the world, as LW is less visible and influential than strong existing blogs or well-written new Substacks.

That's on top of my long-standing objections to the structure of LW, which is bad for community epistemics because it encourages groupthink in a way that standard blogs do not. If you agree with my contention there, then even a large net increase in volume would still be, in expectation, significantly negative for the community and the world. Weighted voting delenda est; post-author moderation delenda est. In order to win the war of good group epistemics, we must accept losing the battles of discouraging some marginal posts through the prospect of harsh, rude, and/or ignorant feedback.

Comment by Czynski (JacobKopczynski) on Lighthaven Sequences Reading Group #12 (Tuesday 11/26) · 2024-11-28T00:31:17.788Z · LW · GW

That was true this week, but the first time I attended (the 12th) I believe it wasn't: I arrived at what I think was 6:20-6:25 and found everything had already started.

Comment by Czynski (JacobKopczynski) on Lighthaven Sequences Reading Group #12 (Tuesday 11/26) · 2024-11-27T01:48:19.747Z · LW · GW

Based on my prior experience running meetups, a 15m gap between 'doors open' and starting the discussion is too short. 30m is the practical minimum; I prefer 45-60m, because I optimize for a low barrier to entry (as a means of being welcoming).

I also find this to be a significant barrier to participating myself, as targeting a fifteen-minute window for arrival is usually beyond my planning abilities unless I have something else with a hard end time within the previous half-hour.

Comment by Czynski (JacobKopczynski) on Lighthaven Sequences Reading Group #12 (Tuesday 11/26) · 2024-11-27T01:09:49.631Z · LW · GW

The amount of empty space where the audience understands what's going on and nothing new or exciting is happening is much, much higher in 60s-70s film and TV. Pacing is an art, and that art has improved drastically in the last half-century.

Standards were also lower, though I'm more confident of this for television. In the 90s, to get kids interested in a science show you needed Bill Nye. In the 60s, doing ordinary high-school science projects with no showmanship whatsoever was wildly popular, because it was on television and this was inherently novel and fascinating. (Such a show actually existed.)

Comment by Czynski (JacobKopczynski) on Rationality Quotes - Fall 2024 · 2024-11-24T17:15:07.091Z · LW · GW

A man who is always asking 'Is what I do worth while?' and 'Am I the right person to do it?' will always be ineffective himself and a discouragement to others.

-- G.H. Hardy, A Mathematician's Apology

Comment by Czynski (JacobKopczynski) on What is Evidence? · 2024-11-12T00:27:23.173Z · LW · GW

a belief is only really worthwhile if you could, in principle, be persuaded to believe otherwise

There's a point to be made here about why 'unconditional love' is unsatisfying to the extent the description as 'unconditional' is accurate.

Comment by Czynski (JacobKopczynski) on Lighthaven Sequences Reading Group #8 (Tuesday 10/29) · 2024-11-02T21:17:43.896Z · LW · GW

...Oh, my mistake: it looked like they were posted a lot later than that, and the ~skipped one seemed to confirm it. Usually-a-week ahead is plenty of time, and I'm sorry I said anything.

Comment by Czynski (JacobKopczynski) on Lighthaven Sequences Reading Group #8 (Tuesday 10/29) · 2024-10-31T03:55:42.963Z · LW · GW

Could you please announce these further in advance? Especially given the reading required beforehand, it's inconvenient and honestly seems a little inconsiderate.

Comment by Czynski (JacobKopczynski) on Rationality Quotes - Fall 2024 · 2024-10-14T02:48:37.850Z · LW · GW

That's a fascinating approach to characterization. What do you do, have the actors all read the appendix before they start rehearsals?

Comment by Czynski (JacobKopczynski) on Rationality Quotes - Fall 2024 · 2024-10-12T00:44:31.756Z · LW · GW

This is apparently from a play, Man and Superman, which I have never previously heard of, let alone read or seen. I suspect that, much like Oscar Wilde's plays, it is at least as much a vehicle for witty epigrams as it is an actual performance or plot.

Comment by Czynski (JacobKopczynski) on Rationality Quotes - Fall 2024 · 2024-10-12T00:40:22.711Z · LW · GW

The reasonable man adapts himself to the world: the unreasonable one persists in trying to adapt the world to himself. Therefore all progress depends on the unreasonable man.

-- George Bernard Shaw, epigram

(Inspired by part of Superintelligences will not spare Earth sunlight)

Comment by Czynski (JacobKopczynski) on Rationality Quotes - Fall 2024 · 2024-10-12T00:03:44.837Z · LW · GW

For as in absolute governments the King is law, so in free countries the law ought to be King; and there ought to be no other. But lest any ill use should afterwards arise, let the crown at the conclusion of the ceremony be demolished, and scattered among the people whose right it is.

-- Thomas Paine, Common Sense, demonstrating the Virtue of The Void

Comment by Czynski (JacobKopczynski) on Rationality Quotes - Fall 2024 · 2024-10-11T23:51:59.805Z · LW · GW

The most potent way to sacrifice your life has always been to do so one day at a time.

-- BoneyM, Divided Loyalties

Comment by Czynski (JacobKopczynski) on Conflating value alignment and intent alignment is causing confusion · 2024-09-12T02:52:52.922Z · LW · GW

I currently slightly prefer an  but that's pending further thought and discussion.

missing thought in the footnotes

Comment by Czynski (JacobKopczynski) on The Information: OpenAI shows 'Strawberry' to feds, races to launch it · 2024-08-30T18:03:44.868Z · LW · GW

We knew they were experimenting with synthetic data. We didn't know they were succeeding.

Comment by Czynski (JacobKopczynski) on Index of rationalist groups in the Bay Area July 2024 · 2024-08-18T01:39:15.667Z · LW · GW

Not sure whether to add these in, but a number of local Google calendars theoretically exist: https://calendar.google.com/calendar/render?cid=bayarearationality%40gmail.com&cid=f6qs8c387dhlounnbqg6lbv3b0%40group.calendar.google.com&cid=94j0drsqgj43nkekg8968b3uo4%40group.calendar.google.com&cid=8hq2d2indjps3vr64l96e9okt4%40group.calendar.google.com&cid=theberkeleyreach%40gmail.com

This includes Berkeley REACH (defunct), CFAR Public Events (defunct locally AFAIK), EA Events (superseded by a Luma calendar?), LW Meetups (unknown but blank), and Rationalist/EA Social and Community Events (likewise).

Comment by Czynski (JacobKopczynski) on The North Oakland LessWrong Meetup · 2024-08-01T05:48:04.131Z · LW · GW

Updated to reflect the new, less regular schedule (and change of weekday) since the half-year mark.

Comment by Czynski (JacobKopczynski) on Rats, Back a Candidate · 2024-07-29T21:26:55.205Z · LW · GW

That's not what tribalism means.

Comment by Czynski (JacobKopczynski) on Index of rationalist groups in the Bay Area July 2024 · 2024-07-29T19:24:26.691Z · LW · GW

I think at normal times (when it's not filled with MATS or a con) it's possible to rent coworking space at Lighthaven? I haven't actually tried myself.

Comment by Czynski (JacobKopczynski) on Rats, Back a Candidate · 2024-07-29T19:19:10.200Z · LW · GW

Our New Orleans Rat group grows on tribalistic calls to action. “Donate to Global Health Initiatives,” “Do Art,” “Learn About AI.”

If you consider those to be tribalistic calls to action, I'm not sure any of you are doing evidence-based thinking in the first place. I suppose if the damage is already done, it will not make anything worse if your specific group engages in politics.

Comment by Czynski (JacobKopczynski) on Rats, Back a Candidate · 2024-07-28T04:41:52.373Z · LW · GW

There is basically no method of engaging with politics worse than backing a national candidate. It has tiny impact even if successful, it is the arena most aggressively infected by tribalism, and it is incredibly hard to say anything novel about it.

If you must get involved in politics, it should be local, issue-based, and unaffiliated with LW or rationalism. It is far more effective to lobby on issues than for candidates; it is far more effective to support local candidates than national ones; and there is minimal upside and enormous downside to having any of your political efforts tied to the 'brand' of rationalism or LW.

Comment by Czynski (JacobKopczynski) on Rats, Back a Candidate · 2024-07-28T04:37:39.201Z · LW · GW

The track record for attempts to turn tribalism into evidence-based thinking is very poor. The result, almost always, is to turn the evidence-based thinking into tribalism.

Comment by Czynski (JacobKopczynski) on North Oakland: Short Talks, Wednesday July 31st · 2024-07-28T04:15:54.262Z · LW · GW

Permanently changed to Wednesdays, but I forgot that was in the group description; now fixed. There is a Manifold-associated event, Taco Tuesdays, running in SF, and I decided I'd rather stop scheduling against it.

Comment by Czynski (JacobKopczynski) on Index of rationalist groups in the Bay Area July 2024 · 2024-07-27T02:21:25.910Z · LW · GW

It would be nice to move this to a standalone website like the old Bay Rationality site. I've been considering that for months and dragging my feet about asking for funding to host it; I'd also like to contact whoever used to run it, check whether anything complicated brought it down, and maybe just yoink their codebase and update the content. I don't know who that was, though.

Comment by Czynski (JacobKopczynski) on North Oakland: Group Debugging, Wednesday July 24th · 2024-07-24T17:10:42.603Z · LW · GW

Whoops, fixed.

Someday the site will finish their API and document it, and I'll be able to automate this like I do everything else about posting meetups. But probably not this side of the Singularity at current rates.

Comment by Czynski (JacobKopczynski) on Turning Your Back On Traffic · 2024-07-17T05:24:17.229Z · LW · GW

Facing away from the approaching cars works better IME.

Comment by Czynski (JacobKopczynski) on Higher-effort summer solstice: What if we used AI (i.e., Angel Island)? · 2024-06-28T21:54:05.954Z · LW · GW

Entirely separate from concerns about the site, I think your notion of the theme for a midsummer ritual is wrong.

If you look at midsummer rituals that have memetic fitness (traditions that lasted, or in neopaganism's case that stuck weirdly quickly), most of them are sunset rituals. Things that happen at night on the shortest nights of the year, and dwell on themes of darkness. Ghost stories, things like that.

Assuming, as I think we clearly should, that that's not a coincidence, a ritual that resonates for summer solstice should be aimed in a similar direction. It might have themes of fragility, or of near-misses personal and collective, mixed with recognition of things being good, of civilizational achievements or personal ones. (If at some point we invent the rationalist bar mitzvah it should probably be at midsummer, I feel, but I'm not sure why I think that given what I just said.)

The themes you mention of storing up energy for the winter, celebrating human accomplishment, etc., seem to me, based on my survey of existing rituals and holidays, much more appropriate for the Fall Equinox, the time of year when food is gathered and the cold days are encroaching. Competitions and skillshares, particularly, are my suggestions there, though the whole summer solstice program that's developed over the last few years would port across without changes, other than dropping the amorphous sunset ritual.

Comment by Czynski (JacobKopczynski) on Higher-effort summer solstice: What if we used AI (i.e., Angel Island)? · 2024-06-28T21:29:20.575Z · LW · GW

I heard about this being planned earlier this year, and after about five minutes with Google Maps I concluded that it was an unsalvageably terrible idea. Unsalvageable because the core problem is Angel Island.

It takes a minimum of 75 minutes from central SF, or 2h from the East Bay, to travel each way. And that's if the ferry schedule is convenient, which it will not be; the ferries run far too infrequently for convenient attendance. For the many who don't drive, it's technically public-transit accessible, but double those times.

I have quibbles with the details (you're giving up the sunset!), but they are mostly unimportant compared to the central problem that it is wildly inaccessible. If you go through with this plan next year, I'd estimate a maximum 'swolestice' attendance of 140, and I'd put the over/under at 80. This would mostly just be an event for the campers. Probably a pretty cool event for them, don't get me wrong, but it would be abandoning everyone else.

Comment by Czynski (JacobKopczynski) on North Oakland: Projects, April 23rd · 2024-04-11T05:57:07.779Z · LW · GW

Rescheduled - skipped it on the 9th for the eclipse, and couldn't do the original plan for the 23rd (park bocce, probably coming in a future month)

Comment by Czynski (JacobKopczynski) on metachirality's Shortform · 2024-04-02T05:12:24.499Z · LW · GW

But you can't change it for anyone else's view, which is the important thing.

Comment by Czynski (JacobKopczynski) on Increasing IQ by 10 Points is Possible · 2024-03-20T03:08:17.436Z · LW · GW

Isn't this post describing the replication attempt?

Comment by Czynski (JacobKopczynski) on Increasing IQ by 10 Points is Possible · 2024-03-20T03:07:47.533Z · LW · GW

You should try doing the next version as an adversarial collaboration.

Comment by Czynski (JacobKopczynski) on Steam · 2024-02-21T04:19:30.310Z · LW · GW

Clarification:

"Steam" is one possible opposite of Slack. I sketch a speculative view of steam as a third 'cognitive currency' almost like probability and utility.

Are 'probability' and 'utility' meant to be the other two cognitive currencies? Or is it 'Slack', and if so which is the third?

Comment by Czynski (JacobKopczynski) on North Oakland: Year In Review, January 30th (Rescheduled) · 2024-01-31T21:27:16.545Z · LW · GW

This was fairly untested but went very well!

I'll do a better writeup as a Meetup In a Box later, but this is how it went:

For each set, 10m writing things down, then ?20m? discussing, then next set

List a few things that went very well this year. (3-5)

List a few things that went very badly this year. (3-5)

If you were to 80/20 your last year, which 20% gave the 80% you valued most?

If someone looked at your actions for the last year, what would they think your priorities were?

What did you intend your priorities to be?

Do you want to make any of the revealed priorities official intentions for next year? Do you want to drop any of the intended priorities which you ended up not following up on?

What habits did you pick up? What goals (revealed or intentional) did those habits serve?

What habits got in the way? What did you fail to get due to them?

What's the most important unfulfilled goal for the last year? How can you change for the next try?

What did you learn last year?

What lessons do you hope to learn this year?

What things are you curious about, that you expect to learn more about this year?

  • this one might be worth writing down and storing for next year

We ended up combining sets 3 and 4 because 3 sets is the right amount. I had a whiteboard and wrote short versions of the questions on the whiteboard as a reminder everyone could look at, and later on emailed everyone the questions so they could refer to the list. Doing at least one of those things is probably important.

Comment by Czynski (JacobKopczynski) on Czynski's Shortform · 2024-01-17T04:57:42.602Z · LW · GW

Is there a graph of solar efficiency (fraction of energy kept in light -> electricity conversion) for solar tech that's deployed at scale? https://www.nrel.gov/pv/cell-efficiency.html exists for research models but I'm unsure of any for industrial-scale.

Comment by Czynski (JacobKopczynski) on Effective Aspersions: How the Nonlinear Investigation Went Wrong · 2024-01-03T02:26:29.028Z · LW · GW

No, I said what I meant. And not just what I meant, but what many other people reading but not commenting here are saying; rather than count, I'll simply say 'at least a dozen'. This response, like all her other responses, makes her sound more and more like a grifter, not an honest dealer, with every statement made. The fact that, when called to defend her actions, she can't manage anything that resembles honest argument more than it does dishonest persuasion is a serious flaw; if it doesn't indicate that she has something to hide, it indicates that she is incapable of being a 'good citizen' even when she's in the right.

My primary update from every comment Kat makes is that this is a situation that calls for Conflict Theory, not Mistake Theory.

Comment by Czynski (JacobKopczynski) on North Oakland: Year In Review, January 30th (Rescheduled) · 2024-01-02T03:00:29.591Z · LW · GW

Rescheduled to the end of the month because I am sick again. Guess maybe I should have worn a mask to the airport in travel season.

Comment by Czynski (JacobKopczynski) on Effective Aspersions: How the Nonlinear Investigation Went Wrong · 2023-12-22T19:30:11.903Z · LW · GW

It's amazing how everything you say in trying to defend yourself makes you sound even more like a grifter.

Comment by Czynski (JacobKopczynski) on Effective Aspersions: How the Nonlinear Investigation Went Wrong · 2023-12-19T19:09:12.655Z · LW · GW

Six weeks, once, with significant counterpressure exerted against her doing so, is confirmation of the original claim, not counterevidence.

Comment by Czynski (JacobKopczynski) on Effective Aspersions: How the Nonlinear Investigation Went Wrong · 2023-12-19T18:57:50.806Z · LW · GW

This post seems wildly over-charitable toward Nonlinear and their claims. Several things you note as refuted by Nonlinear aren't, e.g. "they were not able to live apart from the family unit while they worked with them", which, even granting that Nonlinear's reply is accurate (uncertain), is still true, and obviously and unambiguously so.

Also, you fail to acknowledge that basically everything about Nonlinear's replies indicates an utterly toxic and abusive work environment, and a staff of people who are seriously disconnected from reality and consumed by high-simulacra-level nonsense. The attempt to refute the claims made against them managed to be far more damning than the claims themselves. And the claims weren't minimal, either!

Comment by Czynski (JacobKopczynski) on CalvinCash's Shortform · 2023-12-06T06:20:21.017Z · LW · GW

Dodging questions like this and living in the world where they go well is something you can do approximately once in your life before you stop living in reality and are in an entirely-imaginary dream world. Twice if you're lucky and neither of the hypotheticals were particularly certain.

Comment by Czynski (JacobKopczynski) on Czynski's Shortform · 2023-12-06T06:15:25.840Z · LW · GW

A number of Manifold markets exist under https://manifold.markets/browse?topic=pandemic; it looks like most are trading around a 10% chance of anything happening outside China.

Comment by Czynski (JacobKopczynski) on Czynski's Shortform · 2023-12-06T05:58:16.328Z · LW · GW

Possible new pandemic? China is concealing evidence again. It looks like the smart money is against 'new virus' but thinks it's drug-resistant pneumonia, specifically resistant to the drugs that are safe for small children.

https://foreignpolicy.com/2023/11/28/chinese-hospitals-pandemic-outbreak-pneumonia/

Comment by Czynski (JacobKopczynski) on Crock, Crocker, Crockiest · 2023-11-10T06:27:36.045Z · LW · GW

The LessWrong user who acted as a sounding board over lunch is welcome to be credited if they want to be, or may wish to avoid association with this catastrophe waiting to happen.

I don't think I added anything but encouragement, but that was me. TBH, if it's a catastrophe, that's an interesting result in itself. I wonder if it happens every time.

Comment by Czynski (JacobKopczynski) on The North Oakland LessWrong Meetup · 2023-10-30T18:13:06.168Z · LW · GW

Updated to reflect the new, more regular schedule starting at the beginning of the year.

Comment by Czynski (JacobKopczynski) on Complex Signs Bad · 2023-07-05T01:47:53.500Z · LW · GW

Interesting. Strikes me as the logical extension of Choices are Bad in some senses.

Comment by Czynski (JacobKopczynski) on Czynski's Shortform · 2023-06-26T01:09:14.614Z · LW · GW

Censorship always prevents debates. The number of things which are explicitly banned from discussion may technically be small, but the chilling effect is huge. And the fact that ideas and symbols are banned is - correctly! - taken as evidence that they can't be beaten by argument, that people are afraid of the ideas. Also, naturally, the opposite side never has to practice their arguments, so they look like weak debaters because they are.