Posts

Tormenting Gemini 2.5 with the [[[]]][][[]] Puzzle 2025-03-29T02:51:29.786Z
North Oakland: Shallow Questions, March 12th 2025-03-01T18:42:55.713Z
North Oakland: Reading & Discussion, March 19th 2025-03-01T18:41:34.161Z
North Oakland: Projects, March 5th 2025-03-01T18:40:53.540Z
North Oakland: Short Talks, March 26th - THE LAST 2025-03-01T18:39:47.629Z
North Oakland: Puzzles, February 26th 2025-02-26T22:46:18.447Z
North Oakland: Reading & Discussion, February 19th 2025-01-29T08:07:18.278Z
North Oakland: Deep Questions, February 12th 2025-01-29T08:06:24.909Z
North Oakland: Group Debugging, Wednesday January 29th 2025-01-29T08:05:14.577Z
North Oakland: Projects, February 5th 2025-01-15T05:16:07.082Z
North Oakland: Reading & Discussion, January 22nd 2025-01-15T05:15:13.400Z
North Oakland: Shallow Questions, January 15th 2025-01-13T17:46:03.887Z
How to Edit an Essay into a Solstice Speech? 2024-12-15T04:30:50.545Z
North Oakland: Deep Questions, December 18th 2024-12-05T17:31:11.164Z
North Oakland: Year In Review, January 8th 2024-12-05T17:28:12.681Z
North Oakland: Short Talks, December 11th 2024-12-04T20:26:30.732Z
North Oakland: Projects, December 4th 2024-11-28T00:33:51.125Z
North Oakland/Berkeley: Ballots, October 16th 2024-08-12T03:38:48.461Z
North Oakland: Shallow Questions, November 20th 2024-08-12T03:33:13.923Z
North Oakland: Shallow Questions, September 25th 2024-08-12T03:32:35.030Z
North Oakland: Reading & Discussion, November 27th 2024-08-12T03:22:40.183Z
North Oakland: Short Talks, September 18th 2024-08-12T03:20:43.556Z
North Oakland: Projects, November 6th 2024-08-12T03:16:46.519Z
North Oakland: Projects, October 2nd 2024-08-12T03:16:27.892Z
North Oakland: Projects, September 4th 2024-08-12T03:15:54.474Z
North Oakland: Deep Questions, October 23rd 2024-08-12T03:10:55.766Z
North Oakland: Deep Questions, August 28th 2024-08-12T03:10:23.997Z
North Oakland: Reading & Discussion, October 30th 2024-08-12T03:07:40.518Z
North Oakland: Reading & Discussion, September 11th 2024-08-12T03:07:09.972Z
North Oakland: Reading & Discussion, August 21st 2024-08-12T03:05:42.229Z
North Oakland: Board Games, October 9th 2024-08-12T03:00:13.013Z
North Oakland: Board Games, Wednesday August 14th 2024-08-12T02:59:29.692Z
Index of rationalist groups in the Bay Area July 2024 2024-07-26T16:32:25.337Z
North Oakland: Projects, Wednesday August 7th 2024-07-22T22:09:57.144Z
North Oakland: Group Debugging, Wednesday July 24th 2024-07-22T21:32:41.739Z
North Oakland: Reading & Discussion, Wednesday July 17th 2024-07-13T22:11:37.886Z
North Oakland: Projects, July 9th 2024-06-29T06:44:24.927Z
North Oakland: Board Games, July 2nd 2024-06-27T23:50:17.579Z
North Oakland: Projects, April 23rd 2024-04-11T05:56:04.730Z
Meetup In a Box: Year In Review 2024-02-14T01:18:28.259Z
North Oakland: Year In Review, January 30th (Rescheduled) 2024-01-25T22:28:21.794Z
North Oakland: Meta Meetup, June 25th 2024-01-10T04:15:24.025Z
North Oakland: Group Debugging, May 28th 2024-01-10T04:13:05.363Z
North Oakland: Deep Questions, March 26th 2024-01-10T03:44:28.250Z
North Oakland: Shallow Questions, February 27th 2024-01-10T03:38:59.261Z
North Oakland: Short Talks, Wednesday July 31st 2024-01-10T03:35:12.118Z
North Oakland: Short Talks, April 30th 2024-01-10T03:34:22.444Z
North Oakland: Board Games, June 4th 2024-01-10T03:32:58.205Z
North Oakland: Board Games, May 7th 2024-01-10T03:32:23.033Z
North Oakland: Board Games, April 2nd 2024-01-10T03:30:53.684Z

Comments

Comment by Czynski (JacobKopczynski) on You Are Not Measuring What You Think You Are Measuring · 2025-04-15T22:09:03.622Z · LW · GW

Doesn't this imply that having a theory of the domain you're experimenting in is of low to no value? I find that hard to believe, and therefore doubt your assumptions are correct and applicable.

Comment by Czynski (JacobKopczynski) on Czynski's Shortform · 2025-04-12T17:36:35.418Z · LW · GW

https://philpapers.org/rec/ARVIAA

This paper uses famous problems from philosophy of science and philosophical psychology—underdetermination of theory by evidence, Nelson Goodman’s new riddle of induction, theory-ladenness of observation, and “Kripkenstein’s” rule-following paradox—to show that it is empirically impossible to reliably interpret which functions a large language model (LLM) AI has learned, and thus, that reliably aligning LLM behavior with human values is provably impossible.

So, this seems provisionally to be bullshit, because it doesn't admit of probabilistic reasoning or simplicity priors. But I'm not totally sure it's worthless. Anyone read it in detail?

Comment by Czynski (JacobKopczynski) on A concise version of “Twelve Virtues of Rationality”, with Anki deck · 2025-04-09T19:12:19.757Z · LW · GW

The older deck sucks. It contains the entirety of the essay without regard to what's important. This deck is still messy - it includes too much focus on the ordering and numbering of the virtues - but it's significantly superior, and contains the concise hearts of the matter. If you're trying to create a memory aid for the Twelve Virtues, this deck is absolutely an improvement.

Comment by Czynski (JacobKopczynski) on Meetups Notes (Q1 2025) · 2025-04-02T03:28:10.161Z · LW · GW

If there are a lot of people at the very-low-context NY meetup, possibly at least one very-low-context meetup per quarter is worth doing, to see if that gets more people in (or back)?

Comment by Czynski (JacobKopczynski) on Tormenting Gemini 2.5 with the [[[]]][][[]] Puzzle · 2025-03-30T21:08:25.258Z · LW · GW

As others found, apparently "think like a mathematician" is enough to get it to work.

Comment by Czynski (JacobKopczynski) on Tormenting Gemini 2.5 with the [[[]]][][[]] Puzzle · 2025-03-30T20:58:15.205Z · LW · GW

Not only is there not a standard name for this set of numbers, but it's not clear what that set of numbers is. I consulted a better mathematician in the past, and he said that if you allow multiplication it becomes a known unsolved problem whether its representations are unique and whether it can construct all algebraic numbers.

Comment by Czynski (JacobKopczynski) on Tormenting Gemini 2.5 with the [[[]]][][[]] Puzzle · 2025-03-30T17:35:09.673Z · LW · GW

If you give it the up-front caveat "this can represent all rational numbers and at least some algebraic irrationals", I think that rules out the polynomial approximation approach, since you can't give arbitrary arguments and get intermediate values by continuity. But I'm not certain of that.

Comment by Czynski (JacobKopczynski) on Tormenting Gemini 2.5 with the [[[]]][][[]] Puzzle · 2025-03-30T06:31:39.786Z · LW · GW

Yep, that works for Gemini 2.5 as well, got it in one try. In fact, just "think like a mathematician" is enough. Post canceled, everybody go home.

Comment by Czynski (JacobKopczynski) on Tormenting Gemini 2.5 with the [[[]]][][[]] Puzzle · 2025-03-30T06:20:42.478Z · LW · GW

Yes, figure out the notation. The test I gave the LLMs, to be sure their solutions weren't secretly the same as mine in different language, was to ask them to properly encode 30,000 and (210)^(2/5).
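
For concreteness, the arithmetic behind those two test values - just their prime factorizations, assuming the notation bottoms out in prime exponents (the exact rules aren't restated here):

30,000 = 2^4 · 3 · 5^4
(210)^(2/5) = (2 · 3 · 5 · 7)^(2/5) = 2^(2/5) · 3^(2/5) · 5^(2/5) · 7^(2/5)

The second value is the sharper test, since it requires the notation to express the fractional exponent 2/5.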

Comment by Czynski (JacobKopczynski) on Tormenting Gemini 2.5 with the [[[]]][][[]] Puzzle · 2025-03-30T06:16:38.731Z · LW · GW

I meant to leave this link in that footnote. It's really quite awful.

Comment by Czynski (JacobKopczynski) on Tormenting Gemini 2.5 with the [[[]]][][[]] Puzzle · 2025-03-30T06:15:09.976Z · LW · GW

Sure, done.

Comment by Czynski (JacobKopczynski) on Going Nova · 2025-03-20T04:04:07.649Z · LW · GW

Can anyone provide an example conversation (or prefix thereof) which leads to a 'Nova' state? I'm finding it moderately tricky to imagine, not being the kind of person who goes looking for it.

Comment by Czynski (JacobKopczynski) on Czynski's Shortform · 2025-03-04T15:45:22.325Z · LW · GW

Paraphrasing Eddington: If your theory of morality is incompatible with factory farming, then so much the worse for factory farming. If it says not to touch the trolley problem, well, even nominally-obvious thought experiments can be wrong sometimes. But if it says to run a risk of death for all humanity for the sake of animals or minds that don't share human values, there is no hope for it: so much the worse for the theory at best, or so much the worse for morality at worst.

Comment by Czynski (JacobKopczynski) on Benito's Shortform Feed · 2025-02-28T21:44:39.747Z · LW · GW

I find them visually awful and disable them in settings. And avoid using archive.is because there's no way to turn that off.

Not that I browse LW that much, in fairness.

Comment by Czynski (JacobKopczynski) on MichaelDickens's Shortform · 2025-02-23T01:02:20.145Z · LW · GW

People typically only select into this sort of role if they're a bit more prone to conflict about it, which means a lot of the work is kinda thankless because people are pushing back on you for being too conflicty.

Things can be done to encourage this behavior anyway, such as through how the site works. Instead the opposite has been done; this is the root of my many heated disagreements with the LW team.

Comment by Czynski (JacobKopczynski) on Czynski's Shortform · 2025-02-23T00:55:52.156Z · LW · GW

Addressing primarily Rethink Priorities's talk at EAG.

Comment by Czynski (JacobKopczynski) on Czynski's Shortform · 2025-02-23T00:54:49.932Z · LW · GW

On Nonperson Predicates

Assume that digital minds will be most of the minds the future holds. Won't this overwhelmingly be after whatever capability escalation passes for "the Singularity", and therefore be addressed at 99.9% efficiency by delaying consideration of the problem until after that capability exists and makes it vastly easier?

Comment by Czynski (JacobKopczynski) on Anvil Shortage · 2025-01-28T22:56:13.283Z · LW · GW

A myth contained in the classical Jewish text Pirkei Avot states that the first pair of tongs was created by God right before God rested on the Seventh Day. The reasoning is that a blacksmith must use a pair of tongs in order to fashion a new pair of tongs. Accordingly, God must have provided humankind with the first pair of tongs.

Comment by Czynski (JacobKopczynski) on North Oakland: Shallow Questions, January 15th · 2025-01-15T04:47:27.627Z · LW · GW

Wednesday yes, sorry.

Comment by Czynski (JacobKopczynski) on Meetup In a Box: Year In Review · 2025-01-09T04:11:40.029Z · LW · GW

Trying this again, I think the question sets need a little work.

Comment by Czynski (JacobKopczynski) on How Much to Give is a Pragmatic Question · 2024-12-24T18:51:58.565Z · LW · GW

But Doctor, I am Kaufman!

EDIT: Oh wait, he linked this joke himself. I feel less clever now.

Comment by Czynski (JacobKopczynski) on Czynski's Shortform · 2024-12-24T06:19:32.850Z · LW · GW

As usual after Solstice, I had an urge to write about Solstice, in this case a speech I may someday give.

Comment by Czynski (JacobKopczynski) on Czynski's Shortform · 2024-12-24T05:57:06.462Z · LW · GW

Tried to leave this as a review comment, which is blocked:

Even with the benefit of hindsight proving that Trump could and would get reelected, this still looks just as badly-constructed as it did at the time. This was an argument based in fear and rationalization, not a clear-eyed prediction of the future. The bottom line was written first.

Comment by Czynski (JacobKopczynski) on Czynski's Shortform · 2024-12-24T05:47:54.974Z · LW · GW

Editing Essays into Solstice Speeches: Standing offer: if you have a speech to give at Solstice or other rationalist event, message me and I'll look at your script and/or video call you to critique your performance and help

Comment by Czynski (JacobKopczynski) on How to Edit an Essay into a Solstice Speech? · 2024-12-24T05:46:26.395Z · LW · GW

Ask Me For Help

Standing offer: if you have a speech to give at Solstice or other rationalist event, message me and I'll look at your script and/or video call you to critique your performance and help

Comment by Czynski (JacobKopczynski) on (The) Lightcone is nothing without its people: LW + Lighthaven's big fundraiser · 2024-12-02T06:22:54.572Z · LW · GW

I don't have much understanding of current AI discussions, and it's possible those are somewhat better - a less advanced case of the rot.

Those same psychological reasons indicate that anything which is actual dissent will be interpreted as incivility. This has happened here and is happening as we speak. It was one of the significant causes of the SBF debacle. It's significantly responsible for the rise of woo among rationalists, though my sense is that that's started to recede (years later). It's why EA as a movement seems to be mostly useless at this point, coasting on gathered momentum (mostly in the form of people who joined early and kept their principles).

I'm aware there is a tradeoff, but being committed to truthseeking demands that we pick one side of that tradeoff, and LessWrong the website has chosen to pick the other side instead. I predicted this would go poorly years before any of the things I named above happened.

I can't claim to have predicted the specifics - I don't get many Bayes Points for any of them - but they're all within-model. Especially EA's drift (mostly seeking PR and movement breadth). The earliest specific point where I observed this problem happening was 'Intentional Insights', where it was uncivil to observe that the man was a huckster faking community signals, and so it took several rounds of blatant hucksterism for him to finally be disavowed and forced out. If EA had learned this lesson then, it would be much smaller, but would probably (80%) have avoided involvement in FTX. LW-central rationalism is not as bad, yet, but it looks to me to be on the same path.

Comment by Czynski (JacobKopczynski) on (The) Lightcone is nothing without its people: LW + Lighthaven's big fundraiser · 2024-12-02T06:06:48.232Z · LW · GW

I still prefer the ones I see there to what I see on LW. Lower quantity, higher value.

Comment by Czynski (JacobKopczynski) on (The) Lightcone is nothing without its people: LW + Lighthaven's big fundraiser · 2024-11-30T22:14:51.929Z · LW · GW

Currently no great alternatives exist, because LW killed them. The quality of the comment sections on SSC and most other rationalist blogs I was following got much worse when LW was rebooted (which killed several of those blogs outright). Initially it looked like LW was an improvement, but over time the structural flaws killed it too.

I still see much better comments on individual blogs - Zvi, Sarah Constantin, Elizabeth vN, etc. - than on LessWrong. Some community Discords are pretty good, though they are small walled gardens; rationalist Tumblr has, surprisingly, gotten actively better over time, even as it shrank. All of these are low volume.

It's possible in theory that the volume of good comments on LessWrong is higher than in those places. I don't know, and in practical terms don't care, because they're drowned out by junk, mostly highly-upvoted junk. I don't bother to look for good comments here at all, because they're sufficiently bad that it's not worthwhile. I post here only for visibility, not for good feedback, because I know I won't get it; I only noticed this post at all because of a link from a Discord.

Groupthink is not a possible future, to be clear. It's already here in a huge way, and probably not fixable. If there was a chance of reversing the trend, it ended with Said being censured and censored for being stubbornly anti-groupthink to the point of rudeness - because he was braver, or more stubborn, than me, and kept trying for a couple of years after I gave up.

Comment by Czynski (JacobKopczynski) on (The) Lightcone is nothing without its people: LW + Lighthaven's big fundraiser · 2024-11-30T18:42:15.422Z · LW · GW

I see much more value in Lighthaven than in the rest of the activity of Lightcone.

I wish Lightcone would split into two (or even three) organizations. I would unequivocally endorse donating to Lighthaven and recommend it to others, vs. LessWrong, where I'm not at all confident it's net positive over blogs and Substacks, and the grantmaking infrastructure and other meta, which is highly uncertain and probably highly replaceable.

All of the analysis of the impact of new LessWrong is misleading at best: it assumes that volume on LessWrong is good in itself, which I do not believe to be the case. If similar volume is being stolen from other places - e.g. people dropping away from blogs on the SSC blogroll, or failing to create their own Substacks - which I think is very likely to be true, then this is of minimal benefit to the community, and likely negative benefit to the world, as LW is less visible and influential than strong existing blogs or well-written new Substacks.

That's on top of my long-standing objections to the structure of LW, which is bad for community epistemics in a way that standard blogs are not, because it encourages groupthink. If you agree with my contention there, then even a large net increase in volume would still be, in expectation, significantly negative for the community and the world. Weighted voting delenda est; post-author moderation delenda est; in order to win the war of good group epistemics, we must accept losing the battles in which the prospect of harsh, rude, and/or ignorant feedback discourages some marginal posts.

Comment by Czynski (JacobKopczynski) on Lighthaven Sequences Reading Group #12 (Tuesday 11/26) · 2024-11-28T00:31:17.788Z · LW · GW

That was true this week, but the first time I attended (the 12th) I believe it wasn't: I arrived at what I think was 6:20-6:25 and found everything had already started.

Comment by Czynski (JacobKopczynski) on Lighthaven Sequences Reading Group #12 (Tuesday 11/26) · 2024-11-27T01:48:19.747Z · LW · GW

Based on my prior experience running meetups, a 15m gap between 'doors open' and starting the discussion is too short. 30m is the practical minimum; I prefer 45-60m, because I optimize for a low barrier to entry (as a means of being welcoming).

I also find this to be a significant barrier in participating myself, as targeting a fifteen-minute window for arrival is usually beyond my planning abilities unless I have something else with a hard end time within the previous half-hour.

Comment by Czynski (JacobKopczynski) on Lighthaven Sequences Reading Group #12 (Tuesday 11/26) · 2024-11-27T01:09:49.631Z · LW · GW

The amount of empty space where the audience understands what's going on and nothing new or exciting is happening is much, much higher in 60s-70s film and TV. Pacing is an art, and that art has improved drastically in the last half-century.

Standards were also lower, though I'm more confident of this for television. In the 90s, to get kids interested in a science show you needed Bill Nye. In the 60s, doing ordinary high-school science projects with no showmanship whatsoever was wildly popular, because it was on television and that was inherently novel and fascinating. (This show actually existed.)

Comment by Czynski (JacobKopczynski) on Rationality Quotes - Fall 2024 · 2024-11-24T17:15:07.091Z · LW · GW

A man who is always asking 'Is what I do worth while?' and 'Am I the right person to do it?' will always be ineffective himself and a discouragement to others.

-- G.H. Hardy, A Mathematician's Apology

Comment by Czynski (JacobKopczynski) on What is Evidence? · 2024-11-12T00:27:23.173Z · LW · GW

a belief is only really worthwhile if you could, in principle, be persuaded to believe otherwise

There's a point to be made here about why 'unconditional love' is unsatisfying to the extent the description as 'unconditional' is accurate.

Comment by Czynski (JacobKopczynski) on Lighthaven Sequences Reading Group #8 (Tuesday 10/29) · 2024-11-02T21:17:43.896Z · LW · GW

...Oh, my mistake - it looked like they were posted a lot later than that, and the ~skipped one seemed to confirm it. Usually-a-week ahead is plenty of time, and I'm sorry I said anything.

Comment by Czynski (JacobKopczynski) on Lighthaven Sequences Reading Group #8 (Tuesday 10/29) · 2024-10-31T03:55:42.963Z · LW · GW

Could you please announce these further in advance? Especially given the reading required beforehand, it's inconvenient and honestly seems a little inconsiderate.

Comment by Czynski (JacobKopczynski) on Rationality Quotes - Fall 2024 · 2024-10-14T02:48:37.850Z · LW · GW

That's a fascinating approach to characterization. What do you do, have the actors all read the appendix before they start rehearsals?

Comment by Czynski (JacobKopczynski) on Rationality Quotes - Fall 2024 · 2024-10-12T00:44:31.756Z · LW · GW

This is apparently from a play, Man and Superman, which I have never previously heard of, let alone read or seen. I suspect that, much like Oscar Wilde's plays, it is at least as much a vehicle for witty epigrams as it is an actual performance or plot.

Comment by Czynski (JacobKopczynski) on Rationality Quotes - Fall 2024 · 2024-10-12T00:40:22.711Z · LW · GW

The reasonable man adapts himself to the world: the unreasonable one persists in trying to adapt the world to himself. Therefore all progress depends on the unreasonable man.

-- George Bernard Shaw, epigram

(Inspired by part of Superintelligences will not spare Earth sunlight)

Comment by Czynski (JacobKopczynski) on Rationality Quotes - Fall 2024 · 2024-10-12T00:03:44.837Z · LW · GW

For as in absolute governments the King is law, so in free countries the law ought to be King; and there ought to be no other. But lest any ill use should afterwards arise, let the crown at the conclusion of the ceremony be demolished, and scattered among the people whose right it is.

-- Thomas Paine, Common Sense, demonstrating the Virtue of The Void

Comment by Czynski (JacobKopczynski) on Rationality Quotes - Fall 2024 · 2024-10-11T23:51:59.805Z · LW · GW

The most potent way to sacrifice your life has always been to do so one day at a time.

-- BoneyM, Divided Loyalties

Comment by Czynski (JacobKopczynski) on Conflating value alignment and intent alignment is causing confusion · 2024-09-12T02:52:52.922Z · LW · GW

I currently slightly prefer an  but that's pending further thought and discussion.

missing thought in the footnotes

Comment by Czynski (JacobKopczynski) on The Information: OpenAI shows 'Strawberry' to feds, races to launch it · 2024-08-30T18:03:44.868Z · LW · GW

We knew they were experimenting with synthetic data. We didn't know they were succeeding.

Comment by Czynski (JacobKopczynski) on Index of rationalist groups in the Bay Area July 2024 · 2024-08-18T01:39:15.667Z · LW · GW

Not sure whether to add these in, but a number of local Google calendars theoretically exist: https://calendar.google.com/calendar/render?cid=bayarearationality%40gmail.com&cid=f6qs8c387dhlounnbqg6lbv3b0%40group.calendar.google.com&cid=94j0drsqgj43nkekg8968b3uo4%40group.calendar.google.com&cid=8hq2d2indjps3vr64l96e9okt4%40group.calendar.google.com&cid=theberkeleyreach%40gmail.com

This includes Berkeley REACH (defunct), CFAR Public Events (defunct locally AFAIK), EA Events (superseded by a Luma calendar?), LW Meetups (unknown, but blank), and Rationalist/EA Social and Community Events (likewise).

Comment by Czynski (JacobKopczynski) on The North Oakland LessWrong Meetup · 2024-08-01T05:48:04.131Z · LW · GW

Updated to reflect the new, less regular schedule (and change of weekday) since the half-year mark.

Comment by Czynski (JacobKopczynski) on Rats, Back a Candidate · 2024-07-29T21:26:55.205Z · LW · GW

That's not what tribalism means.

Comment by Czynski (JacobKopczynski) on Index of rationalist groups in the Bay Area July 2024 · 2024-07-29T19:24:26.691Z · LW · GW

I think at normal times (when it's not filled with MATS or a con) it's possible to rent coworking space at Lighthaven? I haven't actually tried myself.

Comment by Czynski (JacobKopczynski) on Rats, Back a Candidate · 2024-07-29T19:19:10.200Z · LW · GW

Our New Orleans Rat group grows on tribalistic calls to action. “Donate to Global Health Initiatives,” “Do Art,” “Learn About AI.”

If you consider those tribalistic calls to action, I'm not sure any of you are doing evidence-based thinking in the first place. I suppose if the damage is already done, it will not make anything worse if your specific group engages in politics.

Comment by Czynski (JacobKopczynski) on Rats, Back a Candidate · 2024-07-28T04:41:52.373Z · LW · GW

There is basically no method of engaging with politics worse than backing a national candidate. It has tiny impact even if successful, it is the most aggressively tribalism-infected arena, and it is incredibly hard to say anything novel about.

If you must get involved in politics, it should be local, issue-based, and unaffiliated with LW or rationalism. It is far more effective to lobby on issues than for candidates, it is far more effective to support local candidates than national, and there is minimal upside and enormous downside to having any of your political efforts tied with the 'brand' of rationalism or LW.

Comment by Czynski (JacobKopczynski) on Rats, Back a Candidate · 2024-07-28T04:37:39.201Z · LW · GW

The track record for attempts to turn tribalism into evidence-based thinking is very poor. The result, almost always, is to turn the evidence-based thinking into tribalism.