Posts

Institution Design and Coordination 2024-09-28T20:36:23.416Z
Halifax Canada - ACX Meetups Everywhere Fall 2024 2024-08-29T18:39:12.490Z
Halifax – ACX Meetups Everywhere Spring 2024 2024-03-30T11:28:31.961Z
Halifax Rationality Meetup 2024-02-13T04:17:49.763Z
Open-Source AI and Bioterrorism Risk 2023-11-04T22:45:55.171Z
Halifax, Nova Scotia, Canada – ACX Meetups Everywhere Fall 2023 [UPDATE: POSTPONED BY 1 WEEK] 2023-08-25T23:33:54.802Z
Meetup - Prediction Markets 2023-07-31T03:48:49.708Z
Halifax LW Meetup - July 15 2023-07-08T03:02:03.887Z
Halifax LW Meetup - June 10th 2023-06-05T02:23:12.226Z
Halifax, Nova Scotia, Canada – ACX Spring Meetups Everywhere Spring 2023 2023-04-11T01:19:56.513Z
Alignment Might Never Be Solved, By Humans or AI 2022-10-07T16:14:37.047Z
Will Values and Competition Decouple? 2022-09-28T16:27:23.078Z
Kolmogorov's AI Forecast 2022-06-10T02:36:00.869Z
Tao, Kontsevich & others on HLAI in Math 2022-06-10T02:25:38.341Z
Halifax Rationality / EA Coworking Day 2022-06-01T17:47:00.463Z
What's the Relationship Between "Human Values" and the Brain's Reward System? 2022-04-19T05:15:48.971Z
Halifax Spring Meetup 2022-04-18T20:12:23.769Z
Consciousness: A Compression-Based Approach 2022-04-16T16:40:11.168Z
Algorithmic Measure of Emergence v2.0 2022-03-10T20:26:26.996Z
Meetup at Propeller Brewing Company 2022-02-06T07:22:36.499Z
Advancing Mathematics By Guiding Human Intuition With AI 2021-12-04T20:00:41.408Z
NTK/GP Models of Neural Nets Can't Learn Features 2021-04-22T03:01:43.973Z
interstice's Shortform 2021-03-08T21:14:11.183Z
What Are Some Alternative Approaches to Understanding Agency/Intelligence? 2020-12-29T23:21:05.779Z
Halifax SSC Meetup -- FEB 8 2020-02-08T00:45:37.738Z
HALIFAX SSC MEETUP -- FEB. 1 2020-01-31T03:59:05.110Z
SSC Halifax Meetup -- January 25 2020-01-25T01:15:13.090Z
Clarifying The Malignity of the Universal Prior: The Lexical Update 2020-01-15T00:00:36.682Z
Halifax SSC Meetup -- Saturday 11/1/20 2020-01-10T03:35:48.772Z
Recent Progress in the Theory of Neural Networks 2019-12-04T23:11:32.178Z
Halifax Meetup -- Board Games 2019-04-15T04:00:02.799Z
Predictors as Agents 2019-01-08T20:50:49.599Z
A Candidate Complexity Measure 2017-12-31T20:15:39.629Z
Please Help: How to make a big improvement in the alignment of political parties’ incentives with the public interest? 2017-01-18T00:51:56.355Z

Comments

Comment by interstice on The Case For Giving To The Shrimp Welfare Project · 2024-11-16T16:56:42.896Z · LW · GW

Confused as to why this is so heavily downvoted.

Comment by interstice on Habryka's Shortform Feed · 2024-11-15T22:07:34.731Z · LW · GW

These emails and others can be found in document 32 here.

Comment by interstice on Alexander Gietelink Oldenziel's Shortform · 2024-11-06T16:56:55.253Z · LW · GW

but it seems that even on LW people think winning on a noisy N=1 sample is proof of rationality

It's not proof of a high degree of rationality, but it is evidence against being an "idiot," as you said. Especially since the election isn't merely a binary yes/no outcome: we can observe that there was a huge Republican blowout exceeding most forecasts (and in fact Freddi bet a lot on the Republican popular vote too, at worse odds, as well as some random states, which gives a larger update). This should increase our credence that predicting a Republican win was rational. There were also some smart observers with, IMO, good arguments that Trump was favored pre-election, e.g. https://x.com/woke8yearold/status/1851673670713802881

"Guy with somewhat superior election modeling to Nate Silver, a lot of money, and high risk tolerance" is consistent with what we've seen. Not saying that we have strong evidence that Freddi is a genius but we also don't have much reason to think he is an idiot IMO.

Comment by interstice on Alexander Gietelink Oldenziel's Shortform · 2024-11-06T03:10:47.952Z · LW · GW

Looks likely that tonight is going to be a massive transfer of wealth from "sharps" (among other people) to him. Post hoc and all, but I think if somebody is raking in huge wins while making "stupid" decisions, it's worth considering whether they're actually so stupid after all.

Comment by interstice on Shortform · 2024-10-26T22:19:10.157Z · LW · GW

Good post; it's underappreciated that a society of ideally rational people wouldn't have unsubsidized, real-money prediction markets.

unless you've actually got other people being wrong even in light of the new actors' information

Of course, in real prediction markets this is exactly what we see. Maybe you could think of PMs as they exist not as something that would exist in an equilibrium of ideally rational agents, but as a method of moving our society closer to such an equilibrium, subsidized by the bets of systematically irrational people. It's not a perfect method, but it does have the advantage of simplicity. How many of these issues could be solved by subsidizing markets?
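
One concrete way to subsidize a market (a minimal sketch of my own, not something proposed in the thread; the class name, liquidity parameter, and outcome labels are invented) is to have a sponsor run an automated market maker such as Hanson's logarithmic market scoring rule, whose worst-case loss, i.e. the subsidy, is bounded by b·ln(n) for n outcomes:

```python
import math

# Minimal sketch of a sponsor-subsidized prediction market using Hanson's
# logarithmic market scoring rule (LMSR). Illustrative only; parameter values
# and names are invented for this example.

class LMSRMarket:
    def __init__(self, outcomes, b=100.0):
        # b sets liquidity; the sponsor's worst-case loss (the subsidy)
        # is bounded by b * ln(number of outcomes).
        self.b = b
        self.q = {o: 0.0 for o in outcomes}  # outstanding shares per outcome

    def _cost(self):
        return self.b * math.log(sum(math.exp(v / self.b) for v in self.q.values()))

    def price(self, outcome):
        # Current implied probability of `outcome`.
        denom = sum(math.exp(v / self.b) for v in self.q.values())
        return math.exp(self.q[outcome] / self.b) / denom

    def buy(self, outcome, shares):
        # Charge the trader the change in the cost function; each share pays
        # out 1 unit if `outcome` occurs. The sponsor absorbs any shortfall.
        before = self._cost()
        self.q[outcome] += shares
        return self._cost() - before

market = LMSRMarket(["YES", "NO"], b=100.0)
print(market.price("YES"))               # 0.5 before any trades
print(market.buy("YES", 50.0))           # cost charged to the trader
print(market.price("YES"))               # implied probability rises
print("max subsidy:", market.b * math.log(2))
```

The point of the sketch is that the sponsor's payment is capped up front, so informed traders get paid for moving the price toward the truth without needing a supply of systematically irrational counterparties.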

Discord Message

What discord is this? Sounds cool.

Comment by interstice on Alexander Gietelink Oldenziel's Shortform · 2024-10-05T04:25:06.885Z · LW · GW

That's probably the one I was thinking of.

Comment by interstice on Alexander Gietelink Oldenziel's Shortform · 2024-10-04T18:17:23.963Z · LW · GW

I know of only two people who anticipated something like what we are seeing far ahead of time; Hans Moravec and Jan Leike

I didn't know about Jan's AI timelines. Shane Legg also had some decently early predictions of AI around 2030 (~2007 was the earliest I knew about).

Comment by interstice on A Path out of Insufficient Views · 2024-09-27T17:09:24.039Z · LW · GW

Some beliefs can be better or worse at predicting what we observe; this is not the same thing as popularity.

Comment by interstice on [Completed] The 2024 Petrov Day Scenario · 2024-09-26T14:18:48.564Z · LW · GW
Comment by interstice on Why should anyone boot *you* up? · 2024-08-25T01:48:05.776Z · LW · GW

Far enough in the future, ancient brain scans would be fascinating antique artifacts, like rare archaeological finds today. I think people would be interested in reviving you on that basis alone (assuming there are people-like things with some power in the future).

Comment by interstice on Habryka's Shortform Feed · 2024-08-16T05:08:57.873Z · LW · GW

I like the decluttering. I think the title should be smaller and have less white space above it. I also think it would be better if the ToC were just heavily faded until mouseover; its sudden appearance/disappearance feels too abrupt.

Comment by interstice on Do Prediction Markets Work? · 2024-08-01T03:03:10.845Z · LW · GW
Comment by interstice on Pivotal Acts are easier than Alignment? · 2024-07-22T03:43:24.544Z · LW · GW

No, I don't think so, because people could just airgap the GPUs.

Comment by interstice on Pivotal Acts are easier than Alignment? · 2024-07-21T16:02:39.132Z · LW · GW

Weaker AI probably wouldn't be sufficient to carry out an actually pivotal act. For example, the GPU virus would probably be worked around soon after deployment, via airgapping GPUs, developing software countermeasures, or just resetting infected GPUs.

Comment by interstice on Towards more cooperative AI safety strategies · 2024-07-21T05:37:57.821Z · LW · GW

This discussion is a nice illustration of why x-riskers are definitely more power-seeking than the average activist group. Just like Eskimos proverbially have 50 words for snow, AI-risk-reducers need at least 50 terms for "taking over the world" to demarcate the range of possible scenarios. ;)

Comment by interstice on Towards more cooperative AI safety strategies · 2024-07-21T04:27:40.071Z · LW · GW

Nice overview. I agree, but I think the 2016-2021 plan could still arguably be described as "obtain god-like AI and use it to take over the world" (admittedly with some rhetorical exaggeration, but, like, not that much).

Comment by interstice on Towards more cooperative AI safety strategies · 2024-07-21T03:02:03.482Z · LW · GW

I would be happy to take bets here about what people would say.

Sure, I DM'd you.

Comment by interstice on Towards more cooperative AI safety strategies · 2024-07-21T01:25:08.047Z · LW · GW

I think making inferences from that to modern MIRI is about as confused as making inferences from people's high-school essays about what they will do when they become president

Yeah, but it's not just the old MIRI views; it's those in combination with their statements about what one might do with powerful AI, the telegraphed omissions in those statements, and other public parts of their worldview, e.g. regarding the competence of the rest of the world. I get the pretty strong impression that "a small group of people with overwhelming hard power" was the ideal goal, and that this would ideally be controlled by MIRI or by a small group of people handpicked by them.

Comment by interstice on Towards more cooperative AI safety strategies · 2024-07-20T18:08:17.776Z · LW · GW

I think they talked explicitly about planning to deploy the AI themselves back in the early days (2004-ish), then gradually transitioned to talking generally about what someone with a powerful AI could do.

But I strongly suspect that in the event that they were the first to obtain powerful AI, they would deploy it themselves or perhaps give it to handpicked successors. Given Eliezer's worldview, I don't think it would make much sense for them to give the AI to the US government (considered incompetent) or to AI labs (negligently reckless).

Comment by interstice on Towards more cooperative AI safety strategies · 2024-07-20T17:27:21.106Z · LW · GW
Comment by interstice on Towards more cooperative AI safety strategies · 2024-07-19T03:48:13.097Z · LW · GW

It wasn't specified, but I think they strongly implied it would be that or something equivalently coercive. The "melting GPUs" plan was explicitly not a pivotal act but rather something with the required level of difficulty, and it was implied that the actual pivotal act would be something further outside the political Overton window. When you consider the ways "melting GPUs" would be insufficient, a plan like this is the natural conclusion.

doesn't require replacing existing governments

I don't think you would need to replace existing governments. Just block all AI projects and maintain your ability to keep doing so in the future by maintaining military supremacy. Get existing governments to help you, or at least not interfere, via some mix of coercion and trade. Sort of a feudal arrangement with a minimalist central power.

Comment by interstice on Towards more cooperative AI safety strategies · 2024-07-19T02:23:05.730Z · LW · GW

"Taking over" something does not imply that you are going to use your authority in a tyrannical fashion. People can obtain control over organizations and places and govern with a light or even barely-existent touch, it happens all the time.

Would you accept "they plan to use extremely powerful AI to institute a minimalist, AI-enabled world government focused on preventing the development of other AI systems" as a summary? Like sure, "they want to take over the world" as a gist of that does have a bit of an editorial slant, but not that much of one. I think that my original comment would be perceived as much less misleading by the majority of the world's population than "they just want to do some helpful math uwu," in the event that these plans actually succeeded. I also think it's obvious that these plans indicate a far higher degree of power-seeking (in aim, at least) than virtually all other charitable organizations.

(...and to reiterate, I'm not taking a strong stance on the advisability of these plans. In a way, had they succeeded, that would have provided a strong justification for their necessity. I just think it's absurd to say that the organization making them is less power-seeking than the ADL or whatever.)

Comment by interstice on Towards more cooperative AI safety strategies · 2024-07-17T16:36:00.383Z · LW · GW

Are you saying that AIS movement is more power-seeking than environmentalist movement that spent 30M$+[...]

I think that AIS lobbying is likely to have more consequential and enduring effects on the world than environmental lobbying regardless of the absolute size in body count or amount of money, so yes.

"MIRI default plan" was "to do math in hope that some of this math will turn out to be useful".

I mean, yeah, that is a better description of their publicly-known day-to-day actions, but intention also matters. They settled on math after it became clear that the god-AI plan was not achievable (and recently gave up on the math plan too, when it became clear that it was not realistic). An analogy might be an environmental group that planned to end pollution by bio-engineering a microbe to spread throughout the world and make oil production impossible, then reluctantly settled for lobbying once they realized they couldn't actually make the microbe. I think this would be a pretty unusually power-seeking plan for an environmental group!

Comment by interstice on Towards more cooperative AI safety strategies · 2024-07-16T14:50:20.494Z · LW · GW

Are you sure [...] et cetera are less power-seeking than AI Safety community?

Until recently the MIRI default plan was basically "obtain god-like AI and use it to take over the world" ("pivotal act"); it's hard to get more power-seeking than that. Other wings of the community have been more circumspect but also more active in things like founding AI labs, influencing government policy, etc., to the tune of many billions of dollars' worth of total influence. Not saying this is necessarily wrong, but it does seem empirically clear that AI-risk-avoiders are more power-seeking than most movements.

let’s ensure that AGI corporations have management that is not completely blind to alignment problem

Seems like this is already the case.

Comment by interstice on Will quantum randomness affect the 2028 election? · 2024-07-14T14:37:05.591Z · LW · GW
Comment by interstice on I would have shit in that alley, too · 2024-06-18T07:34:00.265Z · LW · GW
Comment by interstice on Degeneracies are sticky for SGD · 2024-06-17T00:45:30.148Z · LW · GW

Possibly related: "Stochastic Collapse: How Gradient Noise Attracts SGD Towards Sparser Subnetworks"

Comment by interstice on How to get nerds fascinated about mysterious chronic illness research? · 2024-05-29T16:51:59.647Z · LW · GW

Seconded.

Comment by interstice on How to get nerds fascinated about mysterious chronic illness research? · 2024-05-29T05:14:12.467Z · LW · GW

Makes sense. But I think the OP is using the term to mean something different than you do (centrally, math and puzzle-solving).

Comment by interstice on How to get nerds fascinated about mysterious chronic illness research? · 2024-05-29T02:39:43.753Z · LW · GW

Hmm, but don't puzzle games and math fit those criteria pretty well? (I guess if you're really trying hard at either, there's more legitimate contact with reality?) What would you consider a central example of a nerdy interest?

Comment by interstice on How to get nerds fascinated about mysterious chronic illness research? · 2024-05-28T02:33:13.464Z · LW · GW

I wonder if "brains" of the sort that are useful for math and programming are neccessarily all that helpful here. I think intuition-guided trial and error might work better. That's been my experience dealing with chronic-illness type stuff.

Comment by interstice on Truthseeking is the ground in which other principles grow · 2024-05-27T16:32:24.538Z · LW · GW

I think she meant he was looking for epistemic authority figures to defer to more broadly, even if it wasn't because he thought they were better at math than him.

Comment by interstice on Quantized vs. continuous nature of qualia · 2024-05-15T17:19:37.972Z · LW · GW

Some advanced meditators report that they do perceive experience as being basically discrete, flickering in and out of existence at a very high frequency (which is why it might appear continuous without sufficient attention). See e.g. https://www.mctb.org/mctb2/table-of-contents/part-i-the-fundamentals/5-the-three-characteristics/

Comment by interstice on Spatial attention as a “tell” for empathetic simulation? · 2024-04-26T20:28:54.573Z · LW · GW

Tangentially related: some advanced meditators report that their sense that perception has a center vanishes at a certain point along the meditative path, and this is associated with a reduction in suffering.

Comment by interstice on Is being a trans woman (or just low-T) +20 IQ? · 2024-04-25T13:51:07.260Z · LW · GW

performance gap of trans women over women

The post is about the performance gap of trans women over men, not women.

Comment by interstice on Is being a trans woman (or just low-T) +20 IQ? · 2024-04-25T04:22:30.613Z · LW · GW

I don't know enough about hormonal biology to guess a specific cause (some general factor of neoteny, perhaps?). It's much easier to infer that it's likely some third factor than to know exactly what that third factor is. I actually think most of the evidence in this very post supports the third-factor position or is equivocal: testosterone acting as a nootropic is very weird if it makes you dumber; that men and women have equal IQs seems not to be true; the study cited to support a U-shaped relationship seems flimsy; and that most of the ostensible damage occurs before adulthood seems in tension with your smarter friends transitioning after high school.

Comment by interstice on Is being a trans woman (or just low-T) +20 IQ? · 2024-04-24T22:14:43.591Z · LW · GW

I buy that trans women are smart, but I doubt "testosterone makes you dumber" is the explanation; more likely some third factor raises IQ and lowers testosterone.

Comment by interstice on Tamsin Leake's Shortform · 2024-04-13T20:36:29.013Z · LW · GW

I think using the universal prior again is more natural. It's simpler to use the same complexity metric for everything; it's more consistent with Solomonoff induction, in that the weight assigned by Solomonoff induction to a given (world, claw) pair would be approximately the sum of their Kolmogorov complexities; and the universal prior dominates the inverse square measure but the converse doesn't hold.
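
To make the comparison concrete (a rough sketch in my own notation, not something stated in the comment):

```latex
% K is prefix Kolmogorov complexity; the "claw" indexes a location within the
% world-program's output, specified by a second program run after the first.
w(\mathrm{world}, \mathrm{claw}) \;\approx\; 2^{-\left(K(\mathrm{world}) + K(\mathrm{claw} \mid \mathrm{world})\right)},
\qquad
-\log_2 w \;\approx\; K(\mathrm{world}) + K(\mathrm{claw} \mid \mathrm{world}).
% Dominance: for the universal prior m and any computable measure \mu over
% indices (e.g. an inverse-square weighting), there is a constant c > 0 with
m(x) \;\ge\; c\,\mu(x) \quad \text{for all } x,
% while no computable \mu dominates m in the same sense.
```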

Comment by interstice on Tamsin Leake's Shortform · 2024-04-13T18:38:19.873Z · LW · GW

If you want to pick out locations within some particular computation, you can just use the universal prior again, applied to indices to parts of the computation.

Comment by interstice on Tamsin Leake's Shortform · 2024-04-13T16:54:26.668Z · LW · GW

If you're running on the non-time-penalized solomonoff prior [...] a bunch of things break including anthropic probabilities and expected utility calculations

This isn't true; you can get perfectly fine probabilities and expected utilities from ordinary Solomonoff induction (barring computability issues, of course). The key here is that SI is defined in terms of a prefix-free UTM, whose set of valid programs forms a prefix-free code, which automatically gives probabilities summing to at most 1, etc. This issue is often glossed over in popular accounts.
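
For concreteness, this is the standard construction being gestured at (textbook definitions, nothing specific to this thread): with a prefix-free universal machine U,

```latex
% Solomonoff semimeasure for a prefix-free universal machine U; U(p) = x* means
% program p outputs a string beginning with x.
M(x) \;=\; \sum_{p \,:\, U(p) = x*} 2^{-|p|}.
% Because the valid programs form a prefix-free code, Kraft's inequality gives
\sum_{p} 2^{-|p|} \;\le\; 1,
% so M is a well-defined semimeasure: conditional probabilities M(xy)/M(x) and
% expected utilities computed from them are finite, without any time penalty.
```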

Comment by interstice on Any evidence or reason to expect a multiverse / Everett branches? · 2024-04-10T17:01:19.328Z · LW · GW

certain aspects of MWI theory (like how you actually get the Born probabilities) are unresolved

You can add the Born probabilities in with minimal additional Kolmogorov complexity: simply stipulate that worlds with a given amplitude have probabilities given by the Born rule (this does admittedly weaken the "randomness emerges from indexical uncertainty" aspect...).
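
Concretely (my notation, just a sketch of the stipulation): given a set of decohered branches with amplitudes \alpha_i, the added rule is

```latex
% Born-rule stipulation over decohered branches; the program specifying this
% rule adds only O(1) bits on top of the program computing the wavefunction dynamics.
\Pr(\text{branch } i) \;=\; \frac{|\alpha_i|^2}{\sum_j |\alpha_j|^2}.
```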

Comment by interstice on On Complexity Science · 2024-04-05T02:39:21.048Z · LW · GW

Having briefly looked into complexity science myself, I came to similar conclusions -- mostly a random hodgepodge of various fields in a sort of impressionistic tableau, plus an unsystematic attempt at studying questions of agency and self-reference.

Comment by interstice on Matthew Barnett's Shortform · 2024-03-30T16:39:20.426Z · LW · GW

That is, I think humans generally (though not always) attempt to avoid death when credibly threatened, even when they're involved in a secret conspiracy to overthrow the government.

This seems like a misleading comparison, because human conspiracies usually don't try to convince the government that they're perfectly obedient slaves even unto death; everyone already knows that humans aren't actually like that. If we imagine a human conspiracy where there is some sort of widespread deception like this, it seems more plausible that they would try to continue being deceptive even in the face of death (like, maybe, uh, some group of people pretending to be fervently religious and to have no fear of death, or something).

Comment by interstice on My Interview With Cade Metz on His Reporting About Slate Star Codex · 2024-03-27T23:50:31.078Z · LW · GW

Statements can be epistemically legit or not. Statements have content; they aren't just levers for influencing the world.

Comment by interstice on My Interview With Cade Metz on His Reporting About Slate Star Codex · 2024-03-27T23:03:12.263Z · LW · GW

I mean it's epistemically legitimate for him to bring them up. They are in fact evidence that Scott holds hereditarian views.

Now, regarding the "overall" legitimacy of calling attention to someone's controversial views: it probably does have a chilling effect, and it threatens Scott's livelihood, which I don't like. But I think that continuing to be mad at Metz for his sloppy inference doesn't really make sense here. Sure, maybe at the time it was tactically smart to feign outrage that Metz would dare to imply Scott was a hereditarian, but now that we have direct documentation of Scott admitting exactly that, it's just silly. If you're still worried about Scott getting canceled (seems unlikely at this stage, tbh), it's better to just move on and stop drawing attention to the issue by bringing it up over and over.

Comment by interstice on My Interview With Cade Metz on His Reporting About Slate Star Codex · 2024-03-27T16:37:11.403Z · LW · GW

But was Metz acting as a "prosecutor" here? He didn't say "this proves Scott is a hereditarian" or whatever; he just brings up two instances where Scott said things in a way that might lead people to make certain inferences... correct inferences, as it turns out. Like yeah, maybe it would have been more epistemically scrupulous if he had said "these articles represent two instances of a larger pattern which is strong Bayesian evidence, even though they are not highly convincing on their own," but I hardly think this warrants remaining outraged years after the fact.

Comment by interstice on My Interview With Cade Metz on His Reporting About Slate Star Codex · 2024-03-27T05:39:41.192Z · LW · GW

How is Metz's behavior here worse than Scott's own behavior defending himself? After all, Metz doesn't explicitly say that Scott believes in racial IQ differences; he just mentions Scott's endorsement of Murray in one post and his account of Murray's beliefs in another, in a way that suggests a connection. Similarly, Scott doesn't explicitly deny believing in racial IQ differences in his response post; he just lays out the context of the posts in a way that suggests that the accusation is baseless. (Perhaps you think Scott's behavior is locally better? But he's following a strategy of covertly communicating his true beliefs while making any individual instance look plausibly deniable, so he's optimizing against "locally good behavior" tracking truth here, and it seems perverse to give him credit for this.)

Comment by interstice on My Interview With Cade Metz on His Reporting About Slate Star Codex · 2024-03-26T22:28:52.760Z · LW · GW

"For my friends, charitability -- for my enemies, Bayes Rule"

Comment by interstice on My Interview With Cade Metz on His Reporting About Slate Star Codex · 2024-03-26T22:12:16.708Z · LW · GW

ZMD: Looking at “Silicon Valley’s Safe Space”, I don’t think it was a good article. Specifically, you wrote,

In one post, [Alexander] aligned himself with Charles Murray, who proposed a link between race and I.Q. in “The Bell Curve.” In another, he pointed out that Mr. Murray believes Black people “are genetically less intelligent than white people.”

End quote. So, the problem with this is that the specific post in which Alexander aligned himself with Murray was not talking about race. It was specifically talking about whether specific programs to alleviate poverty will actually work or not.

So on the one hand, this particular paragraph does seem like it's misleadingly implying Scott was endorsing views on race/IQ similar to Murray's, even though, based on the quoted passages alone, there is little reason to think that. On the other hand, it's totally true that Scott was running a strategy of bringing up or "arguing" with hereditarians with the goal of broadly promoting those views in the rationalist community, without directly being seen to endorse them. So I think it's actually pretty legitimate for Metz to bring up incidents like this, or the Xenosystems link in the blogroll. Scott was basically communicating his views in a plausibly deniable way, by saying many little things which are more likely if he was a secret hereditarian but any individual instance of which is not so damning. So I feel it's total BS to then complain about how tenuous the individual instances Metz brought up are -- he's using them as examples of a larger trend, which is inevitable given the strategy Scott was using.

(This is not to say that I think Scott should be "canceled" for these views or whatever, not at all, but at this stage the threat of cancelation seems to have passed and we can at least be honest about what actually happened)

Comment by interstice on What does "autodidact" mean? · 2024-03-24T03:07:41.677Z · LW · GW

This seems significantly overstated. Most subjects are not taught in school to most people, but they don't thereby degrade into nonsense.