Posts

How rapidly are GPUs improving in price performance? 2018-11-25T19:54:10.830Z
Asymptotic Decision Theory 2016-10-15T02:42:44.000Z
A new proposal for logical counterfactuals 2016-07-07T22:15:37.000Z

Comments

Comment by gallabytes on The Obliqueness Thesis · 2024-09-20T02:00:43.759Z · LW · GW

the track record of people trying to broadly ensure that humanity continues to be in control of the future

What track record?

Comment by gallabytes on Ironing Out the Squiggles · 2024-05-02T18:44:24.601Z · LW · GW

But do they also generalize out of training distribution more similarly? If so, why?

Neither of them is going to generalize very well out of distribution, and to the extent they do, it will be via looking for features that were present in-distribution. The old adage applies: "to imagine 10-dimensional space, first imagine 3-space, then say 10 really hard".

My guess is that basically every learning system which tractably approximates Bayesian updating on noisy high-dimensional data is going to end up with roughly Gaussian OOD behavior. There have been some experiments where (non-adversarially-chosen) OOD samples quickly degrade to a uniform prior, but I don't think that's been super robustly studied.

The way humans generalize OOD is not that our visual systems are natively equipped to generalize to contexts they have no way of knowing about (that would be a true violation of no-free-lunch theorems), but that through linguistic reflection & deliberate experimentation some of us can sometimes get a handle on the new domain, and then we use language to communicate that handle to others, who come up with things we didn't, etc. OOD generalization is a process at the (sub)cultural & whole-nervous-system level, not something that individual chunks of the brain can do well on their own.

This is also confusing/concerning for me. Why would it be necessary or helpful to have such a large dataset to align the shape/texture bias with humans?

Well it might not be, but you need large datasets to motivate studying large models, as their performance on small datasets like imagenet is often only marginally better.

A 20b param ViT trained on 10m images at 224x224x3 is approximately 1 param for every 75 subpixels, and 2000 params for every image. Classification is an easy enough objective that it very likely just overfits, unless you regularize it a ton, at which point it might still have the expected shape bias at great expense. Training a 20b param model is expensive; I don't think anyone has ever spent that much on a mere imagenet classifier, and public datasets >10x the size of imagenet with any kind of labels only started getting collected in 2021.

To motivate this a bit, humans don't see in frames but let's pretend we do. At 60fps for 12h/day for 10 years, that's nearly 9.5 billion frames. Imagenet is 10 million images. Our visual cortex contains somewhere around 5 billion neurons, which is around 50 trillion parameters (at 1 param / synapse & 10k synapses / neuron, which is a number I remember being reasonable for the whole brain but vision might be 1 or 2 OOM special in either direction).
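To make the arithmetic concrete, here's the back-of-envelope in Python (the round numbers are the same rough assumptions as above):

```python
params = 20e9                              # 20b-param ViT
images = 10e6                              # imagenet-scale dataset
subpixels = images * 224 * 224 * 3

print(subpixels / params)                  # ~75 subpixels per param
print(params / images)                     # 2000 params per image

frames = 60 * 3600 * 12 * 365 * 10         # 60fps, 12h/day, 10 years: ~9.5e9 frames
neurons = 5e9                              # rough visual cortex count
synapses = neurons * 1e4                   # ~1 param per synapse: ~5e13 "params"
print(frames / images, synapses / params)  # ~950x the data, ~2500x the params
```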

Comment by gallabytes on Ironing Out the Squiggles · 2024-05-02T00:24:55.783Z · LW · GW

adversarial examples definitely still exist but they'll look less weird to you because of the shape bias.

anyway, this is a random visual model: raw perception without any kind of reflective error correction loop. I'm not sure what you expect it to do differently, or what conclusion you're trying to draw from how it does behave? the inductive bias doesn't precisely match human vision, so it makes different mistakes, but as you scale both architectures they become more similar. that's exactly what you'd expect from any approximately Bayesian setup.

the shape bias increasing with scale was definitely conjectured long before it was tested. ML scaling is very recent though, and this experiment was quite expensive. Remember when GPT-2 came out and everyone thought that was a big model? This is an image classifier which is over 10x larger than that. They needed a giant image classification dataset which I don't think even existed 5 years ago.

Comment by gallabytes on Ironing Out the Squiggles · 2024-05-01T05:07:05.442Z · LW · GW

Scale basically solves this too, with some other additions (not part of any released version of MJ yet) really putting a nail in the coffin, but I can't say too much here w/o divulging trade secrets. I can say that I'm surprised to hear that SD3 is still so much worse than Dalle3 and Ideogram on that front - I wonder if they just didn't train it long enough?

Comment by gallabytes on Ironing Out the Squiggles · 2024-05-01T05:02:27.797Z · LW · GW

They put too much emphasis on high frequency features, suggesting a different inductive bias from humans.

This was found to not be true at scale! It doesn't even feel that true w/ weaker vision transformers; it seems specific to convnets. I bet smaller animal brains have similar problems.

Comment by gallabytes on On Anthropic’s Sleeper Agents Paper · 2024-01-18T18:28:36.367Z · LW · GW

Order matters more at smaller scales - if you're training a small model on a lot of data and you sample in a sufficiently nonrandom manner, you should expect catastrophic forgetting to kick in eventually, especially if you use weight decay.
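A minimal sketch of what I mean (my toy construction, not anything from the paper): a tiny logistic-regression "model" trained with SGD plus weight decay on task A and then on task B forgets task A, and the only thing that changed is the data ordering.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_task(direction):
    # binary labels depend only on the given input direction
    X = rng.normal(size=(2000, 2))
    y = (X @ direction > 0).astype(float)
    return X, y

def accuracy(w, X, y):
    return np.mean((X @ w > 0) == y)

def sgd(w, X, y, lr=0.1, weight_decay=0.01, epochs=20):
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = 1 / (1 + np.exp(-xi @ w))                    # logistic prediction
            w = w - lr * ((p - yi) * xi + weight_decay * w)  # gradient step + decay
    return w

task_a = make_task(np.array([1.0, 0.0]))
task_b = make_task(np.array([0.0, 1.0]))

w = sgd(np.zeros(2), *task_a)
print("task A accuracy after training on A:", accuracy(w, *task_a))  # ~1.0
w = sgd(w, *task_b)
print("task A accuracy after training on B:", accuracy(w, *task_a))  # drops toward chance
```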

Comment by gallabytes on A case for AI alignment being difficult · 2024-01-10T23:07:23.974Z · LW · GW

I think I can just tell a lot of stuff wrt human values! How do you think children infer them? I think in order for human values to not be viable to point to extensionally (ie by looking at a bunch of examples) you have to make the case that they're much more built-in to the human brain than seems appropriate for a species that can produce both Jains and (Genghis Khan era) Mongols.

 

I'd also note that "incentivize" is probably giving a lot of the game away here - my guess is you can just pull them out much more directly by gathering a large dataset of human preferences and predicting judgements.

Comment by gallabytes on A case for AI alignment being difficult · 2024-01-10T22:47:50.258Z · LW · GW

Why do you expect it to be hard to specify given a model that knows the information you're looking for? In general the core lesson of unsupervised learning is that often the best way to get pointers to something you have a limited specification for is to learn some other task that necessarily includes it, then specialize to that subtask. Why should values be any different? Broadly, why should values be harder to get good pointers to than much more complicated real-world tasks?

Comment by gallabytes on AI #44: Copyright Confrontation · 2024-01-09T05:56:38.692Z · LW · GW

yeah I basically think you need to construct the semantic space for this to work, and haven't seen much work on that front from language modeling researchers.

drives me kinda nuts because I don't think it would actually be that hard to do, and the benefits might be pretty substantial.

Comment by gallabytes on Quick takes on "AI is easy to control" · 2023-12-03T06:28:49.604Z · LW · GW

Can you give an example of a theoretical argument of the sort you'd find convincing? Can be about any X caring about any Y.

Comment by gallabytes on Response to Quintin Pope's Evolution Provides No Evidence For the Sharp Left Turn · 2023-10-08T22:27:26.414Z · LW · GW

On the impossible-to-you world: This doesn’t seem so weird or impossible to me? And I think I can tell a pretty easy cultural story slash write an alternative universe novel where we honor those who maximize genetic fitness and all that, and have for a long time—and that this could help explain why civilization and our intelligence developed so damn slowly and all that. Although to truly make the full evidential point that world then has to be weirder still where humans are much more reluctant to mode shift in various ways. It’s also possible this points to you having already accepted from other places the evidence I think evolution introduces, so you’re confused why people keep citing it as evidence.

The ability to write fiction in a world does not demonstrate its plausibility. Beware generalizing from fictional fictional evidence!

The claim that such a world is impossible is a claim that, were you to try to write a fictional version of it, you would run into major holes in the world that you would have to either ignore or paper over with further unrealistic assumptions.

Comment by gallabytes on Response to Quintin Pope's Evolution Provides No Evidence For the Sharp Left Turn · 2023-10-08T22:23:21.405Z · LW · GW

In case it is not clear: My expectation is that sufficiently large capabilities/intelligence/affordances advances inherently break our desired alignment properties under all known techniques.

Nearly every piece of empirical evidence I've seen contradicts this - more capable systems are generally easier to work with in almost every way, and the techniques that worked on less capable versions straightforwardly carry over, and in fact usually work better than they did on the less intelligent systems.

Comment by gallabytes on Response to Quintin Pope's Evolution Provides No Evidence For the Sharp Left Turn · 2023-10-07T23:39:49.828Z · LW · GW

When I explain my counterargument to pattern 1 to people in person, they will very often try to "rescue" evolution as a worthwhile analogy for thinking about AI development. E.g., they'll change the analogy so it's the programmers who are in a role comparable to evolution, rather than SGD.

In general one should not try to rescue intuitions, and the frequency of doing this is a sign of serious cognitive distortions. You should only try to rescue intuitions when they have a clear and validated predictive or pragmatic track record.

The reason for this is very simple - most intuitions or predictions one could make are wrong, and you need a lot of positive evidence to privilege any particular hypotheses re how or what to think. In the absence of evidence, you should stop relying on an intuition, or at least hold it very lightly.

Comment by gallabytes on Evaluating the historical value misspecification argument · 2023-10-06T16:37:26.299Z · LW · GW

The obvious question here is to what degree you need new techniques, vs. merely training new models with the same techniques as you scale up current approaches.

 

One of the virtues of the deep learning paradigm is that you can usually test things at small scale (where the models are not and will never be especially smart) and there's a smooth range of scaling regimes in between where things tend to generalize.

 

If you need fundamentally different techniques at different scales, and the large-scale techniques do not work at intermediate and small scales, then you might have a problem. If the large scales need the same techniques as the medium or small scales, then engineering continues to be tractable even as algorithmic advances obsolete old approaches.

Comment by gallabytes on Evaluating the historical value misspecification argument · 2023-10-06T04:20:53.345Z · LW · GW

It's more like calling a human who's as smart as you are, directly plugged into your brain, and in fact reusing your world model and train of thought to understand the implications of your decision. That's a huge step up from calling a real human over the phone!

The reason the real human proposal doesn't work is that

  1. the humans you call will lack context on your decision
  2. they won't even be able to receive all the context
  3. they're dumber and slower than you, so even if you really could write out your entire chain of thought and intuitions, consulting them for every decision would be impractical

Note that none of these considerations apply to integrated language models!

Comment by gallabytes on Evaluating the historical value misspecification argument · 2023-10-05T21:36:04.052Z · LW · GW

To pick a toy example, you can use text as a bottleneck to force systems to "think out loud" in a way which will be very directly interpretable by a human reader, and because language understanding is so rich this will actually be competitive with other approaches and often superior.

I'm sure you can come up with more ways that the existence of software that understands language and does ~nothing else makes getting computers to do what you mean easier than if software did not understand language. Please think about the problem for 5 minutes. Use a clock.
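For concreteness, the text-bottleneck pattern looks something like this (a sketch; call_model is a hypothetical stand-in for whatever LLM client you use, not a real API):

```python
def call_model(prompt: str) -> str:
    raise NotImplementedError("plug in your LLM client here")

def answer_with_visible_reasoning(question: str) -> tuple[str, str]:
    # stage 1: produce the reasoning as plain text, the artifact a human can read directly
    reasoning = call_model(f"Think step by step about: {question}")
    # stage 2: the answer is only allowed to depend on the question and that text
    answer = call_model(f"Question: {question}\nReasoning: {reasoning}\nAnswer:")
    return reasoning, answer
```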

Comment by gallabytes on Evaluating the historical value misspecification argument · 2023-10-05T21:09:28.518Z · LW · GW

ML models in the current paradigm do not seem to behave coherently OOD, but for nearly any metric of "overall capability" and alignment, I'd bet that the capability metric decays faster than the alignment metric as we go further OOD.

 

See https://arxiv.org/abs/2310.00873 for an example of the kinds of things you'd expect to see when taking a neural network OOD. It's not that the model does some insane path-dependent thing; it collapses to entropy. You end up seeing a max-entropy distribution over outputs, not goals. This is a good example of the kind of thing that's obvious to people who've done real work with ml but very counter to classic LessWrong intuitions, and isn't learnable by implementing mingpt.
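If you want to see the effect yourself, the measurement is roughly this (a sketch of the kind of check the linked paper runs; model here is a placeholder for any trained torch classifier returning logits):

```python
import torch
import torch.nn.functional as F

def mean_entropy(model, x):
    logits = model(x)
    probs = F.softmax(logits, dim=-1)
    logp = F.log_softmax(logits, dim=-1)
    return -(probs * logp).sum(dim=-1).mean().item()

def ood_entropy_curve(model, x_in, scales=(0, 1, 2, 4, 8)):
    # push inputs progressively further off-distribution with noise,
    # and compare against the max-entropy baseline log(num_classes)
    return [mean_entropy(model, x_in + s * torch.randn_like(x_in)) for s in scales]
```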

Comment by gallabytes on Evaluating the historical value misspecification argument · 2023-10-05T21:01:40.862Z · LW · GW

Historically you very clearly thought that a major part of the problem is that AIs would not understand human concepts and preferences until after or possibly very slightly before achieving superintelligence. This is not how it seems to have gone.

 

Everyone agrees that you assumed superintelligence would understand everything humans understand and more. The dispute is entirely about the things that you encounter before superintelligence. In general it seems like the world turned out much more gradual than you expected, and there's information to be found in which capabilities emerged sooner in the process.

Comment by gallabytes on Evaluating the historical value misspecification argument · 2023-10-05T19:39:15.823Z · LW · GW

We should clearly care if their arguments were wrong in the past, especially if they were systematically wrong in a particular direction, as it's evidence about how much attention we should pay to their arguments now. At some point if someone is wrong enough for long enough you should discard their entire paradigm and cease to privilege hypotheses they suggest, until they reacquire credibility through some other means e.g. a postmortem explaining what they got wrong and what they learned, or some unambiguous demonstration of insight into the domain they're talking about.

Comment by gallabytes on Introducing the Center for AI Policy (& we're hiring!) · 2023-08-30T05:38:47.999Z · LW · GW

Sure, a stop button doesn't have the issues I described, as long as it's used rarely enough. If it's too commonplace, then you should expect effects on safety similar to eg CEQA's effects on infrastructure innovation. Major projects can only take on so much risk, and the more non-technical risk you add, the less technical novelty will fit into that budget.

This line from the proposed "Responsible AI Act" seems to go much further than a stop button though?

Require advanced AI developers to apply for a license & follow safety standards.

Where do these safety standards come from? How are they enforced?

These same questions apply to stop buttons. Who has the stop button? Random bureaucrats? Congress? Anyone who can file a lawsuit?

Comment by gallabytes on Introducing the Center for AI Policy (& we're hiring!) · 2023-08-30T02:26:57.496Z · LW · GW

It depends on the form regulation takes. The proposal here requires approval of training runs over a certain scale, which means everything is banned at that scale, including safety techniques, with exceptions decided by the approval process.

Comment by gallabytes on Introducing the Center for AI Policy (& we're hiring!) · 2023-08-30T00:51:04.348Z · LW · GW

What would your plan be to ensure that this kind of regulation actually net-improves safety? The null hypothesis for something like this is that you'll empower a bunch of bureaucrats to push rules that are at least 6 months out of date under conditions of total national emergency where everyone is watching, and years to decades out of date otherwise.

This could be catastrophic! If the only approved safety techniques are as out of date as the only approved medical techniques, AI regulation seems like it should vastly increase P(doom) at the point that TAI is developed.

Comment by gallabytes on A review of Principia Qualia · 2023-07-13T23:58:40.338Z · LW · GW

Which brings me to my main disagreement with bottom-up approaches: they assume we already have a physics theory in hand, and are trying to locate consciousness within that theory. Yet, we needed conscious observations, and at least some preliminary theory of consciousness, to even get to a low-level physics theory in the first place. Scientific observations are a subset of conscious experience, and the core task of science is to predict scientific observations; this requires pumping a type of conscious experience out of a physical theory, which requires at least some preliminary theory of consciousness. Anthropics makes this clear, as theories such as SSA and SIA require identifying observers who are in our reference class.

 

 

There's something a bit off about this that's hard to quite put my finger on. To gesture vaguely at it, it's not obvious to me that this problem ought to have a solution. At the end of the day, we're thinking meat, and we think because thinking makes the meat better at becoming more meat. We have experiences correlated with our environments because agents whose experiences aren't correlated with their environments don't arise from chemical soup without cause.

 

My guess is that if we want to understand "consciousness", the best approach would be functionalist. What work is the inner listener doing? It has to be doing something, or it wouldn't be there.

 

Do you feel you have an angle on that question? Would be very curious to hear more if so.

Comment by gallabytes on AI #19: Hofstadter, Sutskever, Leike · 2023-07-07T18:19:48.727Z · LW · GW

This seems especially unlikely to work given it only gives a probability. You know what you call someone whose superintelligent AI does what they want 95% of the time? Dead.

 

if you can get it to do what you want even 51% of the time, and make that 51% independent on each sampling (it isn't, so in practice you'd like some margin, but 95% is actually a lot of margin!), you can get arbitrarily good compliance by creating AI committees and taking a majority vote.
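The arithmetic behind that (a quick sketch; the committee sizes are arbitrary):

```python
from math import comb

def majority_ok(p: float, n: int) -> float:
    # probability a majority of n independent members, each right with prob p, is right
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n // 2 + 1, n + 1))

for n in (1, 11, 101, 1001):
    print(n, round(majority_ok(0.51, n), 3), round(majority_ok(0.95, n), 6))
```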

Comment by gallabytes on Douglas Hofstadter changes his mind on Deep Learning & AI risk (June 2023)? · 2023-07-03T23:00:18.163Z · LW · GW

that paper is one of many claiming some linear attention mechanism that's as good as full self attention. in practice they're all sufficiently worse that nobody uses them except the original authors in the original paper, and usually not even the original authors in subsequent papers.

the one exception is flash attention, which is basically just a very fancy fused kernel for the same computation (actually the same, up to numerical error, unlike all these "linear attention" papers).
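for reference, the computation all of these papers are approximating is just this (a plain softmax-attention sketch, batch and heads omitted); the n x n score matrix is the quadratic cost linear-attention methods try to avoid, and that flash attention computes exactly in fused tiles:

```python
import torch

def attention(q, k, v):                        # q, k, v: (seq_len, d)
    scores = q @ k.T / k.shape[-1] ** 0.5      # (seq_len, seq_len), the O(n^2) part
    return torch.softmax(scores, dim=-1) @ v   # (seq_len, d)
```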

Comment by gallabytes on I No Longer Believe Intelligence to be "Magical" · 2022-06-17T16:25:44.533Z · LW · GW

4, 5, and 6 are not separate steps - when you only have 1 example, the bits to find an input that generates your output are not distinct from the bits specifying the program that computes output from input.

Comment by gallabytes on I No Longer Believe Intelligence to be "Magical" · 2022-06-17T16:08:16.347Z · LW · GW

Yeah my guess is that you almost certainly fail on step 4 - an example of a really compact ray tracer looks like it fits in 64 bytes. You will not do search over all 64 byte programs. Even if you could evaluate 1 of them per atom per nanosecond using every atom in the universe for 100 billion years, you'd only get 44.6 bytes of search.

Let's go with something more modest and say you get to use every atom in the milky way for 100 years, and it takes about 1 million atom-seconds to check a single program. This gets you about 30 bytes of search.
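The back-of-envelope, if you want to check it (the atom counts are rough figures I'm assuming, not precise):

```python
import math

def bytes_of_search(num_programs_checked: float) -> float:
    # exhaustively checking N programs covers log2(N) bits of program space
    return math.log2(num_programs_checked) / 8

SECONDS_PER_YEAR = 3.15e7
ATOMS_IN_UNIVERSE = 1e80      # rough
ATOMS_IN_MILKY_WAY = 1e68     # rough

# 1 program per atom per nanosecond, every atom in the universe, 100 billion years
print(bytes_of_search(ATOMS_IN_UNIVERSE * 100e9 * SECONDS_PER_YEAR * 1e9))  # ~44.6

# every atom in the Milky Way for 100 years, 1e6 atom-seconds per check
print(bytes_of_search(ATOMS_IN_MILKY_WAY * 100 * SECONDS_PER_YEAR / 1e6))   # ~29.7
```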

Priors over programs will get you some of the way there, but usually the structure of those priors will also lead to much much longer encodings of a ray tracer. You would also need a much more general / higher quality ray tracer (and thus more bits!) as well as an actually quite detailed specification of the "scene" you input to that ray tracer (which is probably way more bits than the original png).

The reason humans invented ray tracers with so much less compute is that we got ray tracers from physics and way way way more bits of evidence, not the other way around.

Comment by gallabytes on I No Longer Believe Intelligence to be "Magical" · 2022-06-15T13:24:10.529Z · LW · GW

This response is totally absurd. Your human priors are doing an insane amount of work here - you're generating an argument for the conclusion, not figuring out how you would privilege those hypotheses in the first place.

See that the format describes something like a grid of cells, where each cell has three [something] values.

This seems maybe possible for png (though it could be hard - the pixel data will likely be stored as a single contiguous array, not a bunch of individual rows, and it will be DEFLATE-compressed rather than stored raw, which you might be able to figure out but very well might not - and if it's jpg compressed this is even further from the truth).

Come up with the hypothesis that the grid represents a 2d projection of a 3d space (this does not feel like a large jump to me given that ray tracers exist and are not complicated, but I can go into more detail on this step if you’d like).

It's a single frame, and you're not supposed to have seen physics before. How did ray tracers enter into this? How do you know your world is 3D, not actually 2D? Where is the hypothesis that it's a projection coming from, other than assuming your conclusion?

Determine the shape of the lens by looking at edges and how they are distorted.

How do you know what the shapes of the edges should be?

If the apple is out in sunlight, I expect that between the three RGB channels and the rainbows generated by chromatic aberration would be sufficient to determine that the intensity-by-frequency of light approximately matches the blackbody radiation curves (though again, not so much with that name as just “these equations seem to be a good fit”).

What's "sunlight"? What's "blackbody radiation"? How would a single array of numbers of unknown provenance without any other context cause you to invent these concepts?

Comment by gallabytes on Slow motion videos as AI risk intuition pumps · 2022-06-14T22:12:18.269Z · LW · GW

10 million times faster is really a lot - on modern hardware, running SOTA object segmentation models at even 60fps is quite hard, and those are usually much much smaller than the kinds of AIs we would think about in the context of AI risk.

But 100x faster is totally plausible (especially w/ 100x the energy consumption!), and I think the argument still mostly works at that much more conservative speedup.

Comment by gallabytes on Salvage Epistemology · 2022-04-30T17:16:33.084Z · LW · GW

for me it mostly felt like I and my group of closest friends were at the center of the world, with the last hope for the future depending on our ability to hold to principle. there was a lot of prophecy of varying quality, and a lot of importance placed suddenly on people we barely knew, then rapidly withdrawn when those people weren't up for being as crazy as we were.

Comment by gallabytes on Salvage Epistemology · 2022-04-30T05:25:27.834Z · LW · GW

This seems roughly on point, but is missing a crucial aspect - whether or not you're currently a hyper-analytical programmer is actually a state of mind which can change. Thinking you're on one side when actually you've flipped can lead to some bad times, for you and others.

Comment by gallabytes on MIRI announces new "Death With Dignity" strategy · 2022-04-06T03:31:25.650Z · LW · GW

I don't know how everyone else on LessWrong feels but I at least am getting really tired of you smugly dismissing others' attempts at moral reductionism wrt qualia by claiming deep philosophical insight you've given outside observers very little reason to believe you have. In particular, I suspect if you'd spent half the energy on writing up these insights that you've spent using the claim to them as a cudgel you would have at least published enough of a teaser for your claims to be credible.

Comment by gallabytes on Artificial Wordcels · 2022-02-25T21:53:56.255Z · LW · GW

I disagree that GPT’s job, the one that GPT-∞ is infinitely good at, is answering text-based questions correctly. It’s the job we may wish it had, but it’s not, because that’s not the job its boss is making it do. GPT’s job is to answer text-based questions in a way that would be judged as correct by humans or by previously-written human text. If no humans, individually or collectively, know how to align AI, neither would GPT-∞ that’s trained on human writing and scored on accuracy by human judges.

This is actually also an incorrect statement of GPT's job. GPT's job is to predict the next token, matching the distribution its corpus was sampled from. GPT-∞ would give you, uh, probably, with that exact prompt, a blog post about a paper which claims that it solves the alignment problem. It would be on average exactly the same quality as other articles from the internet containing that text.
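To spell out what that objective looks like (the standard next-token cross-entropy, sketched; logits here would come from any autoregressive LM):

```python
import torch.nn.functional as F

def lm_loss(logits, tokens):
    # logits: (seq_len, vocab), tokens: (seq_len,)
    # the model is scored on matching the corpus distribution of the next token,
    # not on whether what it says is true or useful
    return F.cross_entropy(logits[:-1], tokens[1:])
```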

Comment by gallabytes on Occupational Infohazards · 2021-12-24T00:12:38.075Z · LW · GW

I think this is a persistent difference between us but isn't especially relevant to the difference in outcomes here.

I'd more guess that the reason you had psychoses and I didn't had to do with you having anxieties about being irredeemably bad that I basically didn't at the time. Seems like this would be correlated with your feeling like you grew up in a Shin Sekai Yori world?

Comment by gallabytes on Occupational Infohazards · 2021-12-23T16:16:33.140Z · LW · GW

hmm... this could have come down to spending time in different parts of MIRI? I mostly worked on the "world's last decent logic department" stuff - maybe the more "global strategic" aspects of MIRI work, at least the parts behind closed doors I wasn't allowed through, were more toxic? Still feels kinda unlikely but I'm missing info there so it's just a hunch.

Comment by gallabytes on Occupational Infohazards · 2021-12-23T16:06:49.579Z · LW · GW

By latent tendency I don't mean family history, though it's obviously correlated. I claim that there's this fact of the matter about Jess' personality, biology, etc, which is that it's easier for her to have a psychotic episode than for most people. This seems not plausibly controversial.

I'm not claiming a gears-level model here. When you see that someone has a pattern of <problem> that others in very similar situations did not have, you should assume some of the causality is located in the person, even if you don't know how.

Comment by gallabytes on Occupational Infohazards · 2021-12-22T23:31:12.956Z · LW · GW

Verbal coherence level seems like a weird place to locate the disagreement - Jessica maintained approximate verbal coherence (though with increasing difficulty) through most of her episode. I'd say even in October 2017, she was more verbally coherent than e.g. the average hippie or Catholic, because she was trying at all.

The most striking feature was actually her ability to take care of herself rapidly degrading, as evidenced by e.g. getting lost almost immediately after leaving her home, wandering for several miles, then calling me for help and having difficulty figuring out where she was - IIRC, took a few minutes to find cross streets. When I found her she was shuffling around in a daze, her skin looked like she'd been scratching it much more than usual, clothes were awkwardly hung on her body, etc. This was on either the second or third day, and things got almost monotonically worse as the days progressed.

The obvious cause for concern was "rapid descent in presentation from normal adult to homeless junkie". Before that happened, it was not at all obvious this was an emergency. Who hasn't been kept up all night by anxiety after a particularly stressful day in a stressful year?

I think the focus on verbal coherence is politically convenient for both of you. It makes this case into an interesting battleground for competing ideologies, where they can both try to create blame for a bad thing.

Scott wants to do this because AFAICT his agenda is to marginalize discussion of concepts from woo / psychedelia / etc, and would like to claim that Jess' interest in those was a clear emergency. Jess wants to do this because she would like to claim that the ideas at MIRI directly drove her crazy.

I worked there too, and left at the same time for approximately the same reasons. We talked about it extensively at the time. It's not plausible that it was even in-frame that considering details of S-risks in the vein of Unsong's Broadcast would possibly be helpful for alignment research. Basilisk-baiting like that would generally have been frowned upon, but mostly just wouldn't have come up.

The obvious sources of madness here were

  1. The extreme burden of responsibility for the far future (combined with the position that MIRI was uniquely essential to this), and encouragement to take this responsibility seriously, is obviously stressful.
  2. The local political environment at the time was a mess - splinters were forming, paranoia was widespread. A bunch of people we respected and worked with had decided the world was going to end, very soon, uncomfortably soon, and they were making it extremely difficult for us to check their work. This uncertainty was, uh, stressful.
  3. Psychedelics very obviously induce states closer-than-usual to psychosis. This is what's great about them - they let you dip a toe into the psychotic world and be back the next day, so you can take some of the insights with you. Also, this makes them a risk for inducing psychotic episodes. It's not a coincidence that every episode I remember Jess having in 2017 and 2018 was a direct result of a trip-gone-long.
  4. Latent tendency towards psychosis

Critically, I don't think any of these factors would have been sufficient on their own. The direct content of MIRI's research, and the woo stuff, both seem like total red herrings in comparison to any of these 4 issues.

Comment by gallabytes on Occupational Infohazards · 2021-12-22T23:28:52.578Z · LW · GW
Comment by gallabytes on Occupational Infohazards · 2021-12-22T23:28:13.807Z · LW · GW
Comment by gallabytes on My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage) · 2021-10-25T01:08:03.396Z · LW · GW

There's this general problem of Rationalists splitting into factions and subcults with minor doctrinal differences, each composed of relatively elite members of The Community, each with a narrative of how they're the real rationalists and the rest are just posers and/or parasites. And, they're kinda right. Many of the rest are posers; we have a mop problem.

There’s just one problem. All of these groups are wrong. They are in fact only slightly more special than their rival groups think they are. In fact, the criticisms each group makes of the epistemics and practices of other groups are mostly on-point.

Once people have formed a political splinter group, almost anything they write will start to contain a subtle attempt to slip in the doctrine they're trying to push. With sufficient skill, you can make it hard to pin down where the frame is getting shoved in.

I have at one point or another been personally involved with a quite large fraction of the rationalist subcults. This has made the thread hard to read - I keep feeling a tug of motivation to jump into the fray, to take a position in the jostling for credibility or whatever it is being fought over here, which is then marred by the realization that this will win nothing. Local validity isn't a cure for wrong questions. The tug of political defensiveness that I feel, and that many commenters are probably also feeling, is sufficient to show that whatever question is being asked here is not the right one.

Seeing my friends behave this way hurts. The defensiveness has at this point gone far enough that it contains outright lies.

I'm stuck with a political alignment because of history and social ties. In terms of political camps, I've been part of the Vassarites since 2017. It's definitely a faction, and its members obviously know this at some level, despite their repeated insistence to me of the contrary over the years.

They’re right about a bunch of stuff, and wrong about a bunch of stuff. Plenty of people in the comments are looking to scapegoat them for trying to take ideas seriously instead of just chilling out and following somebody’s party line. That doesn’t really help anything. When I was in the camp, people doing that locked me in further, made outsiders seem more insane and unreachable, and made public disagreement with my camp feel dangerous in the context of a broader political game where the scapegoaters were more wrong than the Vassarites.

So I’m making a public declaration of not being part of that camp anymore, and leaving it there. I left earlier this year, and have spent much of the time since trying to reorient / understand why I had to leave. I still count them among my closest friends, but I don't want to be socially liable for the things they say. I don't want the implicit assumption to be that I'd agree with them or back them up.

I had to edit out several lines from this comment because they would just be used as ammunition against one side or another. The degree of truth-seeking in the discourse is low enough that any specific information has to be given very carefully so it can’t be immediately taken up as a weapon.

This game sucks and I want out.

Comment by gallabytes on Seemingly Popular Covid-19 Model is Obvious Nonsense · 2020-04-12T01:03:44.528Z · LW · GW

Even with that as the goal this model is useless - social distancing demonstrably does not lead to 0 new infections. Even Wuhan didn't manage that, and they were literally welding people's doors shut.

Comment by gallabytes on A War of Ants and Grasshoppers · 2019-05-23T02:16:35.192Z · LW · GW

...they're ants. That's just not how ants work. For a myriad of reasons. The whole point of the post is that there isn't necessarily local deliberative intent, just strategies filling ecological niches.

Comment by gallabytes on How rapidly are GPUs improving in price performance? · 2019-03-25T07:47:56.462Z · LW · GW

Of course, if you don’t like how an exponential curve fits the data, you can always change models—in this case, probably to a curve with 1 more free parameter (indicating a degree of slowdown of the exponential growth) or 2 more free parameters (to have 2 different exponentials stitched together at a specific point in time).

Oh that's actually a pretty good idea. Might redo some analysis we built on top of this model using that.
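Something like this, presumably (a sketch; these functional forms are just one way to implement the parameterizations you describe, not what we originally used):

```python
import numpy as np
from scipy.optimize import curve_fit

def exp_with_slowdown(t, a, b, c):
    # stretched exponential: c = 1 recovers a pure exponential, c < 1 slows growth (t >= 0)
    return a * np.exp(b * t**c)

def two_exponentials(t, a, b1, b2, t0):
    # two growth rates stitched continuously at time t0
    return a * np.exp(np.where(t < t0, b1 * t, b1 * t0 + b2 * (t - t0)))

# e.g. params, _ = curve_fit(exp_with_slowdown, t, y, p0=(y[0], 0.5, 1.0))
```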

Comment by gallabytes on Blackmailers are privateers in the war on hypocrisy · 2019-03-15T08:30:56.296Z · LW · GW

correct. edited to make this more obvious

Comment by gallabytes on Blackmailers are privateers in the war on hypocrisy · 2019-03-14T11:13:16.085Z · LW · GW

This argument would make much more sense in a just world. Information that should damage someone is very different from information that will damage someone. With blackmail, the information is optimized to maximize damage to the target, and I expect the tails to mostly come apart here. I don't see too many cases of blackmail replacing MeToo. When was the last time the National Enquirer was a valuable whistleblower?

EDIT: fixed some wording

Comment by gallabytes on How rapidly are GPUs improving in price performance? · 2018-12-14T07:02:22.837Z · LW · GW
When trying to fit an exponential curve, don't weight all the points equally

We didn't. We fit a line in log space, but weighted the points by sqrt(y). We did that because the data doesn't actually appear linear in log space.

This is what it looks like if we don't weight them. If you want to bite the bullet of this being a better fit, we can bet about it.
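Concretely, the two fits look like this (a sketch with placeholder arrays; np.polyfit's w multiplies each residual, so passing sqrt(y) weights the points as described):

```python
import numpy as np

def fit_loglinear(t, y, weights=None):
    # straight-line fit in log space; returns coefficients (slope, intercept)
    return np.polyfit(t, np.log(y), deg=1, w=weights)

# unweighted:          fit_loglinear(t, y)
# weighted by sqrt(y): fit_loglinear(t, y, weights=np.sqrt(y))
```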

Comment by gallabytes on Act of Charity · 2018-11-18T08:04:37.983Z · LW · GW
I'd optimize more for not making enemies or alienating people than for making people realize how bad the situation is or joining your cause.

Why isn't this a fully general argument for never rocking the boat?

Comment by gallabytes on Act of Charity · 2018-11-18T07:46:33.130Z · LW · GW
Based on my models (such as this one), the chance of AGI "by default" in the next 50 years is less than 15%, since the current rate of progress is not higher than the average rate since 1945, and if anything is lower (the insights model linked has a bias towards listing recent insights).

Both this comment and my other comment are way understating our beliefs about AGI. After talking to Jessica about it offline to clarify our real beliefs rather than just playing games with plausible deniability, my actual probability is between 0.5 and 1% in the next 50 years. Jessica can confirm that hers is pretty similar, but probably weighted towards 1%.

Comment by gallabytes on Act of Charity · 2018-11-18T01:44:33.973Z · LW · GW
I think I'm more skeptical than you are that it's possible to do much better (i.e., build functional information-processing institutions) before the world changes a lot for other reasons (e.g., superintelligent AIs are invented)

Where do you think the superintelligent AIs will come from? AFAICT it doesn't make sense to put more than 20% on AGI before massive international institutional collapse, even being fairly charitable to both AGI projects and prospective longevity of current institutions.

Comment by gallabytes on Where does ADT Go Wrong? · 2017-12-02T10:21:03.000Z · LW · GW

When considering an embedder , in universe , in response to which SADT picks policy , I would be tempted to apply the following coherence condition:

(all approximately of course)

I'm not sure if this would work though. This is definitely a necessary condition for reasonable counterfactuals, but not obviously sufficient.