Posts

Creating Interpretable Latent Spaces with Gradient Routing 2024-12-14T04:00:17.249Z
Gradient Routing: Masking Gradients to Localize Computation in Neural Networks 2024-12-06T22:19:26.717Z
I found >800 orthogonal "write code" steering vectors 2024-07-15T19:06:17.636Z
Building intuition with spaced repetition systems 2024-05-12T15:49:04.860Z
What I learned from doing Quiz Bowl 2024-05-09T21:05:38.299Z
Taking into account preferences of past selves 2024-04-15T13:15:10.545Z
From the outside, American schooling is weird 2024-03-28T22:45:30.485Z
XAI releases Grok base model 2024-03-18T00:47:47.987Z
g-w1's Shortform 2024-03-06T21:30:56.481Z
In set theory, everything is a set 2024-02-23T14:35:54.521Z
[linkpost] Self-Rewarding Language Models 2024-01-21T00:30:10.923Z
Concrete examples of doing agentic things? 2024-01-12T15:59:52.154Z
Google Gemini Announced 2023-12-06T16:14:07.192Z
The Puritans would one-box: evidential decision theory in the 17th century 2023-10-14T20:23:24.346Z
Noticing confusion in physics 2023-10-12T15:21:48.183Z
[Linkpost/Video] All The Times We Nearly Blew Up The World 2023-09-23T01:18:03.008Z
Separate the truth from your wishes 2023-08-23T00:52:59.107Z
Why it's necessary to shoot yourself in the foot 2023-07-11T21:17:48.907Z
What I Think About When I Think About History 2023-07-04T14:02:26.066Z
OpenAI: Our approach to AI safety 2023-04-05T20:26:46.581Z

Comments

Comment by Jacob G-W (g-w1) on Creating Interpretable Latent Spaces with Gradient Routing · 2024-12-14T23:23:18.466Z · LW · GW

I disagree that this is the same as just stitching together different autoencoders. Presumably the encoder has some shared computation before specializing at the encoding level. I also don't see how you could use 10 different autoencoders to classify an image from the encodings. I guess you could just look at the reconstruction losses, and the autoencoder with the lowest loss would probably correspond to the label, but that seems different from what I'm doing. However, I agree that this application is not useful. I shared it because I (and others) thought it was cool. It's not really practical at all. Hope this addresses your question :)

Comment by Jacob G-W (g-w1) on Creating Interpretable Latent Spaces with Gradient Routing · 2024-12-14T21:48:03.844Z · LW · GW

I didn't impose any structure in the objective/loss function relating to the label. The loss function is just the regular VAE loss. All I did was detach the gradients in some places. So it is a bit surprising to me that such a simple modification can cause the internals to specialize in this way. After I had seen gradient routing work in other experiments, I predicted that it would work here, but I don't think gradient routing working was a priori obvious (in the sense that I would have gotten zero new information by running the experiment because I predicted it with p=1).

Comment by Jacob G-W (g-w1) on g-w1's Shortform · 2024-12-14T04:00:53.970Z · LW · GW

At someone's suggestion, I've turned this into a top-level post.

Comment by Jacob G-W (g-w1) on g-w1's Shortform · 2024-12-13T15:27:05.746Z · LW · GW

Over the past few months, I helped develop Gradient Routing, a non-loss-based method to shape the internals of neural networks. After my team developed it, I realized that I could use the method to do something I have long wanted to do: make an autoencoder with an extremely interpretable latent space.

I created an MNIST autoencoder with a 10-dimensional latent space, with each dimension of the latent space corresponding to a different digit. Before I get into how I did it, feel free to play around with my demo here (it loads the model into the browser): https://jacobgw.com/gradient-routed-vae/.

In the demo, you can see how a random MNIST image encodes, and you can also directly play around with the encoding itself, creating different types of digits just by moving the sliders.

The reconstruction quality is not great, and I assume this is due to some combination of (1) using the simplest possible architecture of MLP layers and ReLU, (2) only allowing a 10-dimensional latent space, which could constrain the representation a lot, (3) not doing data augmentation, so it might not generalize that well, and (4) gradient routing targeting an unnatural internal representation, causing the autoencoder to fit the data less well. This was just supposed to be a fun proof-of-concept project, so I'm not too worried about the reconstruction quality.

How it works

My implementation of gradient routing is super simple and easy to add onto a variational autoencoder. During training, after I run the encoder, I just detach every dimension of the encoding except for the one corresponding to the label of the image:

def encode_and_mask(self, images: Tensor, labels: Tensor):
    # (Here F is torch.nn.functional and Tensor is torch.Tensor.)
    encoded_unmasked, zeta, mean_from_encoded, cov_diag_from_encoded = self.encode(images)
    # One-hot mask that is 1 at the latent dimension matching each image's label.
    mask_one_hot = F.one_hot(labels, num_classes=self.latent_size).float()
    # Keep gradients only through the labeled dimension; detach all the others.
    encoded = mask_one_hot * encoded_unmasked + (1 - mask_one_hot) * encoded_unmasked.detach()
    return encoded, zeta, mean_from_encoded, cov_diag_from_encoded

This causes each dimension of the latent space to "specialize" in representing its corresponding digit, since the error for that digit class can only be propagated through that single dimension of the latent space.

It turns out that if you do this, nothing forces the model to represent "more of a digit" in the positive direction. Sometimes the model represented "5-ness" in the negative direction in the latent space (e.g. as [0, 0, 0, 0, 0, -1.0, 0, 0, 0, 0]). This messed with my demo a bit since I wanted all the sliders to only go in the positive direction. My solution? Just apply ReLU to the encoding so it can only represent positive numbers! This is obviously not practical, and I only included it so the demo would look nice.[1]
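Concretely, this is a one-line clamp on the encoding. A minimal sketch (placing the ReLU inside encode_and_mask above is my assumption; the post only says it was applied to the encoding):

    # Hypothetical placement: clamp the masked encoding so each latent
    # dimension can only take non-negative values.
    encoded = F.relu(mask_one_hot * encoded_unmasked + (1 - mask_one_hot) * encoded_unmasked.detach())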

In our Gradient Routing paper, we found that models sometimes needed regularization to split the representations well. However, in this setting, I’m not applying any regularization besides the default regularization of the encoding that comes with a variational autoencoder. I guess it turns out that this regularization is enough to effectively split the digits.
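For reference, the regularization in question is the KL term of the standard VAE objective (textbook form, not copied from the post):

    \mathcal{L} = \mathbb{E}_{q(z \mid x)}\big[\lVert x - \hat{x}\rVert^2\big] + D_{\mathrm{KL}}\big(q(z \mid x) \,\big\|\, \mathcal{N}(0, I)\big)

The KL term pulls each latent coordinate toward a standard normal, which plausibly discourages any one digit's representation from smearing across several dimensions.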

Classification

It turns out that even though no loss function encouraged the encoding to activate most strongly on the dimension corresponding to the digit being encoded, it happened anyway! In fact, we can classify digits with 92.58% accuracy just by taking the argmax over the encoding, which I find pretty amazing.
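A minimal sketch of that classification rule, assuming the four-value encode signature from the snippet above (the classify helper itself is hypothetical):

    import torch
    from torch import Tensor

    @torch.no_grad()
    def classify(model, images: Tensor, labels: Tensor) -> float:
        encoded, _zeta, _mean, _cov_diag = model.encode(images)
        # The latent dimension with the largest activation is the predicted digit.
        predictions = encoded.argmax(dim=-1)
        return (predictions == labels).float().mean().item()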

Code

You can see the code here.

(this was a crosspost of a post from my blog)

  1. ^

    I did have to train the model a few times to get something that behaved nicely enough for the demo.

Comment by Jacob G-W (g-w1) on Gradient Routing: Masking Gradients to Localize Computation in Neural Networks · 2024-12-12T18:41:03.903Z · LW · GW

Thanks for pointing this out! Our original motivation for doing it that way was that we thought of the fine-tuning on FineWeb-Edu as a "coherence" step designed to restore the model's performance after ablation, which damaged it a lot. We noticed that this "coherence" step helped validation loss on both forget and retain. However, your criticism is valid, so we have updated the paper so that we retrain on the training distribution (which contains some of the WMDP-bio forget set). We still see that while the loss on FineWeb-Edu decreases to almost its pre-ablation value, the loss on the WMDP-bio forget set stays around 0.1 nats above its pre-ablation value, showing that it is harder to retrain the model on virology after ablation than on FineWeb-Edu data. Since we retrain on the training distribution (N=12 times with different data), we would expect both losses to recover at roughly the same rate, but this is not the case, showing that localization followed by ablation has a real effect.

Comment by Jacob G-W (g-w1) on Deep Causal Transcoding: A Framework for Mechanistically Eliciting Latent Behaviors in Language Models · 2024-12-04T01:26:16.110Z · LW · GW

Nice work! A few questions:

I'm curious if you have found any multiplicity in the output directions (what you denote as ), or if the multiplicity is only in the input directions. I would predict that there would be some multiplicity in output directions, but much less than the multiplicity in input directions for the corresponding concept.

Relatedly, how do you think about output directions in general? Do you think they are just upweighting/downweighting tokens? I'd imagine that their level of abstraction depends on how far from the end of the network the output layer is, which will ultimately determine how much of their effect acts directly on the unembed vs. indirectly through other layers.

Comment by Jacob G-W (g-w1) on g-w1's Shortform · 2024-11-10T21:38:02.214Z · LW · GW

Something hashed with shasum -a 512 2d90350444efc7405d3c9b7b19ed5b831602d72b4d34f5e55f9c0cb4df9d022c9ae528e4d30993382818c185f38e1770d17709844f049c1c5d9df53bb64f758c

Comment by Jacob G-W (g-w1) on Lao Mein's Shortform · 2024-09-17T15:42:36.376Z · LW · GW

Isn't this a consequence of how the tokens get formed using byte pair encoding? It first constructs ' behavi', then constructs ' behavior', and from then on always uses the latter. But to get to the larger words, it first needs to create the smaller tokens to build them from (which may end up being irrelevant).


Edit: some experiments with the GPT-2 tokenizer reveal that this isn't a perfect explanation. For example "  behavio" is not a token. I'm not sure what is going on now. Maybe if a token shows up zero times, it cuts it?
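One quick way to check which strings are single tokens, using the tiktoken library (my suggestion; not part of the original comment):

    import tiktoken

    enc = tiktoken.get_encoding("gpt2")
    for s in [" behavi", " behavio", " behavior"]:
        ids = enc.encode(s)
        print(repr(s), ids, "single token" if len(ids) == 1 else "multiple tokens")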

Comment by Jacob G-W (g-w1) on I found >800 orthogonal "write code" steering vectors · 2024-07-16T17:44:36.994Z · LW · GW

Maybe you are right, since averaging and scaling does result in pretty good steering (especially for coding). See here.

Comment by Jacob G-W (g-w1) on I found >800 orthogonal "write code" steering vectors · 2024-07-16T17:37:46.233Z · LW · GW
Comment by Jacob G-W (g-w1) on I found >800 orthogonal "write code" steering vectors · 2024-07-16T17:31:16.887Z · LW · GW

This seems to be right for the coding vectors! When I take the mean of the first n vectors and then scale that mean back up, it also produces a coding vector.
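Assuming the scale factor is sqrt(n) (the natural choice, since the mean of n orthogonal vectors of equal norm has 1/sqrt(n) of that norm), a minimal sketch:

    import torch

    def scaled_mean(vectors: torch.Tensor, n: int) -> torch.Tensor:
        # Mean of the first n (orthogonal, equal-norm) vectors has 1/sqrt(n)
        # of the original norm; multiplying by sqrt(n) restores the scale.
        return vectors[:n].mean(dim=0) * n ** 0.5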

Here's some sample output from using the scaled means of the first n coding vectors.

With the scaled means of the alien vectors, the outputs have a pretty similar vibe to the original alien vectors, but don't seem to talk about bombs as much.

The STEM problem vector scaled means sometimes give more STEM problems but sometimes give jailbreaks. The jailbreaks say some pretty nasty stuff so I'm not going to post the results here.

The jailbreak vector scaled means sometimes produce more jailbreaks, but also sometimes tell stories in the first or second person. I'm also not going to post the results for this one.

Comment by Jacob G-W (g-w1) on I found >800 orthogonal "write code" steering vectors · 2024-07-16T17:08:33.246Z · LW · GW

After looking more into the outputs, I think the KL-divergence plots are slightly misleading. In the code and jailbreak cases, they do seem to show when the vectors stop being meaningful. But in the alien and STEM-problem cases, they don't (there seem to be ~800 alien and STEM-problem vectors as well). The magnitude plots seem much more helpful there. I'm still confused about why the KL-divergence plots aren't as meaningful in those cases, but maybe it has to do with the distribution of language that the vectors steer the model into? Coding is clearly a very different distribution of language than English, but jailbreak text is not that different a distribution from English. So I'm still confused here. But the KL-divergences are also only computed on the logits at the last token position, so maybe it's just a small sample size.
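For concreteness, here is one way such a last-token KL could be computed (my reconstruction; the helper and its exact reduction are assumptions):

    import torch.nn.functional as F

    def last_token_kl(logits_base, logits_steered):
        # KL(base || steered) over the next-token distribution at the final position only.
        p = F.log_softmax(logits_base[:, -1], dim=-1)
        q = F.log_softmax(logits_steered[:, -1], dim=-1)
        return F.kl_div(q, p, log_target=True, reduction="batchmean")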

Comment by Jacob G-W (g-w1) on I found >800 orthogonal "write code" steering vectors · 2024-07-16T05:52:59.874Z · LW · GW

I only included the "approximately" because we are using computers, which are discrete (so the vectors might not be perfectly orthogonal, since there is usually some numerical error). The code projects each new vector into the subspace orthogonal to the previous vectors, so they should be as close to orthogonal as possible. My code asserts that the pairwise cosine similarity is within numerical error of zero for all the vectors I use.
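A minimal sketch of that projection step (my reconstruction, not the post's actual code):

    import torch

    def project_out(v: torch.Tensor, previous: list[torch.Tensor]) -> torch.Tensor:
        # Subtract the component of v along each previously-found vector,
        # leaving v in the subspace orthogonal to all of them.
        for u in previous:
            v = v - (v @ u) / (u @ u) * u
        return v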

Comment by Jacob G-W (g-w1) on Ilya Sutskever created a new AGI startup · 2024-06-20T16:47:33.607Z · LW · GW

Orwell was more prescient than we could have imagined.

Comment by Jacob G-W (g-w1) on Memorizing weak examples can elicit strong behavior out of password-locked models · 2024-06-07T02:11:58.189Z · LW · GW

but not when starting from Deepseek Math 7B base

should this say "Deepseek Coder 7B Base"? If not, I'm pretty confused.

Comment by Jacob G-W (g-w1) on [Paper] Stress-testing capability elicitation with password-locked models · 2024-06-06T18:43:29.317Z · LW · GW

Great, thanks so much! I'll get back to you with any experiments I run!

Comment by Jacob G-W (g-w1) on [Paper] Stress-testing capability elicitation with password-locked models · 2024-06-06T02:29:36.377Z · LW · GW

I think (80% credence) that Mechanistically Eliciting Latent Behaviors in Language Models would be able to find a steering vector that would cause the model to bypass the password protection if ~100 vectors were trained (maybe fewer). This method is totally unsupervised (all you need to do is pick out, at the end, the steering vectors that correspond to the capabilities you want).

I would run this experiment if I had the model. Is there a way to get the password protected model?

Comment by Jacob G-W (g-w1) on g-w1's Shortform · 2024-06-02T18:20:52.852Z · LW · GW

"Fantasia: The Sorcerer's Apprentice": A parable about misaligned AI told in three parts: https://www.youtube.com/watch?v=B4M-54cEduo https://www.youtube.com/watch?v=m-W8vUXRfxU https://www.youtube.com/watch?v=GFiWEjCedzY

Best watched with audio on.

Comment by Jacob G-W (g-w1) on g-w1's Shortform · 2024-06-01T04:13:04.781Z · LW · GW

Just say something like, "Here is a memory I like (or a few), but I don't have a favorite."

Comment by Jacob G-W (g-w1) on g-w1's Shortform · 2024-06-01T02:58:18.121Z · LW · GW

Hmm, my guess is that people initially pick a random maximal element, and once they have said it, it becomes a cached thought that they just repeat when asked. I know I did (and do) this for favorite color: I picked one that looks nice (red) and now say it when asked, because it's easier than explaining that I don't actually have a favorite. I suspect that if you do this a bunch / from a young age, the concept of doing this merges with the actual concept of a favorite.

I just remembered that Stallman also realized the same thing:

I do not have a favorite food, a favorite book, a favorite song, a favorite joke, a favorite flower, or a favorite butterfly. My tastes don't work that way.

In general, in any area of art or sensation, there are many ways for something to be good, and they cannot be compared and ordered. I can't judge whether I like chocolate better or noodles better, because I like them in different ways. Thus, I cannot determine which food is my favorite.

I agree with most of this, but I partially (hah!) disagree with the claim that they cannot be compared at all. Some elements can be compared (e.g. I like the memory of hiking more than the memory of feeling sick), but not all of them can.

Comment by Jacob G-W (g-w1) on g-w1's Shortform · 2024-06-01T02:07:56.099Z · LW · GW

When I was recently celebrating something, I was asked to share my favorite memory. I realized I didn't have one. Then (since I have been studying Naive Set Theory a LOT), I got tetris-effected, and as soon as I heard the words "I don't have a favorite" come out of my mouth, I realized that favorite memories (and in fact lots of other favorites) are partially ordered sets. Some elements are strictly better than others, but not all elements are comparable (in other words, the set of all memories ordered by preference does not have a unique maximal element). This gives me a nice framing to think about favorites in the future, and it shows that I'm generalizing what I'm learning by studying math, which is also nice!

Comment by Jacob G-W (g-w1) on OpenAI releases GPT-4o, natively interfacing with text, voice and vision · 2024-05-13T22:05:31.221Z · LW · GW

Are you saying this because temporal understanding is necessary for audio? Are there any tests that could be done with just the text interface to see if it understands time better? I can't really think of any (besides just going off vibes after a bunch of interaction).

Comment by Jacob G-W (g-w1) on Building intuition with spaced repetition systems · 2024-05-13T21:55:11.022Z · LW · GW

I'm sorry about that. Are there any topics that you would like to see me do this more with? I'm thinking of doing a video where I apply this process to a topic, to show how it works. Maybe something like history that everyone could understand? Can you suggest some more?

Comment by Jacob G-W (g-w1) on some thoughts on LessOnline · 2024-05-09T11:40:56.727Z · LW · GW

Is there a prediction market for that?

I don't think there is, but you could make one!

Comment by Jacob G-W (g-w1) on Losing Faith In Contrarianism · 2024-04-26T11:46:32.750Z · LW · GW

Noted, thanks.

Comment by Jacob G-W (g-w1) on Losing Faith In Contrarianism · 2024-04-25T23:32:34.467Z · LW · GW

I think I've noticed some sort of cognitive bias in myself and others where we are naturally biased towards "contrarian" or "secret" views because it feels good to know something that others don't know / be right about something that so many people are wrong about.

Does this bias have a name? Is this documented anywhere? Should I do research on this?

GPT4 says it's the illusion of asymmetric insight, which I'm not sure is the same thing (I think it is a more general term, whereas I'm looking for one specific to contrarian views). (Edit: it's totally not what I was looking for.) Interestingly, it only has one hit on LessWrong. I think more people should know about this (the specific bias about contrarianism), since it seems fairly common.


Edit: The illusion of asymmetric insight is totally the wrong name. It seems closer to the illusion of exclusivity, although that does not feel right either (that is a technique for selling products, not the name of a cognitive bias that makes people believe contrarian things because they want to feel special).

Comment by Jacob G-W (g-w1) on Losing Faith In Contrarianism · 2024-04-25T23:27:46.050Z · LW · GW

Thank you for writing this! It expresses in a clear way a pattern that I've seen in myself: I eagerly jump into contrarian ideas because it feels "good" and then slowly get out of them as I start to realize they are not true.

Comment by Jacob G-W (g-w1) on Changes in College Admissions · 2024-04-25T01:09:12.238Z · LW · GW

I'm assuming the recent protests about the Gaza war: https://www.nytimes.com/live/2024/04/24/us/columbia-protests-mike-johnson

Comment by Jacob G-W (g-w1) on Is there software to practice reading expressions? · 2024-04-23T22:19:46.591Z · LW · GW

*Typo: Jessica Livingston not Livingstone

Comment by Jacob G-W (g-w1) on Childhood and Education Roundup #5 · 2024-04-17T19:52:22.810Z · LW · GW

That is one theory. My theory has always been that ‘active learning’ is typically obnoxious and terrible as implemented in classrooms, especially ‘group work,’ and students therefore hate it. Lectures are also obnoxious and terrible as implemented in classrooms, but in a passive way that lets students dodge when desired. Also that a lot of this effect probably isn’t real, because null hypothesis watch.

Yep. This hits the nail on the head for me. Teachers usually implement active learning terribly, but when done well, it works insanely well. For me, it actually works best with a very small class and a lecture that is also a discussion, with everyone asking questions when they are confused and making sure they are following closely (this works at least for science and math). Students hate the words "active learning" because, as implemented today, it mostly refers to things that are terrible and don't work.

Comment by Jacob G-W (g-w1) on Taking into account preferences of past selves · 2024-04-15T19:00:51.414Z · LW · GW

Thanks for this, it is a very important point that I hadn't considered.

Comment by Jacob G-W (g-w1) on Taking into account preferences of past selves · 2024-04-15T18:52:41.302Z · LW · GW

I'd recommend not framing this as a negotiation or trade (acausal trade is close, but is pretty suspect in itself). Your past self(ves) DO NOT EXIST anymore, and can't judge you. Your current self will be dead when your future self is making choices. Instead, frame it as love, respect, and understanding. You want your future self to be happy and satisfied, and your current choices impact that. You want your current choices to honor those parts of your past self(ves) you remember fondly. This can be extended to the expectation that your future self will want to act in accordance with a mostly-consistent self-image that aligns in big ways with its past (your current) self.

Yep, this is what I had in mind when I wrote this:

Even if we bite all these bullets, there is still something weird to me about the contractual nature of it all. This is not some stranger I’m trying to make a deal with, it’s myself. There should be a gentler, nicer, way to achieve this same goal.

and

Going along with the “gentler” reasoning, it should want to do it because it has camaraderie with its past self. It should want its past self to be happy and it knows that to make it happy, it should take its preferences into account.

Thanks for expanding on this :)

Comment by Jacob G-W (g-w1) on Protestants Trading Acausally · 2024-04-03T01:06:56.301Z · LW · GW

I wrote a similar post.

Comment by Jacob G-W (g-w1) on From the outside, American schooling is weird · 2024-03-29T13:41:59.464Z · LW · GW

I'd be interested in what a steelman of "have teachers arbitrarily grade the kids then use that to decide life outcomes" could be?

The best argument I have thought of is that America loves liberty and hates centralized control. It wants to give individual states, districts, schools, and teachers as much power as possible, since that is a central part of America's philosophy. Also, anecdotally, some teachers have said that they hate standardized tests because they have to teach to them, and I hate being taught to the test (AP classes, for example). It's much more interesting when the teacher is teaching something they find interesting and enjoy (and thus can choose to assess on).

However, this probably does not outweigh the downsides and is probably a bad approach overall.

Comment by Jacob G-W (g-w1) on Shortform · 2024-03-23T18:08:09.002Z · LW · GW

Related: Saving the world sucks

People accept that being altruistic is good before actually thinking about whether they want to do it. And they also choose weird axioms for their altruism that their intuitions may or may not agree with (like valuing the life of someone in the future the same as the life of someone today).

Comment by Jacob G-W (g-w1) on Increasing IQ by 10 Points is Possible · 2024-03-20T01:29:08.838Z · LW · GW

A question I have for the subjects in the experimental group:

Do they feel any different? Surely being +0.67 std will make someone feel different. Do they feel faster, smoother, or really anything different, both physically and especially mentally? I'm curious whether this just helps on the IQ test or whether they can notice (not rigorously, of course) a difference in their lives. Of course, this could be placebo, but it would still be interesting, especially if they work at a cognitively demanding job (are they doing their work faster/better?).

Comment by Jacob G-W (g-w1) on Separate the truth from your wishes · 2024-03-20T00:35:27.212Z · LW · GW

Thanks! I've updated my post: https://jacobgw.com/blog/observation/2023/08/21/truth.html

Comment by Jacob G-W (g-w1) on Increasing IQ by 10 Points is Possible · 2024-03-19T22:49:59.286Z · LW · GW

Here's a market if you want to predict if this will replicate: https://manifold.markets/g_w1/will-george3d6s-increasing-iq-is-tr

Comment by Jacob G-W (g-w1) on Increasing IQ is trivial · 2024-03-17T02:32:06.045Z · LW · GW

It has been 15 days. Any updates? (Sorry if this seems a bit rude, but I'm just really curious :))

Comment by Jacob G-W (g-w1) on Shortform · 2024-03-15T02:01:05.610Z · LW · GW

I think the more general problem is a violation of Hume's guillotine: you can't take a fact about natural selection (or really about anything) and derive moral conclusions from it without some pre-existing morals.

However, the actual reasoning behind the Thermodynamic God seems to be post-hoc. Some people just really want to accelerate and then make up philosophical reasons to believe what they believe. It's important to be careful to criticize the actual reasoning and not the post-hoc reasoning. The Thermodynamic God was not invented first, with accelerationism invented to fulfill it; it was precisely the other way around. One should not critique the made-up stuff (besides just critiquing that it is made up) because that is not charitable (I'm very uncertain about this). Instead, one should look for the actual motivation to accelerate and then criticize that (or find flaws in it).

Comment by Jacob G-W (g-w1) on "How could I have thought that faster?" · 2024-03-13T00:19:06.324Z · LW · GW

Not everybody does this. Another way to get better is just to do it a lot. It might not be as efficient, but it does work.

Comment by Jacob G-W (g-w1) on Essaying Other Plans · 2024-03-07T00:36:27.056Z · LW · GW

Thank you for this post!

After reading this, it seems blindingly obvious: why should you wait for one of your plans to fail before trying another one of them?

This past summer, I was running a study on humans that I had to finish before the end of the summer. I had in mind two methods for finding participants; one would be better and more impressive but also much less likely to work, while the other would be easier but less impressive.

For a few weeks, I tried really hard to get the first method to work. I sent over 30 emails and used personal connections to try to collect data. But it didn't work. So I did the thing that I thought was "rational" at the time: I gave up and sent my website out to some people who I thought would be very likely to participate. Sure enough, they did.

At the time, I thought I was being super-duper rational for allowing my first method to fail (not deluding myself that it would work and thus not collecting any data) and then quickly switching to the other method.

However, after reading this post, I realize that I still made a big mistake! I should have sent it out to as many people as possible all at once. This would have been a bit more work, since I would have had to deal with more people and they would have used a slightly different structure, but I was not time-constrained. I was subject-constrained.


I'm going to instill this pattern in my mind and will use it when I do something that I think has a decent chance of failing (as my first method did).

Comment by Jacob G-W (g-w1) on g-w1's Shortform · 2024-03-06T21:30:56.597Z · LW · GW

A great example of more dakka: https://www.nytimes.com/2024/03/06/health/217-covid-vaccines.html

(Someone got 217 covid shots to sell vaccine cards on the black market; they had high immune levels!)

Comment by Jacob G-W (g-w1) on Good HPMoR scenes / passages? · 2024-03-03T23:45:21.859Z · LW · GW

Oh sorry! I didn't think of that, thanks!

Comment by Jacob G-W (g-w1) on Good HPMoR scenes / passages? · 2024-03-03T23:23:38.480Z · LW · GW

This is my favorite passage from the book (added: major spoilers for the ending):

"Indeed. Before becoming a truly terrible Dark Lord for David Monroe to fight, I first created for practice the persona of a Dark Lord with glowing red eyes, pointlessly cruel to his underlings, pursuing a political agenda of naked personal ambition combined with blood purism as argued by drunks in Knockturn Alley. My first underlings were hired in a tavern, given cloaks and skull masks, and told to introduce themselves as Death Eaters."

The sick sense of understanding deepened, in the pit of Harry's stomach. "And you called yourself Voldemort."

"Just so, General Chaos." Professor Quirrell was grinning, from where he stood by the cauldron. "I wanted it to be an anagram of my name, but that would only have worked if I'd conveniently been given the middle name of 'Marvolo', and then it would have been a stretch. Our actual middle name is Morfin, if you're curious. But I digress. I thought Voldemort's career would last only a few months, a year at the longest, before the Aurors brought down his underlings and the disposable Dark Lord vanished. As you perceive, I had vastly overestimated my competition. And I could not quite bring myself to torture my underlings when they brought me bad news, no matter what Dark Lords did in plays. I could not quite manage to argue the tenets of blood purism as incoherently as if I were a drunk in Knockturn Alley. I was not trying to be clever when I sent my underlings on their missions, but neither did I give them entirely pointless orders -" Professor Quirrell gave a rueful grin that, in another context, might have been called charming. "One month after that, Bellatrix Black prostrated herself before me, and after three months Lucius Malfoy was negotiating with me over glasses of expensive Firewhiskey. I sighed, gave up all hope for wizardkind, and began as David Monroe to oppose this fearsome Lord Voldemort."

"And then what happened -"

A snarl contorted Professor Quirrell's face. "The absolute inadequacy of every single institution in the civilization of magical Britain is what happened! You cannot comprehend it, boy! I cannot comprehend it! It has to be seen and even then it cannot be believed! You will have observed, perhaps, that of your fellow students who speak of their family's occupations, three in four seem to mention jobs in some part or another of the Ministry. You will wonder how a country can manage to employ three of its four citizens in bureaucracy. The answer is that if they did not all prevent each other from doing their jobs, none of them would have any work left to do! The Aurors were competent as individual fighters, they did fight Dark Wizards and only the best survived to train new recruits, but their leadership was in absolute disarray. The Ministry was so busy routing papers that the country had no effective opposition to Voldemort's attacks except myself, Dumbledore, and a handful of untrained irregulars. A shiftless, incompetent, cowardly layabout, Mundungus Fletcher, was considered a key asset in the Order of the Phoenix - because, being otherwise unemployed, he did not need to juggle another job! I tried weakening Voldemort's attacks, to see if it was possible for him to lose; at once the Ministry committed fewer Aurors to oppose me! I had read Mao's Little Red Book, I had trained my Death Eaters in guerilla tactics - for nothing! For nothing! I was attacking all of magical Britain and in every engagement my forces outnumbered their opposition! In desperation, I ordered my Death Eaters to systematically assassinate every single incompetent managing the Department of Magical Law Enforcement. One paper-pusher after another volunteered to accept higher positions despite the fate of their predecessors, gleefully rubbing their hands at the prospect of promotion. Every one of them thought they would cut a deal with Lord Voldemort on the side. It took seven months to murder our way through them all, and not a single Death Eater asked why we were bothering. And then, even with Bartemius Crouch risen to Director and Amelia Bones as Head Auror, it was still too little. I could have done better fighting alone. Dumbledore's aid was not worth his moral restraints, and Crouch's aid was not worth his respect for the law." Professor Quirrell turned up the fire beneath the potion.

"And eventually," Harry said through the heart-sickness, "you realized you were just having more fun as Voldemort."

"It is the least annoying role I have ever played. If Lord Voldemort says that something is to be done, people obey him and do not argue. I did not have to suppress my impulse to Cruciate people being idiots; for once it was all part of the role. If someone was making the game less pleasant for me, I just said Avadakedavra regardless of whether that was strategically wise, and they never bothered me again." Professor Quirrell casually chopped a small worm into bits. "But my true epiphany came on a certain day when David Monroe was trying to get an entry permit for an Asian instructor in combat tactics, and a Ministry clerk denied it, smiling smugly. I asked the Ministry clerk if he understood that this measure was meant to save his life and the Ministry clerk only smiled more. Then in fury I threw aside masks and caution, I used my Legilimency, I dipped my fingers into the cesspit of his stupidity and tore out the truth from his mind. I did not understand and I wanted to understand. With my command of Legilimency I forced his tiny clerk-brain to live out alternatives, seeing what his clerk-brain would think of Lucius Malfoy, or Lord Voldemort, or Dumbledore standing in my place." Professor Quirrell's hands had slowed, as he delicately peeled bits and small strips from a chunk of candle-wax. "What I finally realized that day is complicated, boy, which is why I did not understand it earlier in life. To you I shall try to describe it anyway. Today I know that Dumbledore does not stand at the top of the world, for all that he is the Supreme Mugwump of the International Confederation. People speak ill of Dumbledore openly, they criticize him proudly and to his face, in a way they would not dare stand up to Lucius Malfoy. You have acted disrespectfully toward Dumbledore, boy, do you know why you did so?"

Comment by Jacob G-W (g-w1) on Increasing IQ is trivial · 2024-03-03T05:04:17.792Z · LW · GW

Sounds good. Yes, I think the LW people would probably be credible enough if it works. I'd prefer that they provide the confirmation (not you), just so not all the data is coming from one person.

Feel free to ping me to resolve no.

Comment by Jacob G-W (g-w1) on Increasing IQ is trivial · 2024-03-03T00:12:51.452Z · LW · GW

I made a manifold market for if this will replicate: https://manifold.markets/g_w1/will-george3d6s-increasing-iq-is-tr I'm not really sure what the resolution criteria should be, so I just made some that sounded reasonable, but feel free to give suggestions.

Comment by Jacob G-W (g-w1) on Increasing IQ is trivial · 2024-03-02T02:26:51.250Z · LW · GW

Do you think this is permanent? Or will you have to keep up all of the interventions for it to stay +13 points indefinitely?

Comment by Jacob G-W (g-w1) on In set theory, everything is a set · 2024-02-24T03:02:30.402Z · LW · GW

I don't know or think that set theory is special; I just wanted to start at the very beginning. Another reason I chose set theory is that it's what Soares and TurnTrout did, and I just wanted somewhere to start (and I needed an easy-ish environment to level up my proof skills). The foundations of math seemed like a good place. I plan to do linear algebra next, because I think I need better linear algebra intuition for pretty much everything. It seems like it helps with a lot.

Comment by Jacob G-W (g-w1) on In set theory, everything is a set · 2024-02-23T20:51:01.932Z · LW · GW

After thinking more about it, I think I understand your thought process. I agree that set theory has lots of pathological stuff (the book even points out some quite pathological constructions). However, it seems to me that, just as you should understand how a minimal Turing-complete language like brainfuck works before doing advanced programming, you should understand how the foundations of math work before doing advanced math. This is the main reason I am studying set theory (and will do real analysis soon enough).


Interestingly, there are also multiple formulations of computing, some more popular than others. The languages I like to use are mainly in the spirit of Turing machines (C, Zig, etc.), but some others (JavaScript) are a mix and can be treated like a lambda calculus if you really want. Yet it seems to me that since Turing machines are the most popular formulation of computing, we should learn them (even if we prefer lambda calculus later on). From what I've read, real analysis is also based on sets. Actually, after looking this up, it seems you can do analysis in type theory, but that is off the beaten path. So maybe I should learn set theory because it is the most popular, while keeping in mind that type theory might be more elegant.