Hoagy's Shortform

post by Hoagy · 2020-09-21T22:00:43.682Z · LW · GW · 12 comments


Comments sorted by top scores.

comment by Hoagy · 2023-10-04T11:12:48.472Z · LW(p) · GW(p)

There's an argument I've been thinking about on which I'd really like some feedback, or pointers to relevant literature:

The tl;dr is that overcomplete bases necessitate linear representations.

  • Neural networks use overcomplete bases to represent concepts. Especially in vector spaces without a non-linearity, such as the transformer's residual stream, there are simply many more things stored in there than there are dimensions, and as the Johnson-Lindenstrauss lemma shows, there are exponentially many almost-orthogonal directions to store them in (of course, we can't assume they're stored linearly as directions, but if they were, there's lots of space; see the sketch after this list). (See also Toy models of transformers and my sparse autoencoder posts.)
  • Many different concepts may be active at once, and the model's ability to read a representation needs to be robust to this kind of interference.
  • Highly non-linear information storage is going to be very fragile to interference because, by the definition of non-linearity, the model will respond differently to an input depending on the existing level of that feature. For example, if the response is quadratic or higher in the feature direction, then the impact of turning that feature on will be very different depending on whether certain not-quite-orthogonal features are also on. If feature spaces are somehow curved, they will be similarly sensitive.
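
As a quick sanity check on the "exponentially many almost-orthogonal directions" point, here's a toy numpy sketch (my own construction; the dimensions are arbitrary):

```python
# Random unit vectors in a high-dimensional space are nearly orthogonal,
# so far more "features" than dimensions can coexist with low interference.
import numpy as np

rng = np.random.default_rng(0)
dim, n_features = 512, 10_000          # far more features than dimensions

features = rng.standard_normal((n_features, dim))
features /= np.linalg.norm(features, axis=1, keepdims=True)

# Off-diagonal dot products of a subsample measure pairwise interference.
sample = features[:500]
overlaps = sample @ sample.T
off_diag = overlaps[~np.eye(len(sample), dtype=bool)]
print(f"max |cos| between distinct features: {np.abs(off_diag).max():.3f}")
# Typically around 0.2 at dim=512 for this subsample.
```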

Of course, linear representations will still be sensitive to this kind of interference, but I suspect there's a mathematical proof that linear features are the most robust way to represent information in this situation. I'm just not sure where to look for existing work, or how to start trying to prove it.
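
To make the quadratic example above concrete, here's another toy numpy sketch (again my own construction, not from any of the linked posts): turning the feature on shifts a linear readout by exactly the same amount regardless of the background, while the shift in a quadratic readout depends on whatever interference is already present.

```python
# Toy comparison: how consistently can a linear vs. quadratic readout
# detect that feature f was switched on, when ~200 other almost-
# orthogonal features are active at random levels in the background?
import numpy as np

rng = np.random.default_rng(0)
dim, n_other = 512, 200

f = rng.standard_normal(dim)
f /= np.linalg.norm(f)
others = rng.standard_normal((n_other, dim))
others /= np.linalg.norm(others, axis=1, keepdims=True)

def shift_std(read, n_trials=1000):
    """Std of the change in `read` caused by turning f on, over backgrounds."""
    shifts = []
    for _ in range(n_trials):
        background = rng.uniform(0, 1, n_other) @ others
        shifts.append(read(background + f) - read(background))
    return np.std(shifts)

print("linear readout:   ", shift_std(lambda x: f @ x))         # ~0: constant effect
print("quadratic readout:", shift_std(lambda x: (f @ x) ** 2))  # clearly nonzero
```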

comment by Hoagy · 2021-06-23T09:39:58.197Z · LW(p) · GW(p)

I've been looking at papers involving a lot of 'controlling for confounders' recently and am unsure about how much weight to give their results.

Does anyone have recommendations about how to judge the robustness of these kinds of studies?

Also, I was considering doing some tests of my own based on random causal graphs: testing what happens to regressions when you control for only a limited subset of confounders, varying the size/depth of the graph, and so on (a sketch of what I mean is below). I can't seem to find any similar papers, but I don't know the area. Does anyone know of similar work?
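
In case it helps make the question concrete, here's a minimal sketch of the kind of simulation I have in mind (all structure and numbers invented: one linear graph, ten confounders driving both X and Y):

```python
# Regress Y on X while controlling for only the first k of 10 confounders,
# and watch the bias in the estimated causal effect shrink as k grows.
import numpy as np

rng = np.random.default_rng(0)
n, n_conf, true_effect = 100_000, 10, 1.0

Z = rng.standard_normal((n, n_conf))                          # confounders
X = Z.sum(axis=1) + rng.standard_normal(n)                    # Z -> X
Y = true_effect * X + Z.sum(axis=1) + rng.standard_normal(n)  # Z -> Y, X -> Y

def estimated_effect(k):
    """OLS coefficient on X, controlling for the first k confounders."""
    design = np.column_stack([X, Z[:, :k], np.ones(n)])
    coefs, *_ = np.linalg.lstsq(design, Y, rcond=None)
    return coefs[0]

for k in [0, 5, 10]:
    print(f"controlling for {k:2d}/{n_conf} confounders: "
          f"estimate = {estimated_effect(k):.3f} (true = {true_effect})")
```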

Replies from: ChristianKl, yagudin
comment by ChristianKl · 2021-06-23T19:31:48.774Z · LW(p) · GW(p)

Robust statistics is a field. Wikipedia links to http://lagrange.math.siu.edu/Olive/ol-bookp.htm, which has chapters like Chapter 7 (Robust Regression) and Chapter 8 (Robust Regression Algorithms).

Replies from: Hoagy
comment by Hoagy · 2021-06-24T12:34:34.962Z · LW(p) · GW(p)

Thanks, I'll give it a read.

comment by yagudin · 2021-06-23T14:43:42.317Z · LW(p) · GW(p)

Maybe reading Gelman's self-contained comments on SSC's More Confounders would make you more confused in a good way.

Replies from: Hoagy
comment by Hoagy · 2021-06-24T12:36:18.247Z · LW(p) · GW(p)

Cheers, glad I'm not dealing with 300 variables. Luckily, I don't think the situation is quite as dire as it is for sleeping pills.

comment by Hoagy · 2022-04-25T08:36:18.156Z · LW(p) · GW(p)

Question:

Does anyone know of papers on creating human-interpretable latent spaces with auto-encoders?

An example of the kind of system I have in mind would be an NN generating face images from a latent space, designed such that dimension 0 encodes skin tone, dimension 1 encodes hair colour, etc.

I'll be doing my own literature search, but if anyone knows the area, some pointers to papers or search terms would be very helpful!

Replies from: Frederik
comment by Tom Lieberum (Frederik) · 2022-04-25T09:38:51.082Z · LW(p) · GW(p)

There is definitely something out there; I just can't recall the name. A keyword you might want to look for is "disentangled representations".

One place to start would be the beta-VAE paper: https://openreview.net/forum?id=Sy2fzU9gl
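
The core idea is just the ordinary VAE objective with the KL term up-weighted to encourage independent (and hopefully interpretable) latent dimensions. A minimal sketch of the loss, in PyTorch (my own paraphrase, not code from the paper):

```python
import torch
import torch.nn.functional as F

def beta_vae_loss(x, x_recon, mu, log_var, beta=4.0):
    """beta-VAE objective; x_recon, mu, log_var come from an
    encoder/decoder pair (not shown)."""
    recon = F.mse_loss(x_recon, x, reduction="sum")
    # Closed-form KL divergence between N(mu, sigma^2) and the N(0, I) prior.
    kl = -0.5 * torch.sum(1 + log_var - mu.pow(2) - log_var.exp())
    return recon + beta * kl  # beta = 1 recovers the ordinary VAE
```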

Replies from: Hoagy
comment by Hoagy · 2022-04-25T10:26:04.478Z · LW(p) · GW(p)

Cheers!

comment by Hoagy · 2022-07-06T11:02:52.679Z · LW(p) · GW(p)

Suggestion:

Eliezer has huge respect in the community; he has strong, well-thought-out opinions (often negative) on a lot of the safety research being done (with exceptions; Chris Olah's work is mentioned favourably a few times); but he's not able to work full-time on research directly (or so I understand; I could be way off).

Perhaps he should institute some kind of prize for work done, trying to give extra prestige and funding to work going in his preferred direction? Does this exist in some form without my noticing? Is there a reason it'd be bad? Time/energy usage for Eliezer combined with difficulty of delegation?

comment by Hoagy · 2020-09-21T22:00:44.368Z · LW(p) · GW(p)

A question about error-correcting codes that's probably answered in the literature, but I don't seem to be able to find the right search terms:

How can we apply error-correcting codes to logical *algorithms*, as well as bit streams?

If we want to check that a bit-stream is accurate, we know how to do this for a manageable overhead. But what happens if there's an error in the hardware that does the checking? It's not easy for me to construct a system that has no single point of failure: you can run the correction algorithm multiple times, but how do you compare the results without ending up back with a single point of failure?
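
For the bit-stream half, the overhead really is manageable: e.g. a Hamming(7,4) code corrects any single flipped bit at the cost of 3 parity bits per 4 data bits. A quick sketch, assuming at most one error per block:

```python
# Hamming(7,4): the parity-check matrix's i-th column is the binary
# expansion of i, so the syndrome directly names the flipped position.
import numpy as np

H = np.array([[(i >> k) & 1 for i in range(1, 8)] for k in range(3)])

def correct(received):
    syndrome = H @ received % 2
    pos = int(syndrome @ [1, 2, 4])     # 0 means no error detected
    if pos:
        received = received.copy()
        received[pos - 1] ^= 1          # flip the bad bit back
    return received

codeword = np.array([0, 1, 1, 0, 0, 1, 1])   # valid: H @ codeword % 2 == 0
corrupted = codeword.copy()
corrupted[4] ^= 1
assert np.array_equal(correct(corrupted), codeword)
```

But nothing in that sketch protects the correction step itself, which is the point of the question.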

Anyone know any relevant papers or got a cool solution?

Interested for the sake of the stability of computronium-based futures!

Replies from: Marc Randolph
comment by Marc Randolph · 2020-09-21T23:58:44.443Z · LW(p) · GW(p)

At the risk of pointing out the obvious, the "typical" method that has been used in the past by the military and in space applications is hardware redundancy (often 3x).
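
A toy software sketch of the 3x scheme (real systems do the vote in dedicated hardware; note that the voter itself then becomes the single point of failure the parent comment asks about, which is part of why voters are kept very simple, and are sometimes replicated too):

```python
# Triple modular redundancy (TMR): run the computation on three
# (notionally independent) units and majority-vote the outputs.
from collections import Counter

def tmr_vote(outputs):
    """Majority vote over three unit outputs; the voter is the weak spot."""
    winner, count = Counter(outputs).most_common(1)[0]
    if count < 2:
        raise RuntimeError("no majority: all three units disagree")
    return winner

# One unit suffers a transient fault; the vote still recovers the answer.
print(tmr_vote([42, 42, 41]))  # -> 42
```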