Posts

No Abstraction Without a Goal 2022-01-10T21:28:58.572Z
Regularization Causes Modularity Causes Generalization 2022-01-01T23:34:37.316Z

Comments

Comment by dkirmani on You Can Get Fluvoxamine · 2022-01-19T20:39:03.555Z · LW · GW

Works like a charm for Adderall.

Comment by dkirmani on [deleted post] 2022-01-17T15:36:10.391Z

Very cool! However, it looks like Debate rests upon the assumption that truth is always more convincing than falsehood to a human judge. Multipolar schemes need not have humans in the loop at all.

From this comment:

What intrinsically goes wrong, I'd say, is that the human operators have an ability to recognize good arguments that's only rated to withstand up to a certain intensity of search, which will break down beyond that point. Our brains' ability to distinguish good arguments from bad arguments is something we'd expect to be balanced to the kind of argumentative pressure a human brain was presented with in the ancestral environment / environment of evolutionary adaptedness, and if you optimize against a brain much harder than this, you'd expect it to break.

Comment by dkirmani on [deleted post] 2022-01-17T14:52:27.270Z

Yeah, true. I'm asking about the potential of harnessing multipolar traps in general, though.

Comment by dkirmani on [deleted post] 2022-01-17T14:36:20.092Z

See the footnote. These rules aren't enforced by the programming of the environment; they're announced in natural language. These rules are enforced on agents, by agents, like how humans enforce norms upon each other.

Comment by dkirmani on Personal blogging as self-imposed oppression · 2022-01-15T18:11:52.036Z · LW · GW

  1. It will enforce your current beliefs and identity

I don't like value drift; I want my future self to be aligned with me. Making public statements about what you value/intend to do can be very useful for constraining future you's behavior.

Comment by dkirmani on Future ML Systems Will Be Qualitatively Different · 2022-01-12T05:57:55.771Z · LW · GW

What's the relationship between grokking and Deep Double Descent?

Comment by dkirmani on No Abstraction Without a Goal · 2022-01-12T03:55:31.659Z · LW · GW

There are lots of goals that are helped by having the abstraction "tree", like "run to the nearest tree and climb it in order to escape the charging rhino". My point was that the set of goals that are helped by having the abstraction "tree" is smaller than the set of all possible goals, so if we know that the abstraction "tree" is useful to you, we have more information about your goals.

Comment by dkirmani on No Abstraction Without a Goal · 2022-01-12T03:49:20.672Z · LW · GW

For instance, it's easy to construct a lossy map that takes high-dimensional data to low-dimensional data; whether or not it's useful seems like a different issue.

Yep. Most such maps are useless (to you) because the goals you have occupy a small fraction of the possible goals in goal-space.

You might also reply to this, "no, condensation of information without goal-relevance is just condensation of information, but it is not an abstraction" but then the claim that an abstraction only exists with goal-relevance seems tautological.

Nope, all condensation of information is abstraction. Different abstractions imply different regions of goal-space are more likely to contain your goals.

Comment by dkirmani on No Abstraction Without a Goal · 2022-01-12T03:45:35.472Z · LW · GW

I am less sure that those variants that do not have natural occurrences have an associated goal.

The abstractions that do not occur naturally do not prioritize fitness-relevant information. You could conceive of goals that they serve, but these goals are not obviously subgoals of fitness-maximization.

Comment by dkirmani on No Abstraction Without a Goal · 2022-01-12T03:22:59.003Z · LW · GW

Yes, I spoke too strongly. In the weighted causal graph of subgoals, I would bet that "provide a list of wildlife" would be less relevant to the goal "win the war" than "report #bombers".

Comment by dkirmani on No Abstraction Without a Goal · 2022-01-11T10:37:02.466Z · LW · GW

This implies that you care about things like "who owns the land", "are the buildings intact", et cetera. The information you care about leaks information about your values.

Comment by dkirmani on No Abstraction Without a Goal · 2022-01-11T05:38:50.706Z · LW · GW

"Provide a list of wildlife" has subgoal "communicate the concept in my brain to yours" has subgoal "use specific sounds to represent animals". "Provide a list of wildlife" is not a subgoal of "win the war".

Comment by dkirmani on No Abstraction Without a Goal · 2022-01-10T23:37:26.133Z · LW · GW

The set of things not influenced by any optimisation process is pretty small - so we'd probably have to be clearer in what counts as "non-optimized". (I'm also not sure I'd want to say that selection processes need to have a 'goal' exactly.)

Both good points. "Goal" isn't the best word for what selection processes move towards.

It strikes me that the argument you're making might not say much about abstraction specifically - unless I'm missing something essential, it'd apply to any a-priori-unlikely configuration of information.

Besides just being an unlikely configuration of information, abstractions destroy sensory information that did not previously have much of a bearing on actions that increased fitness (or is "selection stability" a better term?).

Comment by dkirmani on No Abstraction Without a Goal · 2022-01-10T23:13:59.335Z · LW · GW

If the military scout returns with a poem about nature, then yes, that's still an abstraction. The scout's abstraction prioritizes information that is useless to the general's goals, so we can guess that the scout's goals are not well aligned with the general's.

You seem to be defining a goal in terms of what the abstraction retains.

I'm not sure if it's possible to fully specify goals given abstractions. But for a system subject to some kind of optimization pressure, knowing an abstraction that the system uses is evidence that shifts probability mass within goal-space.

Comment by dkirmani on No Abstraction Without a Goal · 2022-01-10T22:38:00.533Z · LW · GW

That's true. But do abstractions ever show up in non-optimized things? I can't think of a single example.

Comment by dkirmani on No Abstraction Without a Goal · 2022-01-10T21:52:19.852Z · LW · GW

Yes, abstraction is compression, but real-world abstractions (like trees, birds, etc.) are very lossy forms of compression. When performing lossy compression, you need to ask yourself what information you value.

When compressing images, for example, humans usually don't care about the values of the least-significant bits, so you can round all 8-bit RGB intensity values down to the nearest even number and save yourself 3 bits per pixel in exchange for a negligible degradation in subjective image quality. Humans not caring about the least-significant bit is useful information about your goal, which is to compress an image for someone to look at.
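
To make the arithmetic concrete, here's a minimal numpy sketch (the pixel values are made up):

```python
import numpy as np

# Toy 2x2 "image" standing in for real data; values are hypothetical.
img = np.array([[[200,  13,  77], [ 41, 254,   9]],
                [[128, 129, 130], [  3,   4,   5]]], dtype=np.uint8)

# Rounding every intensity down to the nearest even number discards the
# least-significant bit of each channel: 1 bit per channel, 3 bits per RGB pixel.
compressed = img & 0b11111110

# The maximum per-channel error is 1 out of 255 -- far below what a viewer
# notices, so the discarded information was irrelevant to the goal of
# producing an image that looks right to a person.
print(np.abs(img.astype(int) - compressed.astype(int)).max())  # -> 1
```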

Comment by dkirmani on How an alien theory of mind might be unlearnable · 2022-01-04T16:22:40.533Z · LW · GW

I think that ants like sugar. However, if I spill some sugar on the countertop, I'm not going to be shocked when every ant in a ten-mile radius doesn't immediately start walking towards the sugar. It's reasonable to expect a model of an agent's behavior to include a model of that agent's model of its environment.

Comment by dkirmani on No Really, Why Aren't Rationalists Winning? · 2022-01-04T10:58:39.730Z · LW · GW

Update: I messaged Dr. Pechenick on LinkedIn, and I regret to report that he is not in fact Psychohistorian on LessWrong, but Psychohistorian on Twitter. Still, hell of a coincidence.

Comment by dkirmani on DL towards the unaligned Recursive Self-Optimization attractor · 2022-01-04T03:55:51.244Z · LW · GW

Yes! Anecdotal confirmation of my previously-held beliefs!

Comment by dkirmani on Self-Organised Neural Networks: A simple, natural and efficient way to intelligence · 2022-01-04T03:51:36.360Z · LW · GW

I don't think they do either. I was thinking they would provide alignment advice / troubleshooting services and would facilitate coordination in the event of a multipolar slow-takeoff scenario.

Comment by dkirmani on Regularization Causes Modularity Causes Generalization · 2022-01-03T21:55:14.408Z · LW · GW

Good question! I'll go look at those two papers.

My intuition says that dropout is more useful when working with supervised learning on a not-massive dataset for a not-massive model, although I'm not yet sure why this is. I suspect this conceptual hole is somehow related to Deep Double Descent, which I don't yet understand on an intuitive level (Edit: looks like nobody does). I also suspect that GPT-3 is pretty modular even without using any of those tricks I listed.

Comment by dkirmani on Regularization Causes Modularity Causes Generalization · 2022-01-03T19:01:33.439Z · LW · GW

Thanks :)

One thought I have is that L1/L2 and pruning seem similar to one another on the surface, but very different to dropout, and all of those seem very different to goal-varying.

Agreed. Didn't really get into pruning much because some papers only do weight pruning after training, which isn't really the same thing as pruning during training, and I don't want to conflate the two.

Could it be the case that dropout is actually just penalizing connections? (e.g. as the effect of a non-firing neuron is propagated to fewer downstream neurons)

Could very well be, I called this post 'exploratory' for a reason. However, you could make the case that dropout has the opposite effect based on the same reasoning. If upstream dropout penalizes downstream performance, why don't downstream neurons form more connections to upstream neurons in order to hedge against dropout of a particular critical neuron?
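
For reference, here's a minimal numpy sketch of the (inverted) dropout mechanism we're both reasoning about; the layer size and activations are made up:

```python
import numpy as np

rng = np.random.default_rng(0)

def dropout(activations, p=0.5, train=True):
    """Inverted dropout: zero each upstream unit with probability p,
    rescale the survivors so the expected downstream input is unchanged."""
    if not train or p == 0.0:
        return activations
    mask = rng.random(activations.shape) >= p
    return activations * mask / (1.0 - p)

# Toy upstream layer where one unit carries most of the signal.
upstream = np.array([5.0, 0.1, 0.1, 0.1])

# Downstream weights that lean entirely on the "critical" first unit see
# their input vanish on the ~50% of passes where it is dropped -- that is
# the pressure toward redundant, distributed connections.
samples = np.stack([dropout(upstream) for _ in range(10_000)])
print(samples.mean(axis=0))  # ~= upstream on average, but high per-unit variance
```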

I can't immediately see a reason why a goal-varying scheme could penalize connections but I wonder if this is in fact just another way of enforcing the same process.

Oh damn, I meant to write more about goal-varying but forgot to. I should post something about that later. For now, though, here are my rough thoughts on the matter:

I don't think goal-varying directly imposes connection costs. Goal-varying selects for adaptability (aka generalization ability) because it constantly makes the model adapt to related goals. Since modularity causes generalization, selecting for generalization selects for modularity.
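
To sketch what I mean by goal-varying (toy linear model, invented goals and hyperparameters, nothing taken from the actual papers):

```python
import numpy as np

rng = np.random.default_rng(0)

# Every `switch_every` steps the target flips between two related goals,
# so the model keeps having to re-adapt to a nearby objective.
w = rng.normal(size=3)
goals = [lambda x: x[0] + x[1],   # goal A
         lambda x: x[0] - x[1]]   # goal B (related: shares the x[0] component)
lr, switch_every = 0.05, 200

for step in range(2000):
    goal = goals[(step // switch_every) % 2]
    x = rng.normal(size=3)
    err = w @ x - goal(x)
    w -= lr * err * x             # plain SGD step toward the current goal

# The weight on x[0] stays useful across both goals; the weight on x[1]
# keeps flip-flopping as the goal switches.
print(w)
```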

Comment by dkirmani on How an alien theory of mind might be unlearnable · 2022-01-03T12:37:39.489Z · LW · GW

Asking people to state what their revealed preferences are is a fool's game. Brains are built to deceive themselves about their preferences; even if someone was trying to be totally honest with you, they would still mislead you. If I wanted to figure out the preferences of an alien race, I wouldn't try to initiate political or philosophical conversations. I would try to trade with them.

If I could only observe the aliens, I would try to figure out how they decide where to spend their energy. Whether the aliens hunt prey, grow crops, run solar farms, or maintain a Dyson swarm, they must gather energy in some fashion. Energy is and always will be a scarce resource, so building predictive models of alien energy allocation policy will reveal information about their preferences.

Looking at humans from an alien perspective:

  • Notice that humans allocate energy by swapping currency for a proportional amount of energy
  • Notice that there is a system by which humans swap currency for a wide variety of other things
  • Build a causal model of this system
  • In doing so, model the valuation structures of the human cortex

Comment by dkirmani on Self-Organised Neural Networks: A simple, natural and efficient way to intelligence · 2022-01-03T08:59:41.952Z · LW · GW

MIRI should have a form you can fill out (or an email address) specifically for people who think they've advanced AI and are worried about ending the world with it. MIRI should also have Cold-War-style hotlines to OpenAI, DeepMind, and the other major actors.

Comment by dkirmani on Regularization Causes Modularity Causes Generalization · 2022-01-02T07:49:21.945Z · LW · GW

Thank you!

Yeah, that passage doesn't effectively communicate what I was getting at. (Edit: I modified the post so that it now actually introduces the relevant quote instead of dumping it directly into the reader's visual field.) I was gesturing at the quote from Design Principles of Biological Circuits that says that if you evolve an initially modular network towards a fixed goal (without dropout/regularization), the network sacrifices its existing modularity to eke out a bit more performance. I was also trying to convey that the dropout rate sets the specialization/redundancy tradeoff.

So yeah, a lack of dropout would lead to "lots and lots of modules, each focused on some very narrow task", if it weren't for the fact that not having dropout would also blur the boundaries between those modules by allowing the optimizer to make more connections that break modularity and increase fitness. Not having dropout would allow more of these connections because there would be no pressure for redundancy, which means less pressure for modularity. I hope that's a more competent explanation of the point I was trying to make.

Comment by dkirmani on The Plan · 2022-01-02T00:58:09.931Z · LW · GW

It's hard to articulate exactly why, but I feel like "utility-maximizing agent(s)" is not the right frame to think about AI in. You can fit a utility function to any sequence of 'actions' an 'agent' makes, so the abstraction "utility function" has no real power to predict the 'actions' of an 'agent'. There's also the fundamental human bias of ascribing agency to non-agentic systems (the weather, printers).
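
To illustrate the "you can always fit a utility function" point with a toy example (all states and actions invented):

```python
# Post-hoc fit a utility function that "explains" an arbitrary observed
# action sequence.
observed = [("rain", "stay_in"), ("sun", "stay_in"), ("snow", "dance")]
options = ["stay_in", "go_out", "dance"]

def fitted_utility(state, action):
    # Utility 1 for exactly the choices that were observed, 0 for everything else.
    return 1.0 if (state, action) in observed else 0.0

# Under this function the "agent" was a perfect utility maximizer all along,
# which is why the fit, on its own, predicts nothing about its next action.
for state, action in observed:
    assert max(options, key=lambda a: fitted_utility(state, a)) == action
```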

Comment by dkirmani on No Really, Why Aren't Rationalists Winning? · 2021-12-31T04:41:24.261Z · LW · GW

Psychohistorian (Eitan Pechenick, Academia)

Holy shit. Psychohistorian taught my AP Calc BC class. I am in shock.

Comment by dkirmani on What is a probabilistic physical theory? · 2021-12-26T01:36:12.560Z · LW · GW

What do we mean when we say that we have a probabilistic theory of some phenomenon?

If you have a probabilistic theory of a phenomenon, you have a probability distribution whose domain, or sample space, is the set of all possible observations of that phenomenon.
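
A trivial made-up example: a "theory" of a six-sided die is just an assignment of probabilities to the six possible observations.

```python
# Two competing probabilistic theories over the same sample space {1,...,6}.
fair_die   = {face: 1 / 6 for face in range(1, 7)}
loaded_die = {face: (0.25 if face == 6 else 0.15) for face in range(1, 7)}

# Each theory is a distribution over all possible observations of the phenomenon.
assert abs(sum(fair_die.values()) - 1.0) < 1e-9
assert abs(sum(loaded_die.values()) - 1.0) < 1e-9
```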

Comment by dkirmani on What is a probabilistic physical theory? · 2021-12-26T01:17:20.757Z · LW · GW

The alternative is to adopt a Bayesian approach, in which case the function of a probabilistic theory becomes purely normative - it informs us about how some agent with a given expected utility should act.

Not sure I buy this assertion. A Bayesian approach tells you how to update the plausibilities of various competing {propositions/hypotheses/probabilistic theories}. Sure, you could then use those plausibilities to select an action that maximizes the expectation of some utility function. But that isn't what Bayes' rule is about.
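
A minimal sketch of what I mean by "updating plausibilities", with invented priors and likelihoods:

```python
# Two competing hypotheses about a coin; the data is 8 heads in a row.
priors = {"fair_coin": 0.5, "biased_coin": 0.5}
likelihood = {"fair_coin": 0.5 ** 8,     # P(8 heads | fair)
              "biased_coin": 0.9 ** 8}   # P(8 heads | 90%-heads coin)

# Bayes' rule: posterior ∝ prior × likelihood.
evidence = sum(priors[h] * likelihood[h] for h in priors)
posteriors = {h: priors[h] * likelihood[h] / evidence for h in priors}

# Plausibilities get updated; no utility function or action appears anywhere.
print(posteriors)
```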

Comment by dkirmani on [Book Review] "The Most Powerful Idea in the World" by William Rosen · 2021-12-23T12:47:27.600Z · LW · GW

The early steampunks did not have statistical mechanics in their toolbox. They built their machines first. The science came afterward. The earliest steam engines were extremely inefficient compared to the earliest atomic bombs because they were developed mostly through trial-and-error instead of computing the optimal design mathematically from first principles.

Of course they didn't have statistical mechanics in their toolbox! Gears-level models are capital investments, and you should only invest in things that might be valuable. You don't know that steam engines have value until you see them doing useful things, like pumping water out of coal mines. You don't know that streamlining steam engines has value until Matthew Boulton works out that it does. Only then do you do the legwork of iterated experimentation and modeling. The original steam engine was invented in the classical era, and it went nowhere because people used it solely as a party trick.

From Antifragile by Taleb:

One can make a list of medications that came Black Swan–style from serendipity and compare it to the list of medications that came from design. I was about to embark on such a list until I realized that the notable exceptions, that is, drugs that were discovered in a teleological manner, are too few—mostly AZT, AIDS drugs.

Practice precedes theory, with rare exceptions like the Manhattan Project. In an internship, I was semirandomly tinkering with my project, and found that applying [REDACTED] to the input data led to a significant reduction in loss. In my written report and PowerPoint, I drew analogies between my methods and the human [REDACTED], implying that I was inspired by my theoretical knowledge of biology, instead of just happening upon [REDACTED] by chance. That was one of the few times I've done Actual Technological Innovation, and it wouldn't surprise me at all if most tech progress worked the same way: trial-and-error, then theoretical explanation.

Comment by dkirmani on On characterizing heavy-tailedness · 2021-12-19T05:20:31.007Z · LW · GW

For example, a definition of heavy-tailedness formalized as leptokurtic distributions is unsatisfying in this sense - there is no meaningful way to talk about right and left leptokurtosis.

That's what skewness is for.
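
A quick scipy illustration with made-up samples: excess kurtosis is blind to which tail is heavy, while skewness carries the sign.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

right_tailed = rng.lognormal(size=100_000)   # long right tail
left_tailed = -right_tailed                  # mirror image: long left tail

# Kurtosis is built from an even central moment, so it can't tell them apart...
print(stats.kurtosis(right_tailed), stats.kurtosis(left_tailed))  # equal
# ...but skewness distinguishes right-heavy from left-heavy.
print(stats.skew(right_tailed), stats.skew(left_tailed))          # positive vs. negative
```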

Comment by dkirmani on Ordering yourself around with an app · 2021-12-07T06:45:16.393Z · LW · GW

I very much relate to the cycle of adopting productivity systems and then developing antibodies to them. I'm glad you found a self-hack that works! However, I'm pretty disagreeable by nature, and loathe being told what to do. Maybe I should get a planner that tells me not to do the laundry, or else.

Comment by dkirmani on Second-order selection against the immortal · 2021-12-04T18:45:05.854Z · LW · GW

This is what psychedelics do, especially high doses.

Comment by dkirmani on You are way more fallible than you think · 2021-11-25T14:36:00.505Z · LW · GW

This is one of the main themes of Nassim Taleb's books. You can't really predict the future and you especially can't predict improbable things, so minimize left-tail risk, maximize your exposure to right-tail events, and hope for the best.

Comment by dkirmani on The Maker of MIND · 2021-11-21T21:53:29.335Z · LW · GW

No, I just thought about it some more, and I realized that increasing the learning rate of a model (assuming the optimizer is something like SGD) would inject more randomness, just like increasing the temperature of simulated annealing would.
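
Rough toy illustration of the analogy (1-D quadratic objective, noisy gradients, made-up numbers):

```python
import numpy as np

rng = np.random.default_rng(0)

def sgd_wander(lr, steps=5000, grad_noise=1.0):
    """Run noisy SGD on f(x) = x**2 and measure how widely the iterate
    wanders around the optimum once it has settled in."""
    x, xs = 0.0, []
    for _ in range(steps):
        noisy_grad = 2 * x + grad_noise * rng.normal()  # true gradient plus noise
        x -= lr * noisy_grad
        xs.append(x)
    return np.std(xs[steps // 2:])

# A larger learning rate scales up the random kick per step, much as a higher
# temperature scales up the random moves accepted in simulated annealing.
print(sgd_wander(lr=0.01), sgd_wander(lr=0.2))  # larger lr -> larger spread
```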

Comment by dkirmani on The Maker of MIND · 2021-11-21T03:16:12.279Z · LW · GW

I really cannot say what that means, but I am also told "learning rate" is itself something of a misnomer and involves as much forgetting as learning.

Maybe temperature is a better word.

Comment by dkirmani on Why I am no longer driven · 2021-11-17T01:45:33.613Z · LW · GW

I used to be very driven. I'm talking "wake up at 4 AM, go run with my dog, take a cold shower, study for two hours, go work, meditate, cook, go to MMA practice, read a book before bed" driven.

That sounds sick. Also, potentially very useful, since akrasia is a bottleneck. Any tips on getting yourself in the state where you do that regularly, besides watching DragonBall Z?

Comment by dkirmani on Taking a simplified model · 2021-11-17T01:39:41.837Z · LW · GW

It's probably because it's much easier to steal from somebody you don't know. When everyone knows everyone, little theft occurs.

Comment by dkirmani on Why I am no longer driven · 2021-11-17T01:36:34.553Z · LW · GW

From Antifragile by Nassim Taleb:

Or, if I have to work, I find it preferable (and less painful) to work intensely for very short hours, then do nothing for the rest of the time (assuming doing nothing is really doing nothing), until I recover completely and look forward to a repetition, rather than being subjected to the tedium of Japanese style low-intensity interminable office hours with sleep deprivation. Main course and dessert are separate.

Indeed, Georges Simenon, one of the most prolific writers of the twentieth century, only wrote sixty days a year, with three hundred days spent “doing nothing.” He published more than two hundred novels.

Comment by dkirmani on Taking a simplified model · 2021-11-17T01:07:04.053Z · LW · GW

One thing is that it's much harder to blatantly steal from the commons in sub-Dunbar groups, because everyone knows everyone else, so formal norm-enforcement (police, RAs) is unnecessary. Social sanctions suffice. Despite students having high variance in family income, property theft was a non-issue. In high school, I could save myself one of the good seats in the library by leaving my laptop there, but if I did the same thing here in the engineering library (I go to UIUC, a large state college), my laptop would likely be taken within minutes. There is an asabiyyah in small groups that does not exist for larger ones.

Comment by dkirmani on Taking a simplified model · 2021-11-16T22:46:37.691Z · LW · GW

e.g. imagining a society of only ten people

Societies significantly above Dunbar's number have fundamentally different dynamics than those below it. I have lived in both, having attended a boarding high school in a remote location with a population of 120. I think a lot of suffering and inefficiency in modern society is caused by trying to apply sub-Dunbar logic to super-Dunbar groups.

Comment by dkirmani on Why Save The Drowning Child: Ethics Vs Theory · 2021-11-16T22:39:06.068Z · LW · GW

Newton's theories give you a good way to predict what you'll see when you throw a ball in the air, but it feels incorrect to me to say that Newton's goal was to find order in our sensory experience of ball throwing.

I like this framing! The entire point of having a theory is to predict experimental data, and the only way I can collect data is through my senses.

Do you think that there are in fact ordered moral laws that we're subject to, which our impulses respond to, and which we're trying to hone in on?

You could construct predictive models of people's moral impulses. I wouldn't call these models laws, though.

Comment by dkirmani on Why Save The Drowning Child: Ethics Vs Theory · 2021-11-16T22:09:52.461Z · LW · GW

Clearly our moral standards are informed by our society, and in no small part those standards emerge from discussions about what we collectively would like those standards to be, and not just a genetically hardwired disloyalty sensor.

Yes, these discussions set / update group norms. Perceived defection from group norms triggers the genetically hardwired disloyalty sensor.

In pressured environments we act on instinct, but those instincts don't exist in a vacuum

Right, System 1 contains adaptations optimized to signal adherence to group norms.

the societal project of working out what [people's instincts] ought to be is quite important and pretty hard

The societal project of working out what norms other people should adhere to is known as "politics", and lots of people would agree that it's important.

Comment by dkirmani on Why Save The Drowning Child: Ethics Vs Theory · 2021-11-16T20:35:54.217Z · LW · GW

At once, they all cease their arguing and leap in to save the child. But why?

Because all three of their System 1s executed adaptations that made a decision to save the child. They all have these adaptations because System 1s without this kind of adaptation would have lower fitness, because other members of the tribe would see them as untrustworthy potential allies and disloyal potential mates.

Their System 2s then make three different post-hoc rationalizations for their decision, and pretend like these rationalizations are the reasons that led them to jump in the pond.

Comment by dkirmani on Televised sports exist to gamble with testosterone levels using prediction skill · 2021-11-15T05:44:40.562Z · LW · GW

Wait, so can you raise your testosterone (and get the associated health / anti-akrasia benefits) by cheating on online FPSs? What about playing ultraviolent games on easy mode, like one of the newer DOOM games?

Comment by dkirmani on Education on My Homeworld · 2021-11-15T04:25:12.586Z · LW · GW

This is great! I've had a recurring daydream (mostly during boring classes, ha) about how I would want my kids to be educated if I ever get to be a father. It's pretty much identical to the system you describe on your homeworld.

In the same vein, I made a post on Hacker News a few days ago in order to gather information on the prospects I would face if I dropped out of college. I'm probably not going to (yet), but I'm still reminded of the corrosive effects of compulsory education on a daily basis.

Comment by dkirmani on What would we do if alignment were futile? · 2021-11-15T03:18:03.942Z · LW · GW

If anything, humanity is an excellent example of alignment failure considering we have discovered the true utility function of our creator and decided to ignore it anyway and side with proxy values such as love/empathy/curiosity etc.

Or we are waiting to be outbred by those who didn't. A few centuries ago, the vast majority of people were herders or farmers who had as many kids as they could feed. Their actions were aligned with maximization of their inclusive genetic fitness. We are the exception, not the rule.

Comment by dkirmani on What would we do if alignment were futile? · 2021-11-15T02:51:00.631Z · LW · GW

Yeah. On the off chance that the CIA actually does run the government from the shadows, I really hope some of them lurk on LessWrong.

Comment by dkirmani on What would we do if alignment were futile? · 2021-11-14T17:16:35.559Z · LW · GW

Serious idea: lobby for the full legalization of psychedelics in certain AI-research-heavy polities (California, Washington, Massachusetts) so that alignment researchers can have novel ideas at a higher rate.

Comment by dkirmani on What would we do if alignment were futile? · 2021-11-14T17:12:26.820Z · LW · GW

I think we may have different terminal values. I would much rather live out my life in a technologically stagnant world than be eaten by a machine that comes up with interesting mathematical proofs all day.