Posts

Classification of AI alignment research: deconfusion, "good enough" non-superintelligent AI alignment, superintelligent AI alignment 2020-07-14T22:48:04.929Z · score: 34 (12 votes)
Why take notes: what I get from notetaking and my desiderata for notetaking systems 2020-05-29T21:46:10.221Z · score: 9 (4 votes)
Is there software for goal factoring? 2020-02-18T19:55:37.764Z · score: 11 (2 votes)
Hard Problems in Cryptocurrency: Five Years Later - Buterin 2019-11-24T09:38:20.045Z · score: 19 (6 votes)
crabman's Shortform 2019-09-14T12:30:37.482Z · score: 3 (1 votes)
Reneging prosocially by Duncan Sabien 2019-06-18T18:52:46.501Z · score: 59 (16 votes)
How to determine if my sympathetic or my parasympathetic nervous system is currently dominant? 2019-05-31T20:40:30.664Z · score: 20 (8 votes)
AI Safety Prerequisites Course: Revamp and New Lessons 2019-02-03T21:04:16.213Z · score: 33 (11 votes)
Fundamentals of Formalisation Level 7: Equivalence Relations and Orderings 2018-08-10T15:12:46.683Z · score: 9 (3 votes)
Fundamentals of Formalisation Level 6: Turing Machines and the Halting Problem 2018-07-23T09:46:42.076Z · score: 11 (4 votes)
Fundamentals of Formalisation Level 5: Formal Proof 2018-07-09T20:55:04.617Z · score: 15 (3 votes)
Fundamentals of Formalisation Level 4: Formal Semantics Basics 2018-06-16T19:09:16.042Z · score: 15 (3 votes)
Fundamentals of Formalisation Level 3: Set Theoretic Relations and Enumerability 2018-06-09T19:57:20.878Z · score: 20 (5 votes)
Idea: OpenAI Gym environments where the AI is a part of the environment 2018-04-12T22:28:20.758Z · score: 10 (3 votes)

Comments

Comment by crabman on Free Educational and Research Resources · 2020-07-31T11:51:11.733Z · score: 3 (2 votes) · LW · GW

I would appreciate easy-to-see tags on entries useful only for people living in the US. This definitely includes community college enrollment and maybe includes library card, Kanopy, Libby. I've tried to use my Russian library card on Kanopy, and it wasn't recognized.

Comment by crabman on avturchin's Shortform · 2020-07-27T09:42:01.748Z · score: 1 (1 votes) · LW · GW

You started self-quarantining - by which I mean sitting at home alone and barely going outside - back in December or January. I wonder, how's it going for you? How do you deal with loneliness?

Comment by crabman on Classification of AI alignment research: deconfusion, "good enough" non-superintelligent AI alignment, superintelligent AI alignment · 2020-07-19T16:57:38.185Z · score: 3 (2 votes) · LW · GW

No, I was talking about an almost omnipotent AI, not necessarily an aligned one. I've now fixed the wording there.

Comment by crabman on Classification of AI alignment research: deconfusion, "good enough" non-superintelligent AI alignment, superintelligent AI alignment · 2020-07-15T23:53:23.579Z · score: 5 (3 votes) · LW · GW

I see MIRI's research on agent foundations (including embedded agency) as something like: "We want to understand ${an aspect of how agents should work}, so let's take the simplest case first and see if we understand everything about it. The simplest case is when the agent is nearly omniscient and knows all logical consequences. Hmm, we can't figure out even this simplest case yet - it breaks down if the conditions are sufficiently weird." Since it turns out to be difficult to understand embedded agency even in such simple cases, it seems plausible that an AI trained to understand embedded agency by a naive learning procedure (similar to evolution) will break down under sufficiently weird conditions.

Why don't these arguments apply to humans? Evolution didn't understand embedded agency, but managed to create humans who seem to do okay at being embedded agents.

(I buy this as an argument that an AI system needs to not ignore the fact that it is embedded, but I don't buy it as an argument that we need to be deconfused about embedded agency.)

Hmm, very good argument. Since I think humans have an imperfect understanding of embedded agency, thanks to you I no longer think that "If we build an AI without understanding embedded agency, and that AI builds a new AI, that new AI also won't understand embedded agency", since that would imply we can't get the "lived happily ever after" at all. We can ignore the case where we can't get the "lived happily ever after" at all, because in that case nothing matters anyway.

I suppose we could run evolutionary search or something, selecting for AIs which understand the typical cases of being modified by themselves or by the environment - the cases we include in the training dataset. I wonder how we can make such an AI understand very atypical cases of modification. A near-omnipotent AI will be a very atypical case.

Can we come up with a learning procedure to have the AI learn embedded agency on its own? It seems plausible to me that we will need to understand embedded agency better to do this, but I don't really know.

Btw, in another comment, you say

But usually when LessWrongers argue against "good enough" alignment, they're arguing against alignment methods, saying that "nothing except proofs" will work, because only proofs give near-100% confidence.

I basically subscribe to the argument that nothing except proofs will work in the case of superintelligent agentic AI.

Comment by crabman on Classification of AI alignment research: deconfusion, "good enough" non-superintelligent AI alignment, superintelligent AI alignment · 2020-07-15T19:26:57.277Z · score: 5 (3 votes) · LW · GW

Here are my responses to your comments, sorted by how interesting they are to me, descending. Also, thanks for your input!

Non-omnipotent AI aligning omnipotent AI

The AI will be making important decisions long before it becomes near-omnipotent, as you put it. In particular, it should be doing all the work of aligning future AI systems well before it is near-omnipotent.

Please elaborate. I can imagine multiple versions of what you're imagining. Is one of the following scenarios close to what you mean?

  1. Scientists use AI-based theorem provers to prove theorems about AI alignment.
  2. There's an AI with which you can have conversations. It tries to come up with new mathematical definitions and theorems related to what you're discussing.
  3. The AI (or multiple AIs) is not near-omnipotent yet, but it already controls most of humanity's resources and makes most of the decisions, so it does AI research instead of humans doing it.

I think the requirements for how well the non-omnipotent AI in the 3rd scenario should be aligned are basically the same as for a near-omnipotent AI. If the non-omnipotent AI in the 3rd scenario is very misaligned - just not catastrophically so, because it's not smart enough - then the near-omnipotent AI it'll create will also be misaligned, and that will be catastrophic.

Embedded agency

Note though it's quite possible that some things we're confused about are also simply irrelevant to the thing we care about. (I would claim this of embedded agency with not much confidence.)

So, you think embedded agency research is unimportant for AI alignment. By contrast, I think it's very important, mainly for the following reasons. Suppose we don't figure out embedded agency. Then

  • An AI won't be able to safely self-modify
  • An AI won't be able to comprehend that it can be killed or damaged or modified by others
  • I am not sure about this one, and I am very interested to know if it's not the case. I think that if we build an AI without understanding embedded agency, and that AI builds a new AI, that new AI also won't understand embedded agency. In other words, the set of AIs built without taking embedded agency into account is closed under the operation of an AI building a new AI. [Upd: comments under this comment mostly refute this]
  • I am even less sure about this item, but maybe such an AI will be too dogmatic (as in a dogmatic prior) about how the world might work, because it is sure that it can't be killed or damaged or modified. Due to this, if the laws of physics turn out to be weird (e.g. we live in a multiverse, or we're in a simulation), the AI might fail to understand that and thus fail to turn the whole world into hedonium (or whatever it is that we would want it to do with the world).
  • If an AI built without taking embedded agency into account meets very smart aliens someday, it might fuck up due to its inability to imagine that someone can predict its actions.

Usefulness of type-2 research for aligning superintelligent AI

Unless your argument is that type 2 research will be of literally zero use for aligning superintelligent AI.

I think that if one man-year of type-1 research produces 1 unit of superintelligent AI alignment, one man-year of type-2 research produces about 0.15 units of superintelligent AI alignment.

As I see it, the mechanisms by which type-2 research helps align superintelligent AI are:

  • It may produce useful empirical data which'll help us make type-1 theoretical insights.
  • Thinking about type-2 research contains a small portion of type-1 thinking.

For example, if someone works on making contemporary neural networks robust to out-of-distribution examples, and they do that mainly by experimenting, their experimental data might provide insights about the nature of robustness in abstract, and also, surely some portion of their thinking will be dedicated to theory of robustness.

My views on tractability and neglectedness

Tractability and neglectedness matter too.

Alright, I agree with you about tractability.

About neglectedness, I think type-2 research is less neglected than type-1 and type-3 and will be less neglected in the next 10 years or so, because

  • It's practical, you can sell it to companies which want to make robots or unbreakable face detection or whatever.
  • Humans have a bias towards near-term thinking.
  • Neural networks are a hot topic.

Comment by crabman on Spoiler-Free Review: Witcher 3: Wild Hunt (plus a Spoilerific section) · 2020-07-05T10:42:33.055Z · score: 1 (1 votes) · LW · GW

I finished The Witcher 3 two days ago, and here you are posting your review just in time. Nice!

Another thing I would add to the list of the best things about The Witcher 3, although it might be irrelevant for native English speakers:

The game feels Slavic rather than Western. I love it! In the Russian version of the game, the peasants talk in a funny way, which shows that they really are uneducated and mostly stupid. They use funny figures of speech, modified words, etc. Everyone swears a lot, using words which are on the more insulting end. I love it! I can believe that a few hundred years ago, peasants in Slavic countries actually talked like this. I would guess they also talk like this in the Polish version, but I don't know about the English version.

And for the bad things, I would add:

Movement and combat feel extremely clunky. Some people recommend setting movement response time to "alternative" in the options. I did it, but it still feels clunky. It's like instead of using a proper videogame engine with good physics, they took a very old engine, in which jumping and climbing are not first-class citizens, the locations are supposed to be mostly flat, and you are expected to move around very slowly (like in Neverwinter Nights or Baldur's Gate). I am probably biased here, because right before The Witcher 3 I played The Legend of Zelda: Breath of the Wild, which has the BEST feeling of moving around EVER - you can climb anything (not just jump between specially designated protrusions like in old Assassin's Creed games), jump on anything, fly around, aim your bow while jumping, and the controls during combat and out of combat are basically the same. Because of this, consider playing BotW as your next game.

Btw, have you read The Witcher or have you watched The Witcher? I've read all the books twice, and game Geralt is very similar to book Geralt, so that's another thing I liked.

Comment by crabman on What's the most easy, fast, efficient way to create and maintain a personal Blog? · 2020-07-02T14:43:04.264Z · score: 2 (2 votes) · LW · GW

A GitHub repo with posts as Markdown or org-mode files - https://github.com/ChALkeR/notes is an example. Post links to your posts on LessWrong/Reddit/wherever if you want people to discuss them.

Comment by crabman on crabman's Shortform · 2020-06-28T21:28:52.084Z · score: 1 (1 votes) · LW · GW

I've added 2 examples.

Comment by crabman on crabman's Shortform · 2020-06-26T20:07:38.297Z · score: 5 (3 votes) · LW · GW

Often in psychology articles I see phrases like "X is associated with Y". These articles' sections often read like the author thinks that X causes Y. But if they had evidence that X causes Y, surely they would've written exactly that. And in such cases I feel that I want to punish them, so in my mind I instead read it as "Y causes X", just for contrarianism's sake. Or, sometimes, I imagine what variable Z can exist which causes both X and Y. I think the latter is a useful exercise.

Examples:

It appears that some types of humor are more effective than others in reducing stress. Chen and Martin (2007) found that humor that is affiliative (used to engage or amuse others) or self-enhancing (maintaining a humorous perspective in the face of adversity) is related to better mental health. In contrast, coping through humor that is self-defeating (used at one’s own expense) or aggressive (criticizing or ridiculing others) is related to poorer mental health.

The author says that non-self-defeating, non-aggressive humor helps reduce stress. But notice the word "related". For the first "related", it seems plausible that not having good mental health causes you to lose humor. For the second "related", I think it's very probable that poor mental health, such as depression and low self-esteem, causes self-defeating humor.

How does humor help reduce the effects of stress and promote wellness? Several explanations have been proposed (see Figure 4.7). One possibility is that humor affects appraisals of stressful events. Jokes can help people put a less threatening spin on their trials and tribulations. Kuiper, Martin, and Olinger (1993) demonstrated that students who used coping humor were able to appraise a stressful exam as a positive challenge, which in turn lowered their perceived stress levels.

Or it could be that students who are well prepared for the exams, or who simply tend not to be afraid of them, will obviously have lower perceived stress levels, and maybe will be able to think about the exams as a positive challenge - hence they're able to joke about them in this way.

It's possible in this example that the original paper (Kuiper, Martin, and Olinger, 1993) actually did an intervention making students use humor, in which case the causality must go from humor to stress reduction. But I don't want to look at every source, so screw you, author of Psychology Applied to Modern Life (both quotes are from it), for not making it clear whether that study found causation or only correlation.
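
A minimal simulation of that "common cause Z" reading (the variable meanings and effect sizes here are made up for illustration):

import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Z (say, underlying mental health) causes both X (humor style) and Y (measured well-being).
z = rng.normal(size=n)
x = z + rng.normal(size=n)
y = z + rng.normal(size=n)

# X and Y come out strongly "associated" even though neither causes the other.
print(np.corrcoef(x, y)[0, 1])  # ≈ 0.5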

Comment by crabman on FactorialCode's Shortform · 2020-06-23T20:39:58.193Z · score: 2 (2 votes) · LW · GW

What do you mean "approve a new user"? AFAIK, registration is totally free.

Comment by crabman on Iterated Distillation and Amplification · 2020-06-21T19:00:04.705Z · score: 1 (1 votes) · LW · GW

I think there are 2 mistakes in the pseudocode.

First mistake

what rmoehn said.

Second mistake

In the personal assistant example you say

In the next iteration of training, the Amplify(H, A[0]) system takes over the role of H as the overseer.

which implies that we do

H <- Amplify(H, A)

But in the pseudocode the original human overseer acts as the overseer all the time.

Suggested change of the pseudocode, which fixes both mistakes

def IDA(H):
   repeat:
      A ← Distill(H)
      H ← Amplify(H, A)
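
For concreteness, here is a minimal runnable sketch of that corrected loop in Python. Distill and Amplify are stand-in stubs I made up; the real ones are the learned distillation step and the human-plus-AI amplification step.

def distill(overseer):
    # Stub: train a fast agent that imitates the (slow) overseer.
    return lambda task: overseer(task)

def amplify(human, agent):
    # Stub: the next overseer is the human assisted by the current agent.
    return lambda task: human(task) + agent(task)

def ida(human, n_rounds=3):
    overseer = human
    for _ in range(n_rounds):
        agent = distill(overseer)          # A ← Distill(H)
        overseer = amplify(human, agent)   # H ← Amplify(H, A): the amplified system takes over as overseer
    return agent

# Toy usage: the "human" just scores a task, and amplification adds the agent's score.
print(ida(lambda task: len(task))("some task"))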

Comment by crabman on Where to Start Research? · 2020-06-16T20:28:11.642Z · score: -1 (6 votes) · LW · GW

I think epistemic spot checks prevent building gears-level models. And so does reading only small parts of books. The reasons why I think so are obvious. What's your take on this problem?

Comment by crabman on Does taking extreme measures to avoid the coronavirus make sense when you factor in the possibility of a really long life? · 2020-06-05T11:10:08.037Z · score: 0 (2 votes) · LW · GW

I think your value of your life is too high, since you almost certainly can't earn nearly that much during your life. Let's say you'll be getting 150k per year for 40 years. Then in total you'll earn only 6 million.

Comment by crabman on cousin_it's Shortform · 2020-06-03T23:33:24.218Z · score: 1 (1 votes) · LW · GW

Do you by any chance have a typo here? Sorry if I am wrong, since I don't actually know quantum information theory.

A pure state, like ( |00> + |11> ) / √2, is a vector in that space.

I think this state is mixed, since it's a sum of two vectors, which can't be represented as just one Kronecker product.

Comment by crabman on cousin_it's Shortform · 2020-06-03T23:26:06.327Z · score: 1 (1 votes) · LW · GW

Hey, I've got a sudden question for you. A probability distribution on a set of binary variables is to a quantum state as ??? is to a unitary linear operator.

What should ??? be replaced with?

Here's why I have this question. Somehow I was thinking about normalizing flows, which are invertible functions that, when applied to a sample from an N-dimensional probability distribution, transform it into a sample from another N-dimensional probability distribution. And then I thought: isn't this similar to how a quantum operator is always unitary? Maybe then I can combine encoding an image as a pure state (like in Stoudenmire 2016, "Supervised learning with quantum-inspired tensor networks") with representing quantum operators as tensor networks, to get a quantum-inspired generative model similar to normalizing flows.
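
For what it's worth, here is a minimal PyTorch sketch of the ingredient the analogy rests on: an invertible map pushing one distribution onto another via the change-of-variables rule that normalizing flows are built on (the exp/log transform is just my illustrative choice).

import torch

base = torch.distributions.Normal(0.0, 1.0)

# Invertible transform f(z) = exp(z); its inverse is log, with d(log y)/dy = 1/y.
y = torch.tensor([0.5, 1.0, 2.0, 3.0])
log_p = base.log_prob(torch.log(y)) - torch.log(y)  # log p_base(f^{-1}(y)) + log |d f^{-1}/dy|

# PyTorch's built-in LogNormal gives the same densities, confirming the formula.
print(torch.allclose(log_p, torch.distributions.LogNormal(0.0, 1.0).log_prob(y)))  # True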

Comment by crabman on The Zettelkasten Method · 2020-05-21T00:10:35.491Z · score: 3 (2 votes) · LW · GW

A question about your "don't sort often" advice. How do you deal with linking unsorted cards?

  1. At first, you create a card and put it in the unsorted pile of cards, and you don't give it an index. Is this correct? Or do you give the card an index, add some links, and then put it back into the unsorted pile of cards?
  2. At some point (which per your suggestion should not be too soon) you give it an index and put it in the sorted part. Do you only think of links at this point?

Comment by crabman on Mark Xu's Shortform · 2020-05-20T08:37:18.934Z · score: 5 (4 votes) · LW · GW

There are a bunch of explanations of logarithm as length on Arbital.

Comment by crabman on Small Data · 2020-05-14T08:49:24.793Z · score: 5 (3 votes) · LW · GW

“big data” refers to situations with so much training data you can get away with weak priors. The most powerful recent advances in machine learning, such as neural networks, all use big data.

This is only partially true. Consider some image classification dataset, say MNIST or CIFAR10 or ImageNet. Consider some convolutional relu network architecture, say conv2d -> relu -> conv2d -> relu -> conv2d -> relu -> conv2d -> relu -> fullyconnected, with some chosen kernel sizes and numbers of channels. Consider some configuration w of its weights. Now consider the multilayer perceptron architecture fullyconnected -> relu -> fullyconnected -> relu -> fullyconnected -> relu -> fullyconnected -> relu -> fullyconnected. Clearly, there exist hyperparameters of the multilayer perceptron (numbers of neurons in hidden layers) such that there exists a configuration w′ of the multilayer perceptron's weights for which the function implemented by the multilayer perceptron with w′ is the same function as the function implemented by the convolutional architecture with w. Therefore, the space of functions which can be implemented by the convolutional neural network (with fixed kernel sizes and channel counts) is a subset of the space of functions which can be implemented by the multilayer perceptron (with correctly chosen numbers of neurons). Therefore, training the convolutional relu network is updating on evidence while having a relatively strong prior, while training the multilayer perceptron is updating on evidence while having a relatively weak prior.

Experimentally, if you train the networks described above, the convolutional relu network will learn to classify images well, or at least okay-ish. The multilayer perceptron will not learn to classify images well; its accuracy will be much worse. Therefore, the data is not enough to wash away the multilayer perceptron's prior, hence by your definition it can't be called big data. Here I must note that ImageNet is the biggest publicly available dataset for training image classification, so if anything is big data, it should be.
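
To make the inclusion concrete, here is a small PyTorch sketch (toy sizes of my own choosing) that materializes a bias-free conv layer as an equivalent fully connected layer - exactly the sense in which the multilayer perceptron's function space contains the convolutional network's:

import torch
import torch.nn as nn

torch.manual_seed(0)

# A conv layer on 1x8x8 inputs with no bias is a linear map from R^64 to R^72.
conv = nn.Conv2d(1, 2, kernel_size=3, bias=False)

n_in = 1 * 8 * 8
basis = torch.eye(n_in).reshape(n_in, 1, 8, 8)  # basis vectors viewed as images
with torch.no_grad():
    cols = conv(basis).reshape(n_in, -1)        # row i = conv applied to the i-th basis vector
    dense = nn.Linear(n_in, cols.shape[1], bias=False)
    dense.weight.copy_(cols.T)                  # the equivalent fully connected weight matrix

    x = torch.randn(5, 1, 8, 8)
    same = torch.allclose(conv(x).reshape(5, -1), dense(x.reshape(5, -1)), atol=1e-5)
print(same)  # True: the fully connected layer reproduces the conv layer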

--

Big data uses weak priors. Correcting for bias is a prior. Big data approaches to machine learning therefore have no built-in method of correcting for bias.

This looks like a formal argument - a demonstration, or dialectics as Bacon would call it - which uses shabby definitions. I disagree with the conclusion, i.e. with the statement "modern machine learning approaches have no built-in method of correcting for bias". I think in modern machine learning people are experimenting with various inductive biases and various ad-hoc fixes or techniques which help correct for all kinds of biases.

--

In your example with a non-converging sequence, I think you have a typo - there should be rather than .

Comment by crabman on Legends of Runeterra: Early Review · 2020-05-13T17:38:12.305Z · score: 1 (1 votes) · LW · GW

Nice review. I like CCGs in general, but I haven't heard about Legends of Runeterra and thanks to your review I decided not to play it.

Regarding Emergents, what platforms will it be on and can I be an alpha/beta tester?

Comment by crabman on crabman's Shortform · 2020-05-10T06:29:45.574Z · score: 3 (2 votes) · LW · GW

How to download the documentation of a programming library for offline use.

  1. On the documentation website, look for a "downloads" section. Preferably choose the HTML format, because then it will be nicely searchable - I can even create a krunner web shortcut for searching it. Example: NumPy - find "HTML+zip".
  2. If you need pytorch, torchvision, or sklearn - simply download https://github.com/unknownue/PyTorch.docs.
  3. If you need documentation hosted on https://readthedocs.io: in the bottom left press "Read the docs" and choose a download type from "Downloads". The search field won't work in the HTML version, so feel free to download whatever format you like. Example: Elpy. Warning: for some libraries (e.g. more-itertools) the downloaded version is basically broken, so you should check whether what you've downloaded is complete.
  4. In some weird cases the ReadTheDocs documentation for the latest version of a library might be unlisted in the downloads section of ReadTheDocs. For example, if you click the readthedocs icon in the bottom right of https://click.palletsprojects.com/en/master/, you won't find a download link for version 8.0. In this case copy the hyperlink https://media.readthedocs.org/pdf/pallets-click/latest/pallets-click.pdf or https://media.readthedocs.org/pdf/pallets-click/stable/pallets-click.pdf and replace pallets-click with the name of the project you want. It doesn't work for all projects, but it works for some.
  5. Use httrack to mirror the documentation website. In my experience it doesn't take long. Do it like $ httrack https://click.palletsprojects.com/en/7.x/. This will download everything hosted under https://click.palletsprojects.com/en/7.x/ and will not go outside of that directory on the server. In this case the search field won't work.

Comment by crabman on Michaël Trazzi's Shortform · 2020-05-10T04:28:16.864Z · score: 2 (2 votes) · LW · GW

Do you have any tips on how to make the downloaded documentation of programming languages and libraries searchable?

Btw here's my shortform on how to download documentations of various libraries: https://www.lesswrong.com/posts/qCrTYSWE2TgfNdLhD/crabman-s-shortform?commentId=Xt9JDKPpRtzQk6WGG

Comment by crabman on The Zettelkasten Method · 2020-05-10T04:16:32.126Z · score: 5 (3 votes) · LW · GW

It turns out Staples index-cards-on-a-ring are not a thing in Russia. It might be the case in other countries as well, so here I am posting my solution which goes in the spirit of Abram's suggestions. A small A6 binder and pages for it on Aliexpress (archived version). In my opinion it looks nice and feels nice, although now I think A6 is too small and I would prefer A5.

Comment by crabman on Named Distributions as Artifacts · 2020-05-04T21:43:28.959Z · score: 1 (1 votes) · LW · GW

Let’s start with the application of the central limit theorem to champagne drinkers. First, there’s the distinction between “liver weights are normally distributed” and “mean of a sample of liver weights is normally distributed”. The latter is much better-justified, since we compute the mean by adding a bunch of (presumably independent) random variables together. And the latter is usually what we actually use in basic analysis of experimental data—e.g. to decide whether there’s a significant different between the champagne-drinking group and the non-champagne-drinking group. That does not require that liver weights themselves be normally distributed.

I think your statement in bold font is wrong. I think in cases such as champagne drinkers vs non-champagne-drinkers, people are likely to use Student's two-sample t-test or Welch's two-sample unequal-variances t-test. These assume that the individual observations within each group are distributed normally, not that the means are distributed normally.
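
For reference, a tiny sketch of the test I have in mind (the group sizes and liver-weight numbers are made up):

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
drinkers = rng.normal(loc=1.55, scale=0.2, size=40)       # hypothetical liver weights, kg
non_drinkers = rng.normal(loc=1.50, scale=0.2, size=40)

# Welch's two-sample t-test (unequal variances). The normality assumption is about
# the observations within each group, not merely about the sample means.
t, p = stats.ttest_ind(drinkers, non_drinkers, equal_var=False)
print(t, p)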

Comment by crabman on crabman's Shortform · 2020-04-29T22:19:12.862Z · score: 1 (1 votes) · LW · GW

Tbh what I want right now is a very weak form of reproducibility. I want the experiments I am doing nowadays to work the same way on my own computer every time. That works for me so far.

Comment by crabman on crabman's Shortform · 2020-04-29T20:54:16.329Z · score: 3 (2 votes) · LW · GW

It turns out that PyTorch's pseudorandom number generator generates different numbers on different GPUs even if I set the same random seed. Consider the following file, do_different_gpus_randn_the_same.py:

import torch

seed = 0
torch.manual_seed(seed)
torch.backends.cudnn.deterministic = True
torch.backends.cudnn.benchmark = False

foo = torch.randn(500, 500, device="cuda")
print(f"{foo.min():.30f}")
print(f"{foo.max():.30f}")
print(f"{foo.min() / foo.max()=:.30f}")

On my system, I get the following for two runs on two different GPUs:

$ CUDA_VISIBLE_DEVICES=0 python do_different_gpus_randn_the_same.py 
-4.230118274688720703125000000000
4.457311630249023437500000000000
foo.min() / foo.max()=-0.949029088020324707031250000000
$ CUDA_VISIBLE_DEVICES=1 python do_different_gpus_randn_the_same.py 
-4.230118751525878906250000000000
4.377007007598876953125000000000
foo.min() / foo.max()=-0.966440916061401367187500000000

Due to this, I am going to generate all pseudorandom numbers on the CPU and then transfer them to the GPU for reproducibility's sake, like foo = torch.randn(500, 500, device="cpu").to("cuda").
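
A minimal sketch of that workaround wrapped in a helper (the function name and interface are mine):

import torch

def reproducible_randn(*shape, seed, device):
    # Generate on the CPU with a dedicated seeded generator, then move to the target
    # device, so the values do not depend on which GPU the code happens to run on.
    gen = torch.Generator(device="cpu").manual_seed(seed)
    return torch.randn(*shape, generator=gen, device="cpu").to(device)

foo = reproducible_randn(500, 500, seed=0, device="cuda" if torch.cuda.is_available() else "cpu")
print(foo[0, :3])  # identical values no matter which GPU (if any) foo ends up on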

Comment by crabman on 3 Interview-like Algorithm Questions for Programmers · 2020-04-25T12:48:04.236Z · score: 1 (1 votes) · LW · GW

I want to know why the answer to the first question is like that.

Comment by crabman on Mozilla Hubs Virtual Meetup 10:30AM PDT, April 19th · 2020-04-10T18:00:17.300Z · score: 1 (1 votes) · LW · GW

It's 10:30 AM, right?

Comment by crabman on Choosing the Zero Point · 2020-04-08T14:22:58.806Z · score: 2 (2 votes) · LW · GW

I suggest not only shifting the zero point, but also scaling utilities when you update on information about what's achievable and what's not. For example, suppose you thought that saving 1-10 people in poor countries was the best you could do with your life, and you felt like every life saved was +1 utility. But then you learned about longtermism and figured out that if you try, then in expectation you can save a million lives in the far future. In such a situation it doesn't make sense to continue caring about saving an individual life as much as you cared before this insight - your system 1 feeling for how good things can be won't be able to do its epistemological job then. It's better to scale the utility of saving lives down, so that +1 million lives is +10 utility, and +1 life is +1/100000 utility. This is related to Caring less.

However, this advice has a very serious downside - it will make it very difficult to communicate with "normies". If a person thinks saving a life is +1 utility and tells you that there's this opportunity to go and do it, and if you're like "meh, +1/100000 utility", they will see your reaction and think you're weird or heartless or something.

Comment by crabman on Option Value in Effective Altruism: Worldwide Online Meetup · 2020-04-04T11:35:13.992Z · score: 2 (2 votes) · LW · GW

Will it be in English?

Comment by crabman on Is there software for goal factoring? · 2020-03-31T21:57:10.444Z · score: 3 (2 votes) · LW · GW

Thanks to your advice, I've tried it for goal factoring and for drawing various diagrams. It's great! (by which I mean it's less awful than other software)

Comment by crabman on The Zettelkasten Method · 2020-03-18T00:04:19.406Z · score: 10 (7 votes) · LW · GW

Failure mode: perfectionism

After creating a couple of Zettelkasten pages in Roam and rereading this post, I decided to try it on paper. That was a week ago. I still haven't created a single page. Aaaaah. You can't change things on paper, so it must be PERFECT. And if it's not perfect, then it's a working memory dump which shouldn't be in the Zettelkasten in the first place. During this week I filled perhaps 15 A4 pages of my working notebook, but none of it was good enough for the Zettelkasten. And when some of it was good enough, I used it to write a long answer on Stack Overflow. And after having done that, why would I also write it on paper? Yeah, perhaps paper Zettelkasten is not for me.

Comment by crabman on Why don't singularitarians bet on the creation of AGI by buying stocks? · 2020-03-13T09:22:43.452Z · score: 2 (2 votes) · LW · GW

Do you mind sharing your list of stocks which belong to companies with a nontrivial probability of creating AGI? Also, why Uber?

Comment by crabman on Why don't singularitarians bet on the creation of AGI by buying stocks? · 2020-03-12T23:04:04.371Z · score: 8 (5 votes) · LW · GW

This post's arguments seemed correct to me, so I am gonna sell some S&P500 stocks and buy some Google, Facebook, Tencent, etc. stocks instead. Thank you for writing this post.

Comment by crabman on The Zettelkasten Method · 2020-03-12T22:56:29.342Z · score: 3 (2 votes) · LW · GW

Disclaimer: I have tried Zettelkasten in Roam very recently; it hasn't impressed me, but I want to try it on paper.

Here's something I don't understand about Zettelkasten. Do you people actually open your index note and then go through all your notes related to your project from time to time? If yes, why? When I am working on a project (say, figuring out how to train a novel machine learning model I came up with), I usually remember most of the relevant information. Usually I write things on paper as an extension of my working memory, but right after having finished the thought, I can throw it away.

I do keep notes in emacs org-mode, but I almost never go and read them sequentially. I think it would be boring - I'd rather go read stuff on the internet. Actually I rarely read my notes at all. Usually I only do it when I want to remind myself something specific and I remember that I have something written about it.

Comment by crabman on Nate Soares' Replacing Guilt Series compiled in epub Format · 2020-03-04T20:08:49.442Z · score: 2 (2 votes) · LW · GW

The posts listed under "Related" on http://mindingourway.com/guilt/, including "Conclusion of the Replacing Guilt series", are missing.

Comment by crabman on The Zettelkasten Method · 2020-03-01T23:04:40.914Z · score: 10 (3 votes) · LW · GW

After half a year, do you still use Zettelkasten? Do you still think it has significantly boosted your productivity?

Comment by crabman on crabman's Shortform · 2020-02-26T17:37:44.648Z · score: 1 (1 votes) · LW · GW

A new piece of math notation I've invented which I plan to use whenever I am writing proofs for myself (rather than for other people).

Sometimes when writing a proof, for some long property P(x) I want to write:

It follows from foo that there exists x such that P(x). Let x be such that P(x). Then ...

I don't like that I need to write P(x) twice here. And the whole construction is too long for my liking, especially when the reason foo why such x exists is obvious. And if I omit the first sentence "It follows from foo that there exists x such that P(x)." and just write

Let x be such that P(x). Then ...

then it's not clear what I mean. It could be that I want to show that such x exists and that from its existence some statement of interest follows. Or it could be that I want to prove some statement of form

For each x such that P(x), it holds that Q(x).

Or it could even be that I want to show that something follows from existence of such x, but I am not asserting that such x exists.

The new notation I came up with is to write L∃t in cases when I want to assert that such x exists and to bind the variable x in the same place. An example (an excerpt, not a complete proof):

  • Suppose is a countably infinite set, suppose is a set of subsets of , suppose .
  • L∃t be a bijection from onto .
  • L∃t such that , if is in , otherwise .
  • By recursion theorem, L∃t be such that , .
  • L∃t .
  • ...

Comment by crabman on Rationalist prepper thread · 2020-01-29T15:39:21.247Z · score: 2 (2 votes) · LW · GW

That's why I multiplied it by 10.

Comment by crabman on Rationalist prepper thread · 2020-01-28T22:23:30.826Z · score: 10 (7 votes) · LW · GW

I've done a very rough Fermi estimate (I even pulled some numbers out of my ass), according to which I don't need to worry. The probability of me dying due to nCoV-2019 is:

  • The expected number of infected people I got by looking at the median according to Metaculus, which is 311k, and multiplying it by 10 (a number I pulled out of my ass), since I think the probability density function of the probability distribution of the number of infections is mostly convex. I divide this by the Earth's population.
  • 1% is the probability of a random person dying given that they were infected - I got this number from a comment on the EA forum that said 3% of identified nCoV-2019 cases die, and that probably many infected people haven't been identified but most deaths have, so I decreased it to 1% (another number out of my ass).
  • I multiply the probability by 4 for myself, since I live in a dense city and my girlfriend works in a large office with a lot of people (another number out of my ass).
  • Since mostly old people die, with another number pulled out of my ass I divide my probability of dying by 5.
  • An additional multiplier because my immune system seems weaker than average, since I often get the common cold.

In total I get approximately 6 micromorts, which is really low - on average I probably get about 25 micromorts per day according to Wikipedia. (EDIT: 25 micromorts per day is a huge overestimate for my age of 25 - actually I get about 1.34 all-cause micromorts per day.)
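
The arithmetic, spelled out (the Earth population and the immune-system multiplier are my own assumptions, chosen to reproduce the ~6 micromort total; everything else comes from the bullets above):

expected_infected = 311_000 * 10           # Metaculus median times the factor of 10 above
earth_population = 7.8e9                   # assumption: not stated in the comment
p_infected = expected_infected / earth_population

p_death_given_infection = 0.01             # 3% case fatality rate, discounted to 1%
dense_city_multiplier = 4
age_divisor = 5
immune_multiplier = 2                      # assumption: the comment gives no explicit value

p_death = p_infected * p_death_given_infection * dense_city_multiplier / age_divisor * immune_multiplier
print(f"{p_death * 1e6:.1f} micromorts")   # ~6 micromorts, matching the total above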

Comment by crabman on How to Throw Away Information in Causal DAGs · 2020-01-08T11:03:49.611Z · score: 4 (3 votes) · LW · GW

Instead of saying " contains all information in relevant to ", it would be better to say that, contains all information in that is relevant to if you don't condition on anything. Because it may be the case that if you condition on some additional random variable , no longer contains all relevant information.

Example:

Let be i.i.d. binary uniform random variables, i.e. each of the variables takes the value 0 with probability 0.5 and the value 1 with probability 0.5. Let be a random variable. Let be another random variable, where is the xor operation. Let be the function .

Then contains all information in that is relevant to . But if we know the value of , then no longer contains all information in that is relevant to .
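
Since the variable names above didn't survive formatting, here is one concrete instance of the phenomenon with my own choice of variables: X1, X2, Y are i.i.d. uniform bits, Z = X1 xor Y, and f(X1, X2) = X2. Unconditionally, X = (X1, X2) carries no information about Y, so f(X) trivially contains all of it; conditioned on Z, X1 pins down Y but f(X) does not.

from collections import defaultdict
from itertools import product
from math import log2

def mutual_information(pairs):
    # I(A; B) in bits, given a list of equally likely (a, b) outcomes.
    n = len(pairs)
    p_ab, p_a, p_b = defaultdict(float), defaultdict(float), defaultdict(float)
    for a, b in pairs:
        p_ab[(a, b)] += 1 / n
        p_a[a] += 1 / n
        p_b[b] += 1 / n
    return sum(p * log2(p / (p_a[a] * p_b[b])) for (a, b), p in p_ab.items())

outcomes = list(product([0, 1], repeat=3))  # uniform over (x1, x2, y)

# Unconditionally, X has zero information about Y, so f(X) = X2 has all of it.
print(mutual_information([((x1, x2), y) for x1, x2, y in outcomes]))  # 0.0
print(mutual_information([(x2, y) for x1, x2, y in outcomes]))        # 0.0

# Conditioned on Z = X1 xor Y = 0 (the Z = 1 case is symmetric):
given_z0 = [(x1, x2, y) for x1, x2, y in outcomes if x1 ^ y == 0]
print(mutual_information([((x1, x2), y) for x1, x2, y in given_z0]))  # 1.0: X now pins down Y
print(mutual_information([(x2, y) for x1, x2, y in given_z0]))        # 0.0: f(X) misses it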

Comment by crabman on ozziegooen's Shortform · 2019-12-23T10:30:47.864Z · score: 5 (4 votes) · LW · GW

It's definitely the first. The second is bizarre. The third can be steelmanned as "Given my evidence, an ideal thinker would estimate the probability to be 20%, and we all here have approximately the same evidence, so we all should have 20% probabilities", which is almost the same as the first.

Comment by crabman on Understanding Machine Learning (I) · 2019-12-23T10:22:47.045Z · score: 4 (3 votes) · LW · GW

Two nitpicks:

like if you want it to recognize spam emails, but you only show it aspects of the emails such that there is at best a statistically weak correlation between them and whether the email is spam or not-spam

Here "statistically weak correlation" should be "not a lot of mutual information", since correlation is only about linear dependence between random variables.

i.d.d.

Should be i.i.d.

Comment by crabman on [Personal Experiment] One Year without Junk Media · 2019-12-15T16:36:06.094Z · score: 1 (1 votes) · LW · GW

What about videogames?

Comment by crabman on crabman's Shortform · 2019-11-20T17:36:43.972Z · score: 6 (3 votes) · LW · GW

In my understanding, here are the main features of deep convolutional neural networks (DCNNs) that make them work really well. (Disclaimer: I am not a specialist in CNNs; I have done one master's-level deep learning course, and I have worked on accelerating DCNNs for 3 months.) For each feature, I give my probability that having this feature is an important component of DCNN success, compared to having it only to the extent that an average non-DCNN machine learning model does (e.g. a DCNN has weight sharing, an average model doesn't).

  1. DCNNs heavily use transformations that are the same for each window of the input - 95% (see the parameter-count sketch at the end of this comment)
  2. For any set of pixels of the input, large distances between the pixels in the set make the DCNN model interactions between these pixels less accurately - 90% (perhaps the use of dilated convolutions in some DCNNs is a counterargument to this)
  3. Large depth (together with the use of activation functions) lets us model complicated features, interactions, logic - 82%
  4. Having a lot of parameters lets us model complicated features, interactions, logic - 60%
  5. Given 3 and 4, SGD-like optimization works unexpectedly fast for some reason - 40%
  6. Given 3 and 4, SGD-like optimization with early stopping doesn't overfit too much for some reason - 87% (I am not sure if S in SGD is important, and how important is early stopping)
  7. Given 3 and 4, ReLU-like activation functions work really well (compared to, for example, sigmoid).
  8. Modern deep neural network libraries are easy to use compared to the baseline of not having specific well-developed libraries - 60%
  9. Deep neural networks work really fast, when using modern deep neural network libraries and modern hardware - 33%
  10. DCNNs find such features in photos, which are invisible to the human eye and to most ML algorithms - 20%
  11. Dropout helps reduce overfitting a lot - 25%
  12. Batch normalization improves the quality of the model a lot for some reason - 15%
  13. Batch normalization makes the optimization much faster - 32%
  14. Skip connections (or residual connections, I am not sure if there's a difference) help a lot - 20%

Let me clarify how I was assigning the probabilities and why I created this list. I am trying to come up with a tensor-network-based machine learning model which would have the main advantages of DCNNs, but which would not itself be a deep relu neural network. So I decided to make this list to see which important components my model has.
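
To illustrate points 1 and 2, here is a tiny parameter-count comparison (the layer sizes are an arbitrary choice of mine):

import torch.nn as nn

# One layer mapping a 3x32x32 image to 16 feature maps of the same spatial size.
conv = nn.Conv2d(3, 16, kernel_size=3, padding=1)   # weight sharing + local windows
dense = nn.Linear(3 * 32 * 32, 16 * 32 * 32)        # no sharing, no locality

count = lambda m: sum(p.numel() for p in m.parameters())
print(count(conv))   # 3*16*3*3 + 16 = 448
print(count(dense))  # 3072*16384 + 16384 = 50,348,032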

Comment by crabman on cousin_it's Shortform · 2019-10-27T12:26:26.044Z · score: 1 (1 votes) · LW · GW

Would it be correct to say that (2) and (3) can be replaced with just "apply any linear operator"?

Also, what advantages does working with amplitudes have compared to working with probabilities? Why don't we just use probability theory?

Comment by crabman on Raemon's Scratchpad · 2019-10-21T11:51:12.249Z · score: 1 (1 votes) · LW · GW

Talk to your roommates and make an agreement that each of you, in round-robin order, orders an apartment cleaning service every X weeks. This will alleviate part of the problem.

Comment by crabman on What's going on with "provability"? · 2019-10-13T15:39:51.573Z · score: 3 (2 votes) · LW · GW

So it is perfectly okay to have a statement that is obviously true, but still cannot be proved using some set of axioms and rules.

The underlying reason is that if you imagine a Platonic realm where all abstractions allegedly exist, the problem is that there are actually multiple abstractions ["models"] compatible with ZF, but different from each other in many important ways.

So, when you say Gödel's sentence is obviously true, in which "abstraction" is it true?

Comment by crabman on What funding sources exist for technical AI safety research? · 2019-10-01T16:43:46.069Z · score: 1 (1 votes) · LW · GW

Are you interested in AI safety jobs, i.e. to be hired by a company and work in their office?

Comment by crabman on The first step of rationality · 2019-09-30T21:45:31.005Z · score: 2 (2 votes) · LW · GW

The article's title is misleading. He didn't harass or rape anyone. He had sex with prostitutes and hid that from his wife.

Comment by crabman on The Zettelkasten Method · 2019-09-30T10:24:30.417Z · score: 3 (2 votes) · LW · GW

When you go outside, how do you choose decks to take with you?

Small cards seem awful for writing out sequences of transformations of large equations - do you sometimes do things like that, and if so, do you do it outside of the Zettelkasten?

When developing an idea I use paper as an expansion of my working memory, so it becomes full of things which become useless right after I finish. Do you throw away such "working memory dumps" and only save actually useful pieces of knowledge?