Posts

Opinions on Interpretable Machine Learning and 70 Summaries of Recent Papers 2021-04-09T19:19:42.826Z
Interpretability in ML: A Broad Overview 2020-08-04T19:03:20.828Z
Operationalizing Interpretability 2020-07-20T05:22:14.798Z
Insights Over Frameworks 2020-06-20T19:52:18.602Z
An Illustrated Proof of the No Free Lunch Theorem 2020-06-08T01:54:27.695Z
Overcorrecting As Deliberate Practice 2020-06-01T06:26:31.504Z
Obsidian: A Mind Mapping Markdown Editor 2020-05-27T16:26:31.824Z
Problems with p-values 2020-04-08T00:58:19.399Z
Perceptrons Explained 2020-02-14T17:34:38.999Z
Jane Street Interview Guide (Notes on Probability, Markets, etc.) 2019-09-17T05:28:23.058Z
Does anyone else feel LessWrong is slow? 2019-09-06T19:20:05.622Z
GPT-2: 6-Month Follow-Up 2019-08-21T05:06:52.461Z
Neural Nets in Python 1 2019-08-18T02:48:54.903Z
Calibrating With Cards 2019-08-08T06:44:44.853Z
Owen Another Thing 2019-08-08T02:04:56.511Z
Can I automatically cross-post to LW via RSS? 2019-07-08T05:04:55.829Z
MLU: New Blog! 2019-06-12T04:20:37.499Z
Why books don't work 2019-05-11T20:40:27.593Z
345M version GPT-2 released 2019-05-05T02:49:48.693Z
Moving to a World Beyond “p < 0.05” 2019-04-19T23:09:58.886Z
Pedagogy as Struggle 2019-02-16T02:12:03.665Z
Doing Despite Disliking: Self‐regulatory Strategies in Everyday Aversive Activities 2019-01-19T00:27:05.605Z
mindlevelup 3 Year Review 2019-01-09T06:36:01.090Z
Letting Others Be Vulnerable 2018-11-19T02:59:21.423Z
Owen's short-form blog 2018-09-15T20:13:37.047Z
Communication: A Simple Multi-Stage Model 2018-09-15T20:12:16.134Z
Fading Novelty 2018-07-25T21:36:06.303Z
Generating vs Recognizing 2018-07-14T05:10:22.112Z
Do Conversations Often Circle Back To The Same Topic? 2018-05-24T03:07:38.516Z
Meditations on the Medium 2018-04-29T02:21:35.595Z
Charting Deaths: Reality vs Reported 2018-03-30T00:50:00.314Z
Taking the Hammertime Final Exam 2018-03-22T17:22:17.964Z
A Developmental Framework for Rationality 2018-03-13T01:36:27.492Z
ESPR 2018 Applications Are Open! 2018-03-12T00:02:26.774Z
ESPR 2018 Applications Are Open 2018-03-11T20:07:45.460Z
Kegan and Cultivating Compassion 2018-03-11T01:32:31.217Z
Unconscious Competence and Counter-Incentives 2018-03-10T06:38:34.057Z
If rationality skills were Harry Potter spells... 2018-03-09T15:36:11.130Z
Replace Stereotypes With Experiences 2018-01-29T00:07:15.056Z
mindlevelup: 2 Years of Blogging 2018-01-06T06:10:52.022Z
Conceptual Similarity Does Not Imply Actionable Similarity 2017-12-30T05:06:04.556Z
Unofficial ESPR Post-mortem 2017-10-25T02:05:05.416Z
Instrumental Rationality: Postmortem 2017-10-21T06:23:31.707Z
Instrumental Rationality 7: Closing Disclaimer 2017-10-21T06:03:19.714Z
Instrumental Rationality 6: Attractor Theory 2017-10-18T03:54:28.211Z
Instrumental Rationality 5: Interlude II 2017-10-14T02:05:37.208Z
Instrumental Rationality 4.3: Breaking Habits and Conclusion 2017-10-12T23:11:18.127Z
Instrumental Rationality 4.2: Creating Habits 2017-10-12T02:25:06.007Z
The Recognizing vs Generating Distinction 2017-10-09T16:56:09.379Z
Instrumental Rationality 4.1: Modeling Habits 2017-10-09T01:21:41.396Z

Comments

Comment by lifelonglearner on Maximizing Yield on US Dollar Pegged Coins · 2021-06-10T05:14:51.210Z · LW · GW

ah yes, the proof of stake bridge is faster.

i guess it depends if you're running this strategy with size. e.g. for over $100,000, 10% returns means you'd earn back gas fees in ~3 days.
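rough math behind that claim, as a sketch (the ~$80 total gas figure is just an assumption for illustration, not a quote for the actual bridge + deposit transactions):

```python
# back-of-envelope breakeven on gas (assumed figures, purely illustrative)
principal = 100_000   # USD deployed in the strategy
apy = 0.10            # ~10% annual yield
gas_cost = 80         # assumed total gas spent bridging + depositing, in USD

daily_yield = principal * apy / 365
breakeven_days = gas_cost / daily_yield
print(f"~${daily_yield:.0f}/day of yield, gas paid back in ~{breakeven_days:.1f} days")
```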

Comment by lifelonglearner on Maximizing Yield on US Dollar Pegged Coins · 2021-06-08T06:06:36.376Z · LW · GW

fyi you can get around half these returns on aave on ethereum mainnet without having to mess with matic at all.

while i don't think the matic team is untrustworthy, it's worth pointing out their entire network is currently secured by an upgradeable multisig wallet.

there is also a ~1 week period to move back from matic to ethereum mainnet which can be irksome if you e.g. want to sell quickly back to fiat via some centralized exchange.

Comment by lifelonglearner on The Alignment Forum should have more transparent membership standards · 2021-06-04T22:21:27.275Z · LW · GW

Just chiming in here to say that I completely forgot about Intercom during this entire series of events, and I wish I had remembered/used it earlier.

(I disabled the button a long time ago, and it has been literal years since I used it last.)

Comment by lifelonglearner on Opinions on Interpretable Machine Learning and 70 Summaries of Recent Papers · 2021-04-19T23:45:41.808Z · LW · GW

Hi Rohin! Thanks for this summary of our post. I think one other sub-field that has seen a lot of progress is creating somewhat competitive models that are inherently more interpretable (i.e. a lot of the augmented/approximate decision tree models), as well as some of the decision set stuff. Otherwise, I think it's a fair assessment. I'll also link this comment to Peter so he can chime in with any suggested clarifications of our opinions, if any. Cheers, Owen

Comment by lifelonglearner on Defining "optimizer" · 2021-04-18T19:40:02.073Z · LW · GW

Ah, I didn't mean to ask about the designing part, but more so about how you use the word optimize in your definition when it comes to 'optimizing from scratch', which might get a little recursive.

Comment by lifelonglearner on Defining "optimizer" · 2021-04-17T16:33:54.086Z · LW · GW

Your definition of optimizer uses "optimizing that function from scratch" which might need some more unpacking.

You may be interested in this prior discussion on optimization which shares some things with your definition but takes a more control theory / systems perspective.

Comment by lifelonglearner on Opinions on Interpretable Machine Learning and 70 Summaries of Recent Papers · 2021-04-12T16:31:16.103Z · LW · GW

I have not read the book, perhaps Peter has.

A quick look at the table of contents suggests that it's focused more on model-agnostic methods. I think you'd get a different overview of the field compared to the papers we've summarized here, as an fyi.

I think one large area you'd miss out on from reading the book is the recent work on making neural nets more interpretable, or designing more interpretable neural net architectures (e.g. NBDT).

Comment by lifelonglearner on Opinions on Interpretable Machine Learning and 70 Summaries of Recent Papers · 2021-04-11T15:49:36.722Z · LW · GW

Thanks! Didn't realize we had a double entry, will go and edit.

Comment by lifelonglearner on Reasonable ways for an average LW retail investor to get upside risk? · 2021-02-17T03:08:10.233Z · LW · GW

For crypto:

  1. buy btc
  2. buy eth
  3. buy defipulse index

For even higher variance crypto:

  1. buy defi small cap
  2. get eth, turn it into st-eth/eth LP on curve.fi, and then stake into the harvest.finance st-eth pool for ~30% APY on your eth

Comment by lifelonglearner on Creating A Kickstarter for Coordinated Action · 2021-02-03T05:35:54.888Z · LW · GW

In case you haven't seen, similar projects exist:

collAction

See also previous discussion here

Comment by lifelonglearner on Instrumental Rationality: Postmortem · 2021-01-17T05:00:25.153Z · LW · GW

Thanks for the feedback Emiya! I hope it ends up being useful for helping you get what you want to get done, done.

I never got the chance to update here, but I cleaned up some of the essays in the years since writing this series.

They can now be found here. Of note is that I massively edited Habits 101, and I think it now reads a lot tighter than before.

Comment by lifelonglearner on [link] The AI Girlfriend Seducing China’s Lonely Men · 2020-12-15T23:05:21.927Z · LW · GW

The extent to which this app is used and to which people bond over the assistant.

Comment by lifelonglearner on [link] The AI Girlfriend Seducing China’s Lonely Men · 2020-12-15T07:23:29.532Z · LW · GW

my friend from china says this is likely sensationalized. agree w/ gwillen about being skeptical.

Comment by lifelonglearner on What is your electronic drawing set up? · 2020-10-15T15:51:16.763Z · LW · GW

Seconding the Boox Note as being a very good device I'm overall pleased with. (I have the large 13 inch Boox Note Max which makes reading papers very bearable, and it can do file drop via local wifi.)

Comment by lifelonglearner on Memorizing a Deck of Cards · 2020-09-16T08:21:20.291Z · LW · GW

The way I did this for a specific ordering of cards (used for a set of magic tricks called Mnemonica) was to have some sort of 1 to 1 mapping between each card and its position in the deck.

Some assorted examples:

  • 5 : 4 of Hearts, because 4 is 5 minus 1 (and the Hearts are just there).
  • 7 : Ace of Spades, because 7 is a lucky number and the Ace of Spades is a lucky card.
  • 8 : 5 of Hearts, because 5 looks a little like 8.
  • 49 : 5 of Clubs, because 4.9 is almost 5 (and the Clubs are just there).
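If it helps to see the shape of the lookup, here's a minimal Python sketch using only the positions above (purely illustrative; the full stack has 52 entries):

```python
# Partial position -> card lookup for the Mnemonica stack, using only the
# example mappings above (a real lookup would cover all 52 positions).
mnemonica_partial = {
    5: "4 of Hearts",    # 4 is 5 minus 1
    7: "Ace of Spades",  # lucky number -> lucky card
    8: "5 of Hearts",    # 5 looks a little like 8
    49: "5 of Clubs",    # 4.9 is almost 5
}

def card_at(position: int) -> str:
    return mnemonica_partial.get(position, "not memorized yet")

print(card_at(7))  # Ace of Spades
```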

Comment by lifelonglearner on Using GPT-N to Solve Interpretability of Neural Networks: A Research Agenda · 2020-09-04T18:23:46.168Z · LW · GW

This is a good point, and this is where I think a good amount of the difficulty lies, especially as the cited example of human interpretable NNs (i.e. Microscope AI) doesn't seem easily applicable to things outside of image recognition.

Comment by lifelonglearner on Using GPT-N to Solve Interpretability of Neural Networks: A Research Agenda · 2020-09-04T01:24:49.242Z · LW · GW

Interesting stuff!

My understanding is that the OpenAI Microscope (is this what you meant by microscope AI?) is mostly feature visualization techniques + human curation by looking at the visualized samples. Do you have thoughts on how to modify this for the text domain?

Comment by lifelonglearner on Using GPT-N to Solve Interpretability of Neural Networks: A Research Agenda · 2020-09-04T01:21:28.879Z · LW · GW

Interesting stuff!

I would guess that one of the main difficulties is figuring out how to actually get a Modular NN. Do you have thoughts on how to enforce this type of structure through regularization during training, or through some other type of model selection?
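To make the question a bit more concrete, one naive shape such a regularizer could take is an L1 penalty on weights that cross pre-assigned module boundaries. This is purely a hypothetical sketch on my end (the `cross_module_l1` helper and the fixed module assignments are my own invention, not something from the post):

```python
import torch

# Hypothetical sketch: penalize weights connecting units assigned to different
# "modules", nudging training toward a block-structured (modular) weight matrix.
def cross_module_l1(weight: torch.Tensor,
                    out_module_ids: torch.Tensor,
                    in_module_ids: torch.Tensor) -> torch.Tensor:
    # True wherever an edge crosses a module boundary
    mask = out_module_ids.unsqueeze(1) != in_module_ids.unsqueeze(0)
    return weight[mask].abs().sum()

# In a training loop, something like:
#   loss = task_loss + lam * cross_module_l1(layer.weight, out_ids, in_ids)
```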

Comment by lifelonglearner on Charting Is Mostly Superstition · 2020-08-24T00:13:35.862Z · LW · GW

Same here. I am working for a small quant trading firm, and the collective company wisdom is to prefer CDFs over PDFs.
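As a minimal illustration of the preference (synthetic data, nothing from our internal tooling), an empirical CDF is easy to compute and lets you read off tail probabilities directly:

```python
import numpy as np

# Toy empirical CDF of daily returns (synthetic, purely illustrative).
returns = np.random.default_rng(0).normal(loc=0.0, scale=0.01, size=1000)

xs = np.sort(returns)
cdf = np.arange(1, len(xs) + 1) / len(xs)

# Approximate fraction of days with a return below -1%:
print(cdf[np.searchsorted(xs, -0.01)])
```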

Comment by lifelonglearner on Interpretability in ML: A Broad Overview · 2020-08-23T20:43:16.663Z · LW · GW

Regarding how interpretability can help with addressing motivation issues, I think Chris Olah's views present situations where interpretability can potentially sidestep some of those issues. One such example is that if we use interpretability to aid in model design, we might have confidence that our system isn't a mesa-optimizer, and we've done this without explicitly asking questions about "what our model desires".

I agree that this is far from the whole picture. The scenario you describe is an example where we'd want to make interpretability more accessible to more end-users. There is definitely more work to be done to bridge "normal" human explanations with what we can get from our analysis.

I've spent more of my time thinking about the technical sub-areas, so I'm focused on situations where innovations there can be useful. I don't mean to say that this is the only place where I think progress is useful.

Comment by lifelonglearner on Interpretability in ML: A Broad Overview · 2020-08-23T16:16:13.869Z · LW · GW

I think that the general form of the problem is context-dependent, as you describe. Useful explanations do seem to depend on the model, task, and risks involved.

However, from an AI safety perspective, we're probably only considering a restricted set of interpretability approaches, which might make it easier. In the safety context, we can probably be less concerned with interpretability that is useful for laypeople, and focus on interpretability that is useful for the people doing the technical work.

To that end, I think that "just" being careful about what the interpretability analysis means can help, like how good statisticians can avoid misuse of statistical testing, even though many practitioners get it wrong.

I think it's still an open question, though, what even this sort of "only useful for people who know what they're doing" interpretability analysis would be. Existing approaches still have many issues.

Comment by lifelonglearner on Search versus design · 2020-08-16T19:43:43.840Z · LW · GW

I mostly focused on the interpretability section as that's what I'm most familiar with, and I think your criticisms are very valid. I also wrote up some thoughts recently on where post-hoc interpretability fails, and Daniel Filan has some good responses in the comments below.

Also, re: disappointment on tree regularization, something that does seem more promising is Daniel Filan and others at CHAI working on investigating modularity in neural nets. You can probably ask him more, but last time we chatted, he also had some thoughts (unpublished) on how to enforce modularization as a regularizer, which seems to be what you wished the tree reg paper would have done.

Overall, this is great stuff, and I'll need to spend more time thinking about the design vs search distinction (which makes sense to me at first glance).

Comment by lifelonglearner on The Best Educational Institution in the World · 2020-08-15T14:50:38.529Z · LW · GW

Got it.

I think unbundling them seems like a good thing to strive for.

I guess the parts that I might still be worried about are:

  • I see below that you claim that more accountability is probably net-good for most students, in the sense that it would help improve learning? I'm not sure that I fully agree with that. My experience from primary through upper education has been that there are a great many students who don't seem that motivated to learn, due to differing priorities, home situations, or preferences. I think improving education will need to find some way of addressing this beyond just accountability.

  • Do you envision students enrolling in this Improved Education program for free? Public schools right now have a distinct advantage because they receive a lot of funding from taxpayers.

  • I think the issue of, "Why can't we just immediately switch everyone to a decoupled situation where credentialing and education are separate?" is due to us being stuck in an inadequate equilibrium. Do you have plans to specifically tackle these inertia-related issues that can make mass adoption difficult? (e.g. until cheap credentialing services become widespread, why would signaling-conscious students decide to enroll in Improved Education instead of Normal Education?)

Comment by lifelonglearner on The Best Educational Institution in the World · 2020-08-15T00:04:55.104Z · LW · GW

I think figuring out how to make education better is definitely a worthwhile goal, and I'm reading this post (and your other one) with interest.

I'm curious to what extent you're going to be addressing the issue of education as-partially-or-mostly signaling, like what Caplan argues for in The Case Against Education? I can imagine a line of argument that says paying for public education is worthwhile, even if all it does is accreditation because it's useful to employers. What those actual costs look like and what they should be is, of course, up for debate.

I could also see the point that all this signaling stuff is orthogonal if all we "really" care about is optimizing for learning. Just wondering what stance you're taking.

Comment by lifelonglearner on Sentence by Sentence: "Why Most Published Research Findings are False" · 2020-08-14T02:41:54.582Z · LW · GW

I think the OSC's reproducibility project is much more of what you're looking for, if you're worried that Many Labs is selecting only for a specific type of effect.

They focus on selecting studies quasi-randomly and use a variety of reproducibility measures (confidence interval, p-value, effect size magnitude + direction, subjective assessment). They find that around 30-50% of effects replicate, depending on the criteria used. They looked at 100 studies, in total.

I don't know enough about the biomedical field, but a brief search on the web yields the following links, which might be useful?

Comment by lifelonglearner on Investment idea: basket of tech stocks weighted towards AI · 2020-08-13T04:07:49.180Z · LW · GW

A quote from the thread which suggests weighting Google and FB more heavily than Amazon, or at least giving them more consideration than above.

I don't understand why one would invest more in Amazon over Alphabet. Alphabet owns 1. the strongest industry research division around (especially DeepMind), 2. a very strong vertical with Google Cloud -> Tensorflow/Jax -> TPUs. Amazon only has an arguably more established cloud (I'm not sure if this is even true for machine learning purposes), but has a much weaker research division and doesn't own any of the underlying stack. I mean, for example, GPT2 was trained primarily on TPUs. So Google owns better shovels and also better diggers.

Facebook owns the second best industry research division, as well as PyTorch (which is the most popular framework in ML research right now). Unfortunately for FB stock, it doesn't have a particularly clear path towards monetizing it. However, many of the other companies mentioned (Microsoft and OpenAI for example) are heavily invested in it.

Comment by lifelonglearner on Investment idea: basket of tech stocks weighted towards AI · 2020-08-13T04:05:18.092Z · LW · GW

Relevant thread from r/slatestarcodex which has some additional discussion.

Comment by lifelonglearner on Will OpenAI's work unintentionally increase existential risks related to AI? · 2020-08-12T01:21:25.873Z · LW · GW

This makes sense to me, given the situation you describe.

Comment by lifelonglearner on Will OpenAI's work unintentionally increase existential risks related to AI? · 2020-08-11T19:20:25.907Z · LW · GW

Some OpenAI people are on LW. It'd be interesting to hear their thoughts as well.

Two general things which have made me less optimistic about OpenAI are that:

  1. They recently spun out a capped-profit company, and the end goal seems to be monetizing some of their recent advancements. The page linked in the previous sentence also has some stuff about safety and about how none of their day-to-day work is changing, but it doesn't seem that encouraging.

  2. They've recently partnered up with Microsoft, presumably for product integration. This seems like it positions them as less of a neutral entity, especially as Alphabet owns DeepMind.

Comment by lifelonglearner on Forecasting AI Progress: A Research Agenda · 2020-08-11T01:34:40.283Z · LW · GW

I hadn't heard of the Delphi method before, so this paper brought it to my attention.

It's nice to see concrete forecasting questions laid out in a principled way. Now the perhaps harder step is trying to get traction on them ;^).

Note: The tables on pages 9 and 10 are a little blurry and hard to read. They are also not text, so it's not easy to copy-paste them into another format for better viewing. I think it'd be good to update the images to either be clearer or to translate them into text tables.

Comment by lifelonglearner on Canonical forms · 2020-07-13T05:42:44.020Z · LW · GW

I found your categorization of three ways to improve explanations to be useful, and they seem like they cover most of the issues.

However, I feel like the brunt of the article itself was too short to give me a good sense of what canonical forms are like in math, or how to apply them conversationally. In particular, I think having more examples (or making the examples clearer) for each item on your list would have been helpful.

Also, I personally would have also enjoyed a more technical explanation of how to think about canonical forms mathematically. (Which I would guess would help me understand the connection to conversations.)

Comment by lifelonglearner on Your Prioritization is Underspecified · 2020-07-12T03:11:07.294Z · LW · GW

I pattern-matched many of your between-task ambiguities to the different types of scheduling algorithms that can occur in operating systems.

Comment by lifelonglearner on UML final · 2020-07-09T22:08:27.596Z · LW · GW

I've been working through the textbook as well. I know I've commented a few times before, but, once again, thanks for writing up your thoughts for each chapter. They've been useful for summarizing / checking my own understanding.

Comment by lifelonglearner on Replicated cognitive bias list? · 2020-07-07T05:29:03.446Z · LW · GW

For some recent meta-analyses, the OSC's paper on reproducibility in social science has ~100 studies, and I think you can explore those and others at osf.io.

In general, I know that anchoring effects are quite reliable with a large effect size, and many priming effects have in recent years failed to replicate.

Comment by lifelonglearner on How do you visualize the Poisson PDF? · 2020-07-05T16:53:36.191Z · LW · GW

Wait, sorry, I misunderstood what you needed. Please disregard.

Comment by lifelonglearner on How do you visualize the Poisson PDF? · 2020-07-05T16:52:10.844Z · LW · GW

Desmos has a handy interactive calculator where you can adjust the parameters to get a better feel for what's going on. I think that can potentially help.
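If you'd rather poke at the numbers directly, a few lines of Python can tabulate the PMF for different rates (a rough sketch of the same idea, nothing specific to Desmos):

```python
import math

def poisson_pmf(k: int, lam: float) -> float:
    """P(X = k) for a Poisson random variable with rate lam."""
    return math.exp(-lam) * lam**k / math.factorial(k)

# Print the first few probabilities for a few rates to see how the shape shifts.
for lam in (1, 4, 10):
    probs = [round(poisson_pmf(k, lam), 3) for k in range(8)]
    print(f"lam={lam}: {probs}")
```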

Comment by lifelonglearner on The Book of HPMOR Fanfics · 2020-07-04T14:47:49.654Z · LW · GW

Significant Digits.

Comment by lifelonglearner on Site Redesign Feedback Requested · 2020-07-04T03:04:22.258Z · LW · GW

The new design appears to have higher contrast between the foreground and background, which I'm a fan of. It's an improvement, I think.

(Also hoping for reduced page weight and performance tweaks, but I get that they're already in progress :P)

Comment by lifelonglearner on The Book of HPMOR Fanfics · 2020-07-04T00:36:27.092Z · LW · GW

Specific stories from this list that I've enjoyed:

  • Following the Phoenix: probably my favorite continuation fic that ups the ante in an interesting way with a satisfying ending
  • Significant Digits: the famous one that got EY's recommendation for worldbuilding. Very cool exploration of a potential future of HPMOR, but the characters' personalities deviate from canon, perhaps too much.
  • Orders of Magnitude: an extension (side-quel?) to SD that also goes deep on the worldbuilding.
  • Reductionism for the Win: satisfying alternative ending arc.
  • Minds, Names, and Faces: also a fairly good alternative ending arc.

Revial also looks promising but I haven't read it fully.

Comment by lifelonglearner on DontDoxScottAlexander.com - A Petition · 2020-06-25T16:02:04.525Z · LW · GW

Oh, right, that's a fair point.

Comment by lifelonglearner on SlateStarCodex deleted because NYT wants to dox Scott · 2020-06-25T06:25:37.716Z · LW · GW

Did a cursory look through Twitter and found several critical accounts spreading it, so as gilch said, it's already happening to an extent :/

Comment by lifelonglearner on DontDoxScottAlexander.com - A Petition · 2020-06-25T06:13:20.405Z · LW · GW

Is anyone worried about Streisand effect type scenarios with this?

I get that the alternative is Scott being likely doxxed by the article being published, so this support against the NYT seems like a much better outcome.

At the same time, it seems like this might also lead to some malicious people being more motivated (now that they've heard of Scott through these channels) to figure out who he is and then share that with people whom Scott would prefer not know?

Comment by lifelonglearner on Preview On Hover · 2020-06-24T23:57:40.348Z · LW · GW

Yes, having them in the margin is much much better. :)

Comment by lifelonglearner on Preview On Hover · 2020-06-24T22:33:27.310Z · LW · GW

Can other people comment about the UX of preview on hover?

I dislike it because the pop-ups are often quite large, like on gwern.net, where they can completely block whatever it is I'm reading. Arbital-style tool-tips and the Wikipedia ones are borderline okay as they aren't too large, but I find that the visual contrast is often too jarring for me :/

Comment by lifelonglearner on The Bright Side of rationalization · 2020-06-24T01:38:03.248Z · LW · GW

I think that, while it's true that some people might do this, this seems like an especially steep price to pay if it's the only benefit afforded to us by rationalization. (I realize you're not necessarily claiming that here, just pointing out that rationalization seems to have some possible social benefits for a certain group of people.)

If we are crunching the numbers, though, it seems like the flip side is much much more common, i.e. people doing things to benefit themselves under ostensibly altruistic motivations.

Comment by lifelonglearner on Half-Baked Products and Idea Kernels · 2020-06-24T01:33:46.849Z · LW · GW

Also, I want to point out that, perhaps against better design judgment, in actual industry most of modern software engineering has embraced the "agile" methodology, where the product is iterated on in small sprints. This means that the design team checks in with the users' needs, changes are made, tests are added, and the cycle begins again. (Simplifying things here, of course.)

It was more common in the past to spend much more time understanding the clients' needs, in what is termed the "waterfall" methodology, where code is shipped perhaps only a few times a year, rather than bi-weekly (or whatever your agile sprint duration is).

Comment by lifelonglearner on AI Benefits Post 1: Introducing “AI Benefits” · 2020-06-23T00:26:42.603Z · LW · GW

Just a note that your windfall clause link to your website is broken. https://cullenokeefe.com/windfall-clause takes me to a "We couldn't find the page you're looking for" error.

Comment by lifelonglearner on Insights Over Frameworks · 2020-06-22T22:34:19.189Z · LW · GW

That seems reasonable, yeah.

Comment by lifelonglearner on Training our humans on the wrong dataset · 2020-06-21T19:27:29.211Z · LW · GW

Goodhart's Law also seems relevant to invoke here, if we're talking about goal vs incentive mismatch.

Comment by lifelonglearner on Insights Over Frameworks · 2020-06-21T16:10:25.861Z · LW · GW

You're right that I'm making assumptions about insights which may not always be applicable. And I don't mean to claim that theory isn't useful. This post is also partially a way for me to push back against some of the default theorizing that happens.

I think that sometimes the right thing to do is to focus on just "reporting the data", so to speak, if we use an analogy from research papers. There are experimental papers which might do some speculation, but their focus is on the results. Then there are also papers which try to do more theorizing and synthesis.

I guess I'm trying to discourage what I see as experimental papers focusing too much on the theorizing aspect.