Posts

Jane Street Interview Guide (Notes on Probability, Markets, etc.) 2019-09-17T05:28:23.058Z · score: 18 (8 votes)
Does anyone else feel LessWrong is slow? 2019-09-06T19:20:05.622Z · score: 11 (3 votes)
GPT-2: 6-Month Follow-Up 2019-08-21T05:06:52.461Z · score: 31 (7 votes)
Neural Nets in Python 1 2019-08-18T02:48:54.903Z · score: 11 (6 votes)
Calibrating With Cards 2019-08-08T06:44:44.853Z · score: 22 (11 votes)
Owen Another Thing 2019-08-08T02:04:56.511Z · score: 15 (5 votes)
Can I automatically cross-post to LW via RSS? 2019-07-08T05:04:55.829Z · score: 10 (3 votes)
MLU: New Blog! 2019-06-12T04:20:37.499Z · score: 18 (5 votes)
Why books don't work 2019-05-11T20:40:27.593Z · score: 16 (11 votes)
345M version GPT-2 released 2019-05-05T02:49:48.693Z · score: 30 (11 votes)
Moving to a World Beyond “p < 0.05” 2019-04-19T23:09:58.886Z · score: 25 (10 votes)
Pedagogy as Struggle 2019-02-16T02:12:03.665Z · score: 14 (6 votes)
Doing Despite Disliking: Self‐regulatory Strategies in Everyday Aversive Activities 2019-01-19T00:27:05.605Z · score: 14 (3 votes)
mindlevelup 3 Year Review 2019-01-09T06:36:01.090Z · score: 19 (5 votes)
Letting Others Be Vulnerable 2018-11-19T02:59:21.423Z · score: 34 (17 votes)
Owen's short-form blog 2018-09-15T20:13:37.047Z · score: 13 (6 votes)
Communication: A Simple Multi-Stage Model 2018-09-15T20:12:16.134Z · score: 13 (4 votes)
Fading Novelty 2018-07-25T21:36:06.303Z · score: 24 (14 votes)
Generating vs Recognizing 2018-07-14T05:10:22.112Z · score: 16 (6 votes)
Do Conversations Often Circle Back To The Same Topic? 2018-05-24T03:07:38.516Z · score: 9 (2 votes)
Meditations on the Medium 2018-04-29T02:21:35.595Z · score: 46 (12 votes)
Charting Deaths: Reality vs Reported 2018-03-30T00:50:00.314Z · score: 38 (11 votes)
Taking the Hammertime Final Exam 2018-03-22T17:22:17.964Z · score: 42 (12 votes)
A Developmental Framework for Rationality 2018-03-13T01:36:27.492Z · score: 61 (19 votes)
ESPR 2018 Applications Are Open! 2018-03-12T00:02:26.774Z · score: 4 (1 votes)
ESPR 2018 Applications Are Open 2018-03-11T20:07:45.460Z · score: 24 (5 votes)
Kegan and Cultivating Compassion 2018-03-11T01:32:31.217Z · score: 49 (12 votes)
Unconscious Competence and Counter-Incentives 2018-03-10T06:38:34.057Z · score: 37 (9 votes)
If rationality skills were Harry Potter spells... 2018-03-09T15:36:11.130Z · score: 67 (18 votes)
Replace Stereotypes With Experiences 2018-01-29T00:07:15.056Z · score: 16 (5 votes)
mindlevelup: 2 Years of Blogging 2018-01-06T06:10:52.022Z · score: 4 (1 votes)
Conceptual Similarity Does Not Imply Actionable Similarity 2017-12-30T05:06:04.556Z · score: 19 (9 votes)
Unofficial ESPR Post-mortem 2017-10-25T02:05:05.416Z · score: 58 (20 votes)
Instrumental Rationality: Postmortem 2017-10-21T06:23:31.707Z · score: 38 (11 votes)
Instrumental Rationality 7: Closing Disclaimer 2017-10-21T06:03:19.714Z · score: 13 (4 votes)
Instrumental Rationality 6: Attractor Theory 2017-10-18T03:54:28.211Z · score: 22 (9 votes)
Instrumental Rationality 5: Interlude II 2017-10-14T02:05:37.208Z · score: 12 (2 votes)
Instrumental Rationality 4.3: Breaking Habits and Conclusion 2017-10-12T23:11:18.127Z · score: 5 (4 votes)
Instrumental Rationality 4.2: Creating Habits 2017-10-12T02:25:06.007Z · score: 19 (8 votes)
The Recognizing vs Generating Distinction 2017-10-09T16:56:09.379Z · score: 18 (4 votes)
Instrumental Rationality 4.1: Modeling Habits 2017-10-09T01:21:41.396Z · score: 18 (8 votes)
Instrumental Rationality 3: Interlude I 2017-10-07T05:22:09.663Z · score: 18 (8 votes)
Instrumental Rationality 2: Planning 101 2017-10-06T14:23:06.190Z · score: 25 (11 votes)
Instrumental Rationality 1: Starting Advice 2017-10-05T04:37:21.557Z · score: 21 (14 votes)
The Best Self-Help Should Be Self-Defeating 2017-09-26T06:16:32.059Z · score: 14 (6 votes)
Instrumental Rationality Sequence Finished! (w/ caveats) 2017-09-09T01:49:53.109Z · score: 5 (5 votes)
Habits 101: Techniques and Research 2017-08-22T10:54:45.552Z · score: 5 (5 votes)
Bridging the Intention-Action Gap (aka Akrasia) 2017-08-01T22:31:31.577Z · score: 1 (1 votes)
Daniel Dewey on MIRI's Highly Reliable Agent Design Work 2017-07-09T04:35:44.356Z · score: 10 (10 votes)
The Use and Abuse of Witchdoctors for Life 2017-06-24T20:59:06.983Z · score: 4 (4 votes)

Comments

Comment by lifelonglearner on Raemon's Scratchpad · 2019-09-14T06:54:34.277Z · score: 2 (1 votes) · LW · GW

There's jQuery UI, which maybe counts?

Comment by lifelonglearner on Owen Another Thing · 2019-09-08T04:47:31.298Z · score: 13 (3 votes) · LW · GW

Ben Pace has a new post up on LessWrong that's asking about good exercises for rationality / general LW-adjacent stuff. I think this is a good thing to put up a bounty for, and I started thinking about what makes a good exercise. Exercises are good because they help you further develop the material; they give you opportunities to put the relevant skill to use.

There are differing levels of what you can be trying to assess:

  • Identifying the correct idea from a group of different ones
  • Summarizing the correct idea
  • Transferring the idea to someone else
  • Actually demonstrating whatever skill it is (if it's something you can do)
  • Actually using the skill to deduce something else (if it's a model thing)

I think there's a good set of stuff to dive into here about the distinction between optimizing for pedagogy versus effectiveness. In the starkest case, you want to teach people using less potent versions of something, at least at first. Think not just training wheels on a bike, but successively more advanced models for physics or arithmetic. There's a gradual shift happening.

More than that, I wonder if the two angles are largely orthogonal.

Anyway, back to the original idea at hand. When you give people exercises, there's a sense of broad vs narrow that seems important, but I'm still teasing it out. In one sense, you can think of tests that do multiple choice vs open-ended answers. But it's not like multiple-choice questions have to suck. You could give people very plausible-sounding answers which require them to do a lot of work to determine which one is correct. Similarly, open-ended questions could allow for bullshitting.

It's not exactly about the format, but about what sort of work it induces.

At the very least, it's about pushing for more Generative content. But beyond that, it gets into pedagogy questions:

  1. How can you give questions which increase in difficulty?
    1. What does difficulty correspond to? If something is "hard to figure out", what is that quality referring to?
  2. If you give open-ended questions, how can you assess the answers you get?
  3. How much of this is covered already by the teaching literature?

Comment by lifelonglearner on Rationality Exercises Prize of September 2019 ($1,000) · 2019-09-08T01:56:10.932Z · score: 4 (2 votes) · LW · GW

I recently wrote about three things you can try with cards to see what your internal calibration feels like. They have some question prompts, but the gist of it is something to do, rather than something with a direct answer.

https://www.lesswrong.com/posts/Ktw9L67Nwq8o2kecP/calibrating-with-cards

Comment by lifelonglearner on Does anyone else feel LessWrong is slow? · 2019-09-06T20:42:17.398Z · score: 4 (2 votes) · LW · GW

I see! Thanks for the breakdown of where the pain points are when it comes to performance. Really appreciate the openness about where things could have gone better / what's happening right now!

Comment by lifelonglearner on Neural Nets in Python 1 · 2019-08-18T06:11:34.822Z · score: 4 (2 votes) · LW · GW

Oh, wow! I didn't realize that could have been tripping things up. Thank you for the formatting help!

Comment by lifelonglearner on Neural Nets in Python 1 · 2019-08-18T03:04:11.305Z · score: 2 (1 votes) · LW · GW

META: The code block editor wasn't very friendly and ate up all of my tabs. I'm working on better formatting, and this'll probably end up being a post on my own blog later on, which will hopefully also have things like syntax highlighting.

Comment by lifelonglearner on A Primer on Matrix Calculus, Part 3: The Chain Rule · 2019-08-17T15:48:01.760Z · score: 2 (1 votes) · LW · GW

For sure! To be honest, I got a little lost reading your 3-part series here, so I think I'll revisit it later on.

I'm newer to deep learning, so I think my goals are similar to yours (e.g. writing it up so I have a better understanding of what's going on), but I'm still hashing out the more introductory stuff.

I'll definitely link it here after I finish!

Comment by lifelonglearner on A Primer on Matrix Calculus, Part 3: The Chain Rule · 2019-08-17T06:15:37.562Z · score: 5 (3 votes) · LW · GW

Thanks for writing this series!

I'm working on my own post on NNs that focuses more on deriving backprop from computational graphs. I think that approach also builds up a lot of the Chain Rule intuition, as you can easily see how the gradients for earlier weights are built from those already computed for later layers.
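
As a rough sketch of what I mean (a toy two-layer network in numpy; the names, shapes, and activation here are just illustrative, not the post itself):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=3)          # input
y = rng.normal(size=2)          # target
W1 = rng.normal(size=(4, 3))    # earlier layer's weights
W2 = rng.normal(size=(2, 4))    # later layer's weights

# Forward pass through the computational graph.
h_pre = W1 @ x                  # (4,)
h = np.maximum(h_pre, 0.0)      # ReLU
y_hat = W2 @ h                  # (2,)
loss = 0.5 * np.sum((y_hat - y) ** 2)

# Backward pass: each node multiplies its local gradient by the upstream one.
d_y_hat = y_hat - y             # dL/dy_hat
d_W2 = np.outer(d_y_hat, h)     # dL/dW2
d_h = W2.T @ d_y_hat            # gradient flowing back into h
d_h_pre = d_h * (h_pre > 0)     # through the ReLU
d_W1 = np.outer(d_h_pre, x)     # dL/dW1 reuses d_h, computed for the later layer
```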

Comment by lifelonglearner on Hazard's Shortform Feed · 2019-08-13T20:38:19.270Z · score: 4 (2 votes) · LW · GW

I really like that you're doing this! I've tried to get into the series, but I haven't done so in a while. Thanks for the summaries!


(Also, maybe it'd be good for future comments about what you're doing to be children of this post, so it doesn't break the flow of summaries.)

Comment by lifelonglearner on Owen Another Thing · 2019-08-08T20:55:07.218Z · score: 4 (2 votes) · LW · GW

Your advice about demonstrating that you are capable alone is really interesting. Thanks for the extended examples!

Comment by lifelonglearner on Owen Another Thing · 2019-08-08T05:24:36.064Z · score: 5 (3 votes) · LW · GW

Experience As Compounding:

Sometimes I ask myself: "A bunch of cool stuff seems to be happening in the present. So why can't I move faster and let these things in? Why do I feel stuck by past things?"

Well, experience compounds. One reason childhood events can be so influential isn't just that they happened when you were at a formative time and developing your models. In addition, the fact that you pick them up early means they've had the privilege of being part of your thought processes for longer. They're more well-worn tools.

Then, there's also the default answer that each additional year of your life is, relative to the number of years you've lived, a smaller fraction. EX: From year 6 to 7, you've gained an extra ~15% of your total lifespan in new experiences. Whereas from 26 to 27, you've gained closer to 4% of your total lifespan in new experiences.
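
(Back of the envelope, that's just 1/n of the years lived so far:)

```python
# Each new year as a fraction of the years lived so far.
for age in (7, 27):
    print(age, f"{1 / age:.1%}")  # 7 -> 14.3%, 27 -> 3.7%
```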

But, I'd like every year to be measured more equally with one another. I feel like cool stuff is passing by me right now, and I'm just slow on the uptake. I'm not taking it in!

Yes, you can get set in your older ways of thinking, and you will have seen more with each successive year. But experientially speaking I'd like to get my brain to also pay more attention to the recent stuff.

I guess one hacky way to do this would be to spend more time ruminating on the present (which is also harder because if you've lived for 30 years, then by the same proportionality argument, there's just less stuff to think about if you restrict yourself to years 29-30).


I'm confused because there is also:


Experience as a Sliding Window:

There's some sort of cutoff point where I might be able to recall things, but it no longer feels "recent" or directly connected to my identity.

The feeling of recency is quite interesting to me because it seems to imply that important things are going to fade over time. And if you want to preserve certain parts of your identity, there's some sort of "upkeep" you'll need to pay, i.e. having more of those sorts of experiences consistently so they stay in recent memory.

Anyway, that's if you equate identity with memory, and that's definitely an oversimplification. But, whatever.

As new things filter in, older things drop out. I'm unsure how to square this with the theory of compounding experience. Presumably if something has effects, even if it falls out of the window, the things it influenced can continue to resound, a la the domino effect, but that feels quite contrived. The obvious answer, of course, is that there are several factors at play.

Comment by lifelonglearner on Owen Another Thing · 2019-08-08T05:21:47.066Z · score: 4 (2 votes) · LW · GW

One common theme that I return to, time and time again, is that of addictiveness. More specifically, what makes something habit-forming in a bad way? I've previously talked about this in the context of Attractors. Lately, my thing to hate on is mobile games, or the thing that they represent. Which, yes, is a little late to the game. And I don't even play games on my mobile phone, so it seems a little out of place.

But I digress. The point here is to talk about the Skinner Box. Or, the application of the same concept to human things. Gamification and notification spam both fall into this category. But maybe not games. But maybe some games. Definitely mobile games. The point here is that there's this category I want to get some clarity on, and it's about these things which seem habit-forming and suck you in.

So, what's clearly a Skinner Box? I think that clicker games are totally Skinner Boxes. Also Clash of Clans, Farmville (i.e. everything Zynga / Zynga-clones). But this line is often hazy; Candy Box was innovative and exciting in certain ways. There was a game a while back about alpacas eating one another that seemed surprisingly deep for an idle game. It's one thing to put a sophisticated veneer on a game, but it still seems fine to critique the underlying mechanics.

What does make a Skinner Box?
  1. Lack of a challenge
    1. Despite having progression, idle and clicker games don't really have anything that forces the player to do anything strategic. They just...click things, and they get reinforcement.
  2. Instant gratification
    1. Mobile games often leverage this desire by time-locking content, prompting you to pay in order to get something now. The other thing to pay attention to here is whether the feedback loop is tight.
  3. Incentives to keep going?
    1. Intermittent rewards / reward schedules
What doesn't make a Skinner Box?
  1. Skill and growth
    1. The more something is like an instrument or a sport, the less it seems like a Skinner Box. Although the many casual LoL players seem to indicate that even something which has a high skill cap can still be addictive.
  2. Meaning
    1. The more something invokes artistic purpose, narrative, or some other agenda, the more forgiving we seem to be about the actual mechanics involved.
  3. Instrumentality
    1. When we're hungry, we eat and eat and eat. And no one bats an eye. The same thing with sleep. Stuff that's useful isn't often seen as dangerous.

Comment by lifelonglearner on Owen Another Thing · 2019-08-08T05:20:57.618Z · score: 4 (3 votes) · LW · GW

It feels like there's been a push towards getting people to start creating their own content. Platforms like YouTube + the Internet make it a lot easier for people to start.

Growing an audience, though, seems hard because there's not a lot of free attention out there. Most of the competition between different pieces of content is zero-sum. People only have so much free time, so minutes they spend engaging with your stuff are minutes they don't spend engaging with other people's stuff.

There's a cynical viewpoint here, which is something like: "If you don't think you're creating Good Content, don't broadcast it! We have enough low-quality stuff out there as it is."

I think people often want to create, though. It's one of the default responses people have if you ask them "Say you could live comfortably without needing to work. What would you do then?" ("Well, I'd write. Or I'd learn to play an instrument...")

Often, though, implementation takes far more time than coming up with the initial idea. There is an asymmetry across many fields where the actual ideation is done by only a small group of people, which then requires maybe 10X as many people to actually put into practice. (EX: the people who design the look/feel of a piece of software at a company vs those who build it.)

Thus, if you want people to join your project (which is of course great because you came up with it), you'll need to convince other people to go with you. On the flip side, I think there's a skill worth practicing where you let go of idea ownership. Stuff is going to get done, and you're going to be doing it; whoever came up with the idea might be less important than whether or not you want the stuff to happen.

But maybe the desire for individual ideation points to something important. A really large number of people seem to want to partake in creative endeavors.

Comment by lifelonglearner on Owen Another Thing · 2019-08-08T05:14:07.007Z · score: 4 (2 votes) · LW · GW

Here's something that feels like another instance of the deontologist vs consequentialist abstraction, except that the particulars of the situation are what stick out to me: when I choose between doing something sane or something that's endorsed by an official rule, I'll more often than I'd like opt to do the endorsed thing, even when it's obviously worse for me.

Some examples, of varying quality:

  • Not jaywalking, even when it's in a neighborhood or otherwise not-crowded place.
  • Asking for permission to do obvious things instead of just doing them
  • Focusing on the literal words that someone initially said, rather than their intent, or if they later recant.
  • Letting harmful policies happen instead of appealing them.

I'm reminded of that study which showed that people following an evacuation robot stayed in a room even when there was a fire, and even when they had previously observed the robot to be faulty. There's something about rules that overrides appealing to sanity. I'm a little worried that I'm biased towards this side, compared to just doing the thing that works out better.

There are of course benefits to choosing the official option. The biggest one is that if someone questions your judgment later on, you can appeal to the established rules. That gives you a lot of social backing to lean on.

I think there's also a weird masochistic aspect of craving pity, of wanting to be in a situation that seems bad by virtue of nature, so I can absolve myself of responsibility. Something about how this might once have been a way to secure more resources, through a pity play?

Comment by lifelonglearner on Owen Another Thing · 2019-08-08T05:13:22.383Z · score: 4 (2 votes) · LW · GW

Malcolm Ocean gets it. There's a terrible thing that happens when you try to encapsulate your essay with a title. Somehow, the label takes on a life of its own, and you sometimes forget the content inside the essay.

This happens to my own essays where I think "Oh, huh, this essay is called 'Learning from Past Experiences'". Sounds kinda boring.

And in fact it was not boring; it was good.

I'm thinking of maybe transitioning to just numbers + summaries instead.

For example, a format like: Essay 10 [Fading novelty, ways to address it, and a brief digression into typography.]

Comment by lifelonglearner on MLU: New Blog! · 2019-08-08T00:56:49.676Z · score: 2 (1 votes) · LW · GW

Update: currently using Remarkbox for comments.

Comment by lifelonglearner on How much background technical knowledge do LW readers have? · 2019-07-12T05:46:16.398Z · score: 4 (2 votes) · LW · GW

FWIW, both linear algebra and multivariable calculus are required for students at UC San Diego, which is a large public institution.

(Although it's a little tricky as our university has sub-colleges, not all of which require both).

Comment by lifelonglearner on Can I automatically cross-post to LW via RSS? · 2019-07-08T05:52:04.505Z · score: 3 (2 votes) · LW · GW

Thanks! I'm a little worried about my own signal-to-noise ratio for now, so I'll do that later on if I feel my short-form quality improves.

Comment by lifelonglearner on Can I automatically cross-post to LW via RSS? · 2019-07-08T05:51:34.135Z · score: 2 (1 votes) · LW · GW

Meant for this to be a reply.

Comment by lifelonglearner on MLU: New Blog! · 2019-06-13T18:09:19.386Z · score: 3 (2 votes) · LW · GW

Yeah, Netlify was really easy to set up, so I recommend them!

I don't think I'll be able to migrate blog comments because I'm still not sure what I'll do for comments on the new blog, actually.

I don't want to use Disqus because it's bulky, and other options cost money, so maybe a self-hosted option...

Comment by lifelonglearner on MLU: New Blog! · 2019-06-13T02:27:04.144Z · score: 2 (1 votes) · LW · GW

Long-term, I'll probably do some more organizing to put posts into sequences / other things to improve readability.

Right now, the focus is on updating/editing old posts, so that the main page is a set of polished essays that work as standalones.

Comment by lifelonglearner on Go Do Something · 2019-05-21T15:47:12.426Z · score: 19 (6 votes) · LW · GW

This seems reasonable.

See also Yudkowsky in Inadequate Equilibria for a similar sentiment:

Try to spend most of your time thinking about the object level. If you’re spending more of your time thinking about your own reasoning ability and competence than you spend thinking about Japan’s interest rates and NGDP, or competing omega-6 vs. omega-3 metabolic pathways, you’re taking your eye off the ball.
Comment by lifelonglearner on Why books don't work · 2019-05-12T05:47:10.095Z · score: 8 (4 votes) · LW · GW

I don't think the author disagrees all that much with you. I'm reading his claim as something more like "the default attitude (some) people have towards reading does not set them up for good learning".

In the essay, he acknowledges the role that effort and metacognition play in making the actual learning happen. The actionable parts I found useful were at the end where he was hypothesizing about improved mediums, e.g. an online textbook with spaced repetition built in to facilitate recall.

Comment by lifelonglearner on Why books don't work · 2019-05-12T05:45:08.217Z · score: 9 (3 votes) · LW · GW

I think I agree that the generalizations you cited were hasty, especially as there is no formal review on those matters. I, too, find that I can get value out of books and can probably do a better job than just giving a brief summary.

Let me try to pick out the parts that I liked and see if you disagree with those:

The main part of the thesis that I found interesting was the analogy between books and lectures. In both cases, there is the potential of transmissionism as one naive way of thinking about how information gets absorbed.

It's clearer, however, that lectures in an educational setting aren't enough. Without problem sets, examples, and other applications, most of the "real learning" doesn't happen (i.e. "mathematics is not a spectator sport"). Good professors seem to recognize this and supplement their teaching accordingly.

Then, the author points out that a similar acknowledgment for books is not the norm. There is still room for improving the medium, and one example he gave was the spaced-repetition-enhanced online book. There is an undercurrent of "you as the reader need to put in effort to get value out of your reading", which I agree with. It's a different state of mind when I'm reading for insight vs reading for fun. In the first case, I might re-read passages, skip around, stop for a bit, take some notes, and so on, whereas in the second case, I'm probably just reading rather quickly from cover to cover.
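
(As a rough illustration of what "spaced repetition built in" could mean mechanically, here's a minimal Leitner-style scheduler; the interval numbers and card format are my own assumptions, not the essay's:)

```python
# A toy Leitner system: correctly answered prompts get reviewed at longer intervals.
from datetime import date, timedelta

INTERVALS = {1: 1, 2: 3, 3: 7, 4: 21, 5: 60}  # box -> days until next review

def review(card, correct, today=None):
    """Update a card dict {'box': int, 'due': date} after one review."""
    today = today or date.today()
    card['box'] = min(card['box'] + 1, 5) if correct else 1
    card['due'] = today + timedelta(days=INTERVALS[card['box']])
    return card

card = {'box': 1, 'due': date.today()}
review(card, correct=True)  # moves to box 2, due again in 3 days
```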

Comment by lifelonglearner on Crypto quant trading: Naive Bayes · 2019-05-08T20:20:50.762Z · score: 2 (1 votes) · LW · GW

I see. Thanks for providing the additional info!

Comment by lifelonglearner on Crypto quant trading: Naive Bayes · 2019-05-08T04:50:39.359Z · score: 6 (3 votes) · LW · GW

I've briefly read over both this article and the previous one in the series. Thank you for putting these together!

What I'm curious about in quant trading is the actual implementation. Once you, say, have a model which you think works, how important is latency? How do you make decisions about when to buy / sell? (Partially echoing Romeo's sentiment about curiosity around stop losses and the actual nitty-gritty of extracting value after you think you've figured something out.)

Comment by lifelonglearner on Neural networks for games · 2019-05-02T04:53:41.463Z · score: 3 (2 votes) · LW · GW

Heads-up: nowadays, when people talk about neural networks for games, they usually mean deep learning combined with reinforcement learning.

Back to your question: When you don't have a log of games, you typically have some other way of assessing performance, e.g. assigning a "score" to the state of the game, which you can quantify and optimize.

For a specific well-known example, I think this paper on training to play Atari games with deep reinforcement learning goes over a lot of the actual math / implementation details.
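
(To make the "score you can optimize" idea concrete, here's a minimal tabular Q-learning sketch; the `env` interface with `reset()`, `step()`, and `actions` is a hypothetical stand-in, not the paper's setup:)

```python
import random
from collections import defaultdict

def q_learning(env, episodes=500, alpha=0.1, gamma=0.99, eps=0.1):
    """Learn action values purely from a reward (score) signal, with no game logs.

    Assumes env exposes reset() -> state, step(action) -> (state, reward, done),
    and a list of discrete actions in env.actions.
    """
    Q = defaultdict(float)
    for _ in range(episodes):
        state, done = env.reset(), False
        while not done:
            # Epsilon-greedy: mostly exploit current score estimates, sometimes explore.
            if random.random() < eps:
                action = random.choice(env.actions)
            else:
                action = max(env.actions, key=lambda a: Q[(state, a)])
            next_state, reward, done = env.step(action)
            best_next = max(Q[(next_state, a)] for a in env.actions)
            Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
            state = next_state
    return Q
```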

Comment by lifelonglearner on AI Safety Prerequisites Course: Basic abstract representations of computation · 2019-03-14T17:06:07.770Z · score: 9 (4 votes) · LW · GW

I've looked a little bit at the RAISE website, and I've looked at the overview of curriculum topics, and I'm finding it a little...sparse, maybe? (I haven't actually looked at the class materials on grasple though, so maybe there's more stuff there.) I'm wondering how realistic it would be for someone to start engaging with MIRI-esque topics after learning just the courses RAISE has outlined.

At least for the prerequisites course, these are all topics covered throughout the first two years of a typical undergraduate computer science degree. And that doesn't seem like quite enough.

EX: TurnTrout's sequence of essays on their journey to become able to contribute towards MIRI-esque topics seems to span a much wider gamut of topics (linear algebra, analysis, etc.) at greater depth, closer to what one might cover in graduate school.

I guess, to operationalize, I'm curious about:

1. What target audience RAISE has in mind (technical people looking for a refresher, people who have had zero real exposure to technical subjects before, etc. etc.) for their materials.

2. What degree of competence RAISE expects people to come out of the curriculum with, either best-case or average-case.

3. In the best case, how many units of material do you think RAISE can produce? In other words, is it enough for students to study RAISE's material for a 6-month-long curriculum? A year-long one?

****

(Of course, it's also much easier from my position to engage with and critique existing work than to actually put in the effort to make all of this happen. I don't mean any of the above as an indictment. It's admirable and impressive that y'all have coordinated to make this happen at all!)

Comment by lifelonglearner on What exercises go best with 3 blue 1 brown's Linear Algebra videos? · 2019-01-01T22:17:50.007Z · score: 15 (6 votes) · LW · GW

If you want a proof-based approach, Linear Algebra Done Right is the typical go-to that's also on the MIRI page. I went through maybe the first 3/4ths of it, and I thought it was pretty good, in terms of number of exercises and helping you think about manipulating vector spaces, etc. in a more abstract sense.

Otherwise, I've heard good things about Gilbert Strang's MIT OCW course here: https://ocw.mit.edu/courses/mathematics/18-06-linear-algebra-spring-2010/.

In general, I think that 3B1B's videos are really good for building intuition about a concept, but trying to do exercises off of the pedagogy in his videos alone can be quite challenging, especially as he often assumes some mastery of the subject already. (EX: In the eigen-stuffs video, he doesn't actually explain how to find the eigenvalues of a matrix.)
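
(For reference, a small worked example of my own, not from the videos, of how you'd actually find them: solve det(A − λI) = 0, or just ask numpy:)

```python
# Eigenvalues of A = [[2, 1], [1, 2]]:
# det(A - l*I) = (2 - l)^2 - 1 = l^2 - 4l + 3 = (l - 1)(l - 3), so l = 1 or l = 3.
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
eigenvalues, eigenvectors = np.linalg.eig(A)
print(eigenvalues)  # [3. 1.] (order may vary)
```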

Thus, I think it makes more sense to stick to a traditional textbook / course for learning linear algebra and to use 3B1B as supplementary material for when you want a visual / different way of looking at a concept.

Also, it might be worth checking in with yourself about what you want to learn linear algebra for. I suspect there are more domain-specific resources if, for example, you cared about just the parts of linear algebra used in machine learning (dimensionality reduction, etc.).

Comment by lifelonglearner on How can I most effectively improve my writing abilities? · 2019-01-01T17:20:11.137Z · score: 8 (5 votes) · LW · GW

I think the overarching thing to do is to simply write more. To that end, there are a lot of ways to make this happen. For example: participating in NaNoWriMo, committing to writing a sequence of articles about topic X, engaging in discourse online, summarizing research papers for a larger audience, writing guest articles for a blog, or journaling.

As for improving your actual workflow, I think that the Typical Writing Class you take in school largely gets it right. For example: writing outlines / summaries before starting the actual writing, asking people to proofread, having several drafts / editing, and trying to imagine reading it from the reader's perspective without the extra context you have as a writer.

(I think most of the dissatisfaction I had with the skills taught to me during class had a lot more to do with the context of "Oh man, I have to write this thing using this technique for school and not of my own volition?" rather than the skills themselves not being very good.)

Comment by lifelonglearner on Card Collection and Ownership · 2018-12-27T17:18:50.396Z · score: 2 (1 votes) · LW · GW

Do you have any thoughts written up anywhere on what a game you'd design would look like? I've been following all of your MTG / digital CCG articles with a lot of interest; I'd be really curious which aspects of game design you'd like to incorporate.

Comment by lifelonglearner on What self-help has helped you? · 2018-12-21T16:31:39.843Z · score: 5 (3 votes) · LW · GW

I am wondering if there are commonalities between what different self-help things are doing. For example, it seems that a lot of self-help is focused on changing our default actions, a la debiasing, so there is a train of thought that starts with cognitive biases and goes from there.

Comment by lifelonglearner on What self-help has helped you? · 2018-12-21T02:47:08.161Z · score: 4 (3 votes) · LW · GW

A related question I'm wondering about is: "Why does self-help work? What is it doing?"

Not 100% related to the question, but maybe the discussion here in the comments could spark more (or another question).

Comment by lifelonglearner on Letting Others Be Vulnerable · 2018-11-21T17:17:16.175Z · score: 3 (2 votes) · LW · GW

Oh, right, yes, I didn't think about that.

That is a good example of a case where the advice I listed above, as well as the models that follow, breaks down.

Comment by lifelonglearner on Letting Others Be Vulnerable · 2018-11-21T04:44:04.647Z · score: 2 (1 votes) · LW · GW

Oh, right. I think an implicit thing here is "for people you want to be close to, this makes sense to do".

In other cases, as I sorta skimmed over, having simplified models, relying on norms/roles, etc. etc. is usually enough to get by.

Comment by lifelonglearner on Alignment Newsletter #30 · 2018-10-30T02:48:07.128Z · score: 7 (4 votes) · LW · GW

The Learning from Untrusted Data paper was really clear, and I found the results surprising. Thanks for sharing and for consistently putting this out week after week. I really like the combination of the summary + your personal thoughts.

Comment by lifelonglearner on What will the long-term future of employment look like? · 2018-10-24T22:51:25.691Z · score: 4 (2 votes) · LW · GW

Specifically for the example of social influencers, I wonder if roles like this might be limited in capacity because, by their very nature, they require a large following. Given the way popularity and status tend to be non-uniformly distributed across social groups, it might not be a role that very many people can fit into.

But overall, I think the underlying theme of humans-as-valuable-because-they-are-human and the example of Etsy charging a premium on human-made things hits the nail on the head as to where residual value might lie in an automated future.

I suppose one corollary, too, is that it will be human desires which comprise the demand for these goods. If it turns out that automation also shifts the demand for certain services, via machines sending orders to machines, then this might also limit future options for humans?

Comment by lifelonglearner on LW Update 2018-10-01 – Private Messaging Works · 2018-10-02T04:41:06.023Z · score: 4 (2 votes) · LW · GW

Bug report: I can no longer view old messages.

I clicked on my profile, went to private messages, and then clicked on something from the conversations sidebar.

Every message had the following red error:

"Error: TypeError: Cannot read property '_id' of null"

Comment by lifelonglearner on Thoughts on tackling blindspots · 2018-09-27T14:50:37.150Z · score: 10 (6 votes) · LW · GW

You might already be aware, but there's the Unofficial CFAR Canon List, which someone put together a while back and which compiles a lot of their earlier material (though some things have since changed).

If you're looking for more derivative content written by people who have gone to CFAR workshops:

There's also the Hammertime sequence alkjash wrote and the Instrumental Rationality sequence I wrote.

Comment by lifelonglearner on Owen's short-form blog · 2018-09-16T04:27:02.248Z · score: 2 (1 votes) · LW · GW

Yup! I just set one up here. I just grabbed an off-the-shelf Python package, so let me know if there are any issues.

Comment by lifelonglearner on Changing main content font to Valkyrie? · 2018-08-25T02:02:20.917Z · score: 3 (2 votes) · LW · GW

Note: I am on Windows and find the font fine. I am wondering how many people on Windows find it hard to read?

Comment by lifelonglearner on Rationalist Community Hub in Moscow: 3 Years Retrospective · 2018-08-25T02:00:18.314Z · score: 9 (6 votes) · LW · GW

Thanks for giving us this English overview! I was very curious when someone mentioned Kocherga in a comment a while back, but the site was in Russian and not very clear.

This is really cool to read about!

Comment by lifelonglearner on Is there a practitioner's guide for rationality? · 2018-08-13T17:26:52.582Z · score: 6 (4 votes) · LW · GW

Ikaxas has already linked to alkjash's sequence, which is excellent. I also wrote a sequence, mostly on habits and planning here.

Comment by lifelonglearner on A Rationalist's Guide to... · 2018-08-09T23:09:38.266Z · score: 10 (3 votes) · LW · GW

I think this is largely correct and points at where some of the larger bottlenecks are.

It's not about finding a list of good resources. There are a lot of those already. It's about what happens next. Things like:

  • Getting yourself to actually read said resources.
  • Figuring out ways of making the material stick.
  • Looking for applications, tracking your progress.
  • Repeating all of the above, over and over.

Comment by lifelonglearner on Open Thread August 2018 · 2018-08-09T21:11:02.231Z · score: 2 (1 votes) · LW · GW

Also interested!

Comment by lifelonglearner on Hammertime Day 8: Sunk Cost Faith · 2018-08-04T21:59:16.071Z · score: 6 (4 votes) · LW · GW

The original author had them removed. You can find quite a few of them now on Medium instead, here: https://medium.com/@ThingMaker

Comment by lifelonglearner on Generating vs Recognizing · 2018-07-14T14:29:04.586Z · score: 2 (1 votes) · LW · GW

Oh, right, yup. The P vs NP analog is a very good parallel.

Comment by lifelonglearner on The Craft And The Codex · 2018-07-10T00:53:36.299Z · score: 15 (4 votes) · LW · GW

Here's a comment by Montfort that I liked from the SSC comments:

I can sort of buy the idea of the comments as a dojo – but there’s no personalized instruction or membership fees or belts or anything. The learning here is extremely self-directed, success is hard to judge objectively, and commenters come and go all the time. Still, even just an empty building with some mats where people can show up and practice is something.

That is, if you wanted to design a place to practice “rationality” skills from the ground up, I’m not sure it would look like this. But the blog and comment section can serve multiple purposes at once.

I have competing intuitions about just how much structure and scaffolding is useful for practice. I don't think schools do a lot of things right, but at least here in the US, we can count on them getting people to be literate and able to do basic arithmetic. I'm wondering what a "minimum set" of rationality techniques would look like, things you could teach to people at scale.

Comment by lifelonglearner on RAISE is looking for full-time content developers · 2018-07-10T00:48:57.741Z · score: 10 (5 votes) · LW · GW

It's actually European convention vs US: https://www.quora.com/Why-do-some-countries-use-a-period-and-others-use-a-comma-to-separate-large-numbers

Comment by lifelonglearner on Problem Solving with Mazes and Crayon · 2018-06-24T13:50:24.318Z · score: 6 (1 votes) · LW · GW

I really like the maze analogy! The pictures help a whole bunch in explaining the concepts!

Related: I don't know if this has a direct analog to a CS algorithm, but one other heuristic which seemed like it might have been good to try was drawing a line from both start and finish incrementally to see how they could touch in the middle.
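
(For what it's worth, that heuristic looks a lot like bidirectional search. A rough sketch below; the adjacency-dict "maze" is just my own toy illustration:)

```python
# Expand frontiers from both the start and the finish until they touch in the middle.
from collections import deque  # noqa: F401  (deque is the usual choice; sets suffice here)

def bidirectional_bfs(graph, start, goal):
    """Return True if start and goal are connected; graph maps a cell to its open neighbors."""
    if start == goal:
        return True
    frontiers = [{start}, {goal}]
    visited = [{start}, {goal}]
    while frontiers[0] and frontiers[1]:
        i = 0 if len(frontiers[0]) <= len(frontiers[1]) else 1  # grow the smaller side
        next_frontier = set()
        for node in frontiers[i]:
            for nbr in graph[node]:
                if nbr in visited[1 - i]:   # the two lines touched in the middle
                    return True
                if nbr not in visited[i]:
                    visited[i].add(nbr)
                    next_frontier.add(nbr)
        frontiers[i] = next_frontier
    return False

# Tiny example maze as an adjacency dict.
maze = {'A': ['B'], 'B': ['A', 'C'], 'C': ['B', 'D'], 'D': ['C']}
print(bidirectional_bfs(maze, 'A', 'D'))  # True
```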