Three Stories for How AGI Comes Before FAI 2019-09-17T23:26:44.150Z · score: 24 (7 votes)
How to Make Billions of Dollars Reducing Loneliness 2019-08-30T17:30:50.006Z · score: 59 (26 votes)
Response to Glen Weyl on Technocracy and the Rationalist Community 2019-08-22T23:14:58.690Z · score: 52 (26 votes)
Proposed algorithm to fight anchoring bias 2019-08-03T04:07:41.484Z · score: 10 (2 votes)
Raleigh SSC/LW/EA Meetup - Meet MealSquares People 2019-05-08T00:01:36.639Z · score: 12 (3 votes)
The Case for a Bigger Audience 2019-02-09T07:22:07.357Z · score: 69 (27 votes)
Why don't people use formal methods? 2019-01-22T09:39:46.721Z · score: 21 (8 votes)
General and Surprising 2017-09-15T06:33:19.797Z · score: 3 (3 votes)
Heuristics for textbook selection 2017-09-06T04:17:01.783Z · score: 8 (8 votes)
Revitalizing Less Wrong seems like a lost purpose, but here are some other ideas 2016-06-12T07:38:58.557Z · score: 24 (29 votes)
Zooming your mind in and out 2015-07-06T12:30:58.509Z · score: 8 (9 votes)
Purchasing research effectively open thread 2015-01-21T12:24:22.951Z · score: 12 (13 votes)
Productivity thoughts from Matt Fallshaw 2014-08-21T05:05:11.156Z · score: 13 (14 votes)
Managing one's memory effectively 2014-06-06T17:39:10.077Z · score: 14 (15 votes)
OpenWorm and differential technological development 2014-05-19T04:47:00.042Z · score: 6 (7 votes)
System Administrator Appreciation Day - Thanks Trike! 2013-07-26T17:57:52.410Z · score: 70 (71 votes)
Existential risks open thread 2013-03-31T00:52:46.589Z · score: 10 (11 votes)
Why AI may not foom 2013-03-24T08:11:55.006Z · score: 23 (35 votes)
[Links] Brain mapping/emulation news 2013-02-21T08:17:27.931Z · score: 2 (7 votes)
Akrasia survey data analysis 2012-12-08T03:53:35.658Z · score: 13 (14 votes)
Akrasia hack survey 2012-11-30T01:09:46.757Z · score: 11 (14 votes)
Thoughts on designing policies for oneself 2012-11-28T01:27:36.337Z · score: 80 (80 votes)
Room for more funding at the Future of Humanity Institute 2012-11-16T20:45:18.580Z · score: 18 (21 votes)
Empirical claims, preference claims, and attitude claims 2012-11-15T19:41:02.955Z · score: 5 (28 votes)
Economy gossip open thread 2012-10-28T04:10:03.596Z · score: 23 (30 votes)
Passive income for dummies 2012-10-27T07:25:33.383Z · score: 17 (22 votes)
Morale management for entrepreneurs 2012-09-30T05:35:05.221Z · score: 9 (14 votes)
Could evolution have selected for moral realism? 2012-09-27T04:25:52.580Z · score: 4 (14 votes)
Personal information management 2012-09-11T11:40:53.747Z · score: 18 (19 votes)
Proposed rewrites of LW home page, about page, and FAQ 2012-08-17T22:41:57.843Z · score: 18 (19 votes)
[Link] Holistic learning ebook 2012-08-03T00:29:54.003Z · score: 10 (17 votes)
Brainstorming additional AI risk reduction ideas 2012-06-14T07:55:41.377Z · score: 12 (15 votes)
Marketplace Transactions Open Thread 2012-06-02T04:31:32.387Z · score: 29 (30 votes)
Expertise and advice 2012-05-27T01:49:25.444Z · score: 17 (22 votes)
PSA: Learn to code 2012-05-25T18:50:01.407Z · score: 34 (39 votes)
Knowledge value = knowledge quality × domain importance 2012-04-16T08:40:57.158Z · score: 8 (13 votes)
Rationality anecdotes for the homepage? 2012-04-04T06:33:32.097Z · score: 3 (8 votes)
Simple but important ideas 2012-03-21T06:59:22.043Z · score: 20 (25 votes)
6 Tips for Productive Arguments 2012-03-18T21:02:32.326Z · score: 30 (45 votes)
Cult impressions of Less Wrong/Singularity Institute 2012-03-15T00:41:34.811Z · score: 34 (59 votes)
[Link, 2011] Team may be chosen to receive $1.4 billion to simulate human brain 2012-03-09T21:13:42.482Z · score: 8 (15 votes)
Productivity tips for those low on motivation 2012-03-06T02:41:20.861Z · score: 7 (12 votes)
The Singularity Institute has started publishing monthly progress reports 2012-03-05T08:19:31.160Z · score: 21 (24 votes)
Less Wrong mentoring thread 2011-12-29T00:10:58.774Z · score: 31 (34 votes)
Heuristics for Deciding What to Work On 2011-06-01T07:31:17.482Z · score: 20 (23 votes)
Upcoming meet-ups: Auckland, Bangalore, Houston, Toronto, Minneapolis, Ottawa, DC, North Carolina, BC... 2011-05-21T05:06:08.824Z · score: 5 (8 votes)
Being Rational and Being Productive: Similar Core Skills? 2010-12-28T10:11:01.210Z · score: 18 (31 votes)
Applying Behavioral Psychology on Myself 2010-06-20T06:25:13.679Z · score: 53 (60 votes)
The Math of When to Self-Improve 2010-05-15T20:35:37.449Z · score: 6 (16 votes)
Accuracy Versus Winning 2009-04-02T04:47:37.156Z · score: 12 (21 votes)


Comment by john_maxwell on [Site Feature] Link Previews · 2019-09-18T19:53:40.563Z · score: 2 (1 votes) · LW · GW

Nice! I know people have complained about jargon use on LW in the past. Have you thought about an option new users could activate which autolinks jargon to the corresponding wiki entry/archive post?

Comment by john_maxwell on Meetups as Institutions for Intellectual Progress · 2019-09-17T23:55:00.358Z · score: 4 (3 votes) · LW · GW

Exciting stuff!

On the other hand, many of the times that I've previously tried to publicly take action in this space, I got shot down pretty harshly. So I'd love to hear why you think my proposals are naïve / misguided / going to pollute the commons / going to crash and burn before I launch anything this time. Concrete critiques and proposals of alternatives would be greatly appreciated.

I think it's unfortunate that you were shot down this way. I think caution of this sort would be well-justified if we were in the business of operating a nuclear reactor or something like that. But as things are, I expect that even if one of your meetup experiments failed, it would give us useful data.

Maybe it's not that people are against trying new things, it's just that those who disagree are more likely to comment than those who agree.

One activity which I think could be fun, useful, and a good fit for the meetup format is brainstorming. You could have one or several brainstorming prompts every month (example prompt: "How can Moloch be defeated?") and ask meetups to brainstorm based on those prompts and send you their ideas, and then you could assemble those into a global masterlist which credits the person who originated each idea (a bit like the Junto would gather the best ideas from each subgroup, I think). You could go around to various EA organizations and ask them for prompt ideas, for topics that EA organization wants more ideas on. For example, maybe Will MacAskill would request ideas for what Cause X might be. Maybe Habryka would ask for feature ideas for LW. You could offer brainstorming services publicly--maybe Mark Zuckerberg would ask for ideas on how to improve Facebook (secretly, through Julia Galef). You could have a brainstorming session for brainstorming prompts. You could suggest brainstorming protocols or give people a video to play or have a brainstorming session for brainstorming protocols (recursive self-improvement FTW).

Comment by john_maxwell on Distance Functions are Hard · 2019-09-15T17:59:21.824Z · score: 2 (1 votes) · LW · GW

^ I don't see how?

No human labor: Just compute the function. Fast experiment loop: Computers are faster than humans. Reproducible: Share the code for your function with others.

I'm talking about interactive training

I think for a sufficiently advanced AI system, assuming it's well put together, active learning can beat this sort of interactive training--the AI will be better than humans at identifying & fixing potential weaknesses in its own models.

Adversarial examples suggest we should be worried that apparently similar concepts will actually be wildly different in non-obvious ways.

I think the problem with adversarial examples is that deep neural nets don't have the right inductive biases. I expect meta-learning approaches which identify & acquire new inductive biases (in order to determine "how to think" about a particular domain) will solve this problem and will also be necessary for AGI anyway.

BTW, different human brains appear to learn different representations (previous discussion), and yet we are capable of delegating tasks to each other.

I'm cautiously optimistic, since this could make things a lot easier.


My problem with that argument is that it seems like we will have so many chances to fuck up that we would need 1) AI systems to be extremely reliable, or 2) for catastrophic mistakes to be rare, and minor mistakes to be transient or detectable. (2) seems plausible to me in many applications, but probably not all of the applications where people will want to use SOTA AI.

Maybe. But my intuition is that if you can create a superintelligent system, you can make one which is "superhumanly reliable" even in domains which are novel to it. I think the core problems for reliable AI are very similar to the core problems for AI in general. An example is the fact that solving adversarial examples and improving classification accuracy seem intimately related.

I think the significant features of RL here are: 1) having the goal of understanding the world and how to influence it, and 2) doing (possibly implicit) planning.

In what sense does RL try to understand the world? It seems very much not focused on that. You essentially have to hand it a reasonably accurate simulation of the world (i.e. a world that is already fully understood, in the sense that we have a great model for it) for it to do anything interesting.

If the planning is only "implicit", RL sounds like overkill and probably not a great fit. RL seems relatively good at long sequences of actions for a stateful system we have a great model of. If most of the value can be obtained by planning 1 step in advance, RL seems like a solution to a problem you don't have. It is likely to make your system less safe, since planning many steps in advance could let it plot some kind of treacherous turn. But I also don't think you will gain much through using it. So luckily, I don't think there is a big capabilities vs safety tradeoff here.

I think having general knowledge will be very valuable, and hard to replicate with a network of narrow systems.

Agreed. But general knowledge is also not RL, and is handled much more naturally in other frameworks such as transfer learning, IMO.

So basically I think daemons/inner optimizers/whatever you want to call them are going to be the main safety problem.

Comment by john_maxwell on Distance Functions are Hard · 2019-09-14T02:26:57.918Z · score: 2 (1 votes) · LW · GW

They're a pain because they involve a lot of human labor, slow down the experiment loop, make reproducing results harder, etc.

I see. How about doing active learning of computable functions? That solves all 3 problems.

Instead of standard benchmarks, you could offer an API which provides an oracle for some secret functions to be learned. You could run a competition every X months and give each competition entrant a budget of Y API calls over the course of the competition.
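To make the competition idea concrete, here's a rough sketch of what such an oracle-with-budget API might look like (all names here are hypothetical, just for illustration):

```python
import math

class SecretFunctionOracle:
    """Serves labels for a hidden target function, up to a fixed call budget."""

    def __init__(self, secret_fn, budget):
        self._fn = secret_fn
        self.calls_remaining = budget

    def query(self, x):
        if self.calls_remaining <= 0:
            raise RuntimeError("API call budget exhausted")
        self.calls_remaining -= 1
        return self._fn(x)

# An entrant actively chooses which points to spend budget on, e.g. by
# querying wherever its current model is most uncertain.
oracle = SecretFunctionOracle(secret_fn=lambda x: math.sin(x) > 0, budget=100)
label = oracle.query(1.5)  # True, since sin(1.5) > 0
```

The interesting design question is then how entrants allocate their Y calls--uniform sampling versus uncertainty-driven active querying.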

RE self-supervised learning: I don't see why we needed the rebranding (of unsupervised learning).

Well I don't see why neural networks needed to be rebranded as "deep learning" either :-)

When I talk about "self-supervised learning", I refer to chopping up your training set into automatically created supervised learning problems (predictive processing), which feels different from clustering/dimensionality reduction. It seems like a promising approach regardless of what you call it.
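A toy illustration of what I mean by "chopping up your training set into automatically created supervised learning problems"--mask each position in an unlabeled sequence and treat predicting it as a supervised task:

```python
def make_masked_examples(tokens, mask_token="<MASK>"):
    """Turn one unlabeled sequence into many supervised (input, target)
    pairs by masking out each position in turn -- no human labels needed."""
    examples = []
    for i, target in enumerate(tokens):
        masked = tokens[:i] + [mask_token] + tokens[i + 1:]
        examples.append((masked, target))
    return examples

pairs = make_masked_examples(["the", "cat", "sat"])
# e.g. (["<MASK>", "cat", "sat"], "the") is one automatically created problem
```

Every chunk of unlabeled data yields supervised problems for free, which is what makes the framing feel different from clustering/dimensionality reduction to me.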

I don't see why it would make alignment straightforward (ETA: except to the extent that you aren't necessarily, deliberately building something agenty).

In order to make accurate predictions about reality, you need to understand humans, because humans exist in reality. So at the very least, a superintelligent self-supervised learning system trained on loads of human data would have a lot of conceptual building blocks (developed in order to make predictions about its training data) which could be tweaked and combined to make predictions about human values (analogous to fine-tuning in the context of transfer learning). But I suspect fine-tuning might not even be necessary. Just ask it what Gandhi would do or something like that.

Re: gwern's article, RL does not seem to me like a good fit for most of the problems he describes. I agree active learning/interactive training protocols are powerful, but that's not the same as RL.

Autonomy is also nice (and also not the same as RL). I think the solution for autonomy is (1) solve calibration/distributional shift, so the system knows when it's safe to act autonomously (2) have the system adjust its own level of autonomy/need for clarification dynamically depending on the apparent urgency of its circumstances. I have notes for a post about (2), let me know if you think I should prioritize writing it.
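As a toy sketch of (1) + (2)--everything here is hypothetical--a confidence-gated policy that acts autonomously only when calibrated confidence clears a threshold, and relaxes that threshold somewhat when circumstances seem urgent:

```python
def act_or_ask(predict_with_confidence, observation, urgency, base_threshold=0.95):
    """Act autonomously only when calibrated confidence clears a threshold;
    loosen the threshold a bit when the situation is urgent (urgency in [0, 1])."""
    action, confidence = predict_with_confidence(observation)
    threshold = base_threshold * (1.0 - 0.2 * urgency)
    if confidence >= threshold:
        return ("act", action)
    return ("ask_human", action)

# A toy model: confident on familiar inputs, unsure on novel ones.
model = lambda obs: ("brake", 0.99) if obs == "obstacle_ahead" else ("coast", 0.5)
print(act_or_ask(model, "obstacle_ahead", urgency=0.0))   # ('act', 'brake')
print(act_or_ask(model, "novel_situation", urgency=0.0))  # ('ask_human', 'coast')
```

The real work, of course, is in making the confidence estimate actually calibrated under distributional shift--this just shows where that estimate would plug in.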

Comment by john_maxwell on Contest: $1,000 for good questions to ask to an Oracle AI · 2019-09-13T01:21:16.186Z · score: 2 (1 votes) · LW · GW

I know the contest is over, but this idea for a low-bandwidth oracle might be useful anyhow: Given a purported FAI design, what is the most serious flaw? Then highlight lines from the FAI design description, plus given a huge corpus of computer science papers, LW/AF posts, etc. highlight relevant paragraphs from those as well (perhaps using some kind of constraint like "3 or fewer paragraphs highlighted in their entirety") that, taken together, come closest to pinpointing the issue. We could even give it a categorization scheme for safety problems we came up with, and it could tell us which category this particular problem comes closest to falling under. Or offer it categories a particular hint could fall under to choose from, such as "this is just an analogy", "keep thinking along these lines", etc. Then do the same and ask it to highlight text which leads to a promising solution. The rationale being that unforeseen difficulties are the hardest part of alignment, but if there's a flaw, it will probably be somehow analogous to a problem we've seen in the past, or will be addressable using methods which have worked in the past, or something. But it's hard to fit "everything we've seen in the past" into one human head.

Comment by john_maxwell on Distance Functions are Hard · 2019-09-13T00:14:47.515Z · score: 2 (1 votes) · LW · GW

Why are highly interactive training protocols a massive pain?

Do you have any thoughts on self-supervised learning? That's my current guess for how we'll get AGI, and it's a framework that makes the alignment problem seem relatively straightforward to me.

Comment by john_maxwell on The Power to Judge Startup Ideas · 2019-09-05T01:41:53.054Z · score: 9 (4 votes) · LW · GW

Great post. I think you have put your finger on something important and under-appreciated.

However, I just want to note that being on stage can sometimes cause people's brains to freeze up. So it may very well be that particular founder had a compelling USP and just wasn't able to articulate it well under pressure.

I'm also not sure how I feel about the Golden example. I think improving on Wikipedia is possible, but you first have to reach parity with Wikipedia, which is a big project. And I can imagine extracting some stories out of what Jude Gomila tweeted, e.g. someone doesn't trust Wikipedia and wants a more factually accurate source, especially on an obscure topic, or just finds that Golden articles are generally higher quality. I think these kinds of "breadth-first" startup ideas, where you try to provide small amounts of added value to a large number of people and use cases, can be harder to get off the ground. But they can also work really well. It could potentially be the case that Golden will be to Wikipedia what Google was to AltaVista. Although yes, maybe you are better off moving vertical by vertical instead--for example, maybe Google should have initially focused on providing great search results in some particular underserved domain where their algorithms were really outperforming existing solutions. But in the end it seems to have worked out.

Anyway, I'd be curious to hear how you think this all applies to Quixey, as I feel like this post kind of gets at why I was never very optimistic :) [To be a bit more concrete, I remember you saying that Quixey would be like Google for apps because apps are the future, and I just found it totally implausible that people would ever search for apps at anything like the volume at which they search for webpages--my phone has room for ~100 apps on it, tops, but I probably have 100 pages in my browser history over just the past few days. So even if there's a specific story about searching for an app, it didn't seem compelling enough to generate substantial ad revenue, and obviously getting people to pay for search capabilities is gonna be tough.] Do you feel like your ability to judge startup ideas has improved since selecting the idea for Quixey? (Relationship Hero seems like a much better idea to me, for whatever that's worth.)

Comment by john_maxwell on Self-supervised learning & manipulative predictions · 2019-09-04T10:14:19.027Z · score: 2 (1 votes) · LW · GW

Thanks for the thoughts!

This description seems rather different than your original beam search story, no? In your original story, you were describing an incentive the system had to direct the world in order to make it easier to predict. I don't see how this incentive arises here.

I'm not entirely convinced that predictions should be made in a way that's completely divorced from their effects on the world. For example, the prediction "You aren't going to think about ice cream" would appear to be self-falsifying. It seems like the most useful AI system would be one whose predictions tend to remain true even after being made.

(By the way, I hope I'm not coming across as antagonistic in this thread--I'm still replying because I think this is a really important topic and I'm hoping we can hammer it out together! And I think a crisp description of a problem is frequently the first step to solving it.)

Comment by john_maxwell on Tiddlywiki for organizing notes and research · 2019-09-02T22:20:34.891Z · score: 3 (2 votes) · LW · GW


I've found that my knowledge management workflow works like this: I have particular future situations I anticipate being in, and I want to keep a list of things I'd like to be reminded of in each particular situation. (This is very broad and could include stuff like: I decide I want to write post X, or start company Y, or I'm struggling with problem Z, etc.) Usually this doesn't require a lot of nonlinear structure; I'm basically just looking at a bunch of lists. The most important feature is a hotkey for jumping to the list in question. Here is an old post with more details (which also has comments from others on personal knowledge management):

The thing I'm least satisfied with is when bits of text belong in multiple lists, but the intersection of the two list domains doesn't feel broad enough to justify the overhead of a dedicated list. It sounds like the tiddler concept handles this situation pretty well. I wonder if autocompleting tags could be an effective substitute for my jump-to-note hotkey...

Comment by john_maxwell on How Much is Your Time Worth? Or Why You Should Buy an AC · 2019-09-02T22:06:08.117Z · score: 3 (2 votes) · LW · GW

I think the case for an AC is stronger than some of the other ideas, because an AC increases energy as well as saving time. Doing laundry takes time, but it doesn't feel to me like it takes much energy--it might even increase energy (light physical activity through the day = good for productivity?)

Comment by john_maxwell on Self-supervised learning & manipulative predictions · 2019-09-02T21:52:10.278Z · score: 3 (2 votes) · LW · GW

Hm, I think we're talking past each other a bit. What I was trying to get at was: When we're doing self-supervised learning, we're optimizing an objective function related to the quality of the system's internal knowledge representations. My suggestion was that this internal objective function should have a term for the accuracy with which the system is able to predict masked bits of existing knowledge, but not a term for the accuracy of hypothesized future predictions a la beam search. Then we can use the system interactively as follows:

  1. Give it some data.
  2. Do self-supervised learning on the data, optimizing the quality of internal knowledge representations with a "short-sighted" objective function like I described.
  3. Use these knowledge representations to make predictions of interest.
  4. Repeat as needed.

What I'm looking for is a crisp description of why accurate self-knowledge (including knowledge of the interaction loop) is dangerous in this framework.
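As a toy sketch of the loop above (with a deliberately silly stand-in model--the point is just that the objective only ever scores the model on data it has already seen, never on hypothesized future data):

```python
from collections import Counter

class FrequencyModel:
    """Toy stand-in: the 'internal representations' are just token counts."""
    def __init__(self):
        self.counts = Counter()
    def fit_masked_prediction(self, batch):
        self.counts.update(batch)  # short-sighted: scored on existing data only
    def predict(self, _query):
        return self.counts.most_common(1)[0][0]

def interactive_loop(model, data_stream, queries):
    results = []
    for batch in data_stream:               # step 1: give it some data
        model.fit_masked_prediction(batch)  # step 2: short-sighted objective
        for q in queries:                   # step 3: read off predictions
            results.append((q, model.predict(q)))
    return results                          # step 4: repeat as needed

out = interactive_loop(FrequencyModel(), [["a", "b", "a"]], ["next?"])
# → [("next?", "a")]
```

Notice there's no term anywhere that rewards the model for steering which data arrives in later batches--that's the property I'm claiming matters.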

Comment by john_maxwell on How to Make Billions of Dollars Reducing Loneliness · 2019-09-01T06:27:52.208Z · score: 2 (1 votes) · LW · GW

Nice! For anyone who found my world improvement/money-making pitch persuasive, it looks like Bungalow is hiring, especially in the Toronto area:

Comment by john_maxwell on Self-supervised learning & manipulative predictions · 2019-09-01T03:16:28.792Z · score: 2 (1 votes) · LW · GW

Can you be crisper about why you think 1C & 1D are necessary?

Comment by john_maxwell on How to Make Billions of Dollars Reducing Loneliness · 2019-09-01T02:32:09.276Z · score: 2 (1 votes) · LW · GW


Comment by john_maxwell on How to Make Billions of Dollars Reducing Loneliness · 2019-08-31T23:48:26.809Z · score: 3 (3 votes) · LW · GW

Interesting points. However, consider the possibility that your perspective on this is colored by the fact that you are from Russia :P

Comment by john_maxwell on How to Make Billions of Dollars Reducing Loneliness · 2019-08-31T18:45:06.967Z · score: 2 (1 votes) · LW · GW

By the way, people live in bubbles, so it's hard to estimate how many have the "loneliness crisis".

There's also a selection effect: Lonely people have fewer friends, so you're less likely to know them.

Comment by john_maxwell on How to Make Billions of Dollars Reducing Loneliness · 2019-08-31T01:03:21.553Z · score: 6 (4 votes) · LW · GW

I wonder why nobody I know uses them, but dating apps are very popular?

I think you should be careful with this style of reasoning, because you can use it to explain away any sort of exploitable market inefficiency. For example, prior to Airbnb, we could ask: Why don't people offer spare rooms in their house to strangers for cash on a temporary basis? And in response, we could generate many valid answers: People are uncomfortable with strangers staying in their house. The strangers could steal something or break something. There may be relevant zoning laws, and established players (hotel chains) who will try to shut you down using regulatory force. Etc. I believe these are all, in fact, issues that Airbnb has faced. Y Combinator hesitated to fund Airbnb due to the "uncomfortable with strangers staying in their house" thing, but in the end, they decided to go forward because they liked how scrappy the founders were. Of course, it paid off massively, and Airbnb is now one of the most valuable companies in Y Combinator's portfolio.

I agree there are challenges in the roommate matching space which aren't present in the dating space. But there are also challenges in the dating space which aren't present to the same degree in the roommate matching space. (Who's going to want to date weirdos who need the internet for romance? How do you deal with user harassment? How are online profiles supposed to predict that ineffable romantic spark when so much depends on behavioral and perhaps pheromone-related factors? Etc.)

But there's a big reason why this opportunity is a lot more attractive than online dating: The possibility of making a lot more money per user. Remember most OKCupid users aren't paying, and paying users only bring in $10-20 per month. Furthermore, you can keep making money even after a match has been made, whereas with online dating, once you successfully match someone and they delete your app, you're done. If people are willing to pay say an extra $100/month for great roommates, which seems totally plausible given they are already paying that to live in a somewhat more hip part of their city, it becomes worth your while to overcome a lot of problems.

Many of your points are barriers to shared housing in general, not barriers to this service in particular. I would argue that if done right, the service I'm proposing actually offers the potential to greatly mitigate a lot of the hurdles you describe.

  • You write: "There are more than two people involved, and the difficulty of finding communal compatibility complexifies geometrically with the number of roommates." Yes, determining compatibility among N roommates is an O(N²) operation. But the constant factor matters a great deal. With the existing roommate matching system, each pair of potential roommates has to meet and spend substantial time together before moving in, or there's incompatibility risk. With the thing I'm describing, we could have a statistical model which gets rid of a lot of the uncertainty up front, in a few seconds of computer time at most.

  • You write: "By the same token, people moving in and out happens more frequently with larger numbers of room mates, often with short notice, making it hard to keep a stable equilibrium of preferences." The number of roommates stays the same whether or not we are using the internet to coordinate: In each case, it's the number of rooms in the house. Since a hypothetical roommate matching service has a birds-eye view of everyone who's searching for rooms, it can deal better with managing house cultural changes in a sensible way when multiple rooms open up at once. You could even do sophisticated stuff which is infeasible in the current system, like identifying roommate compatibility clusters and matching them with houses of the right size.

  • I agree "dating housemates" has to be handled carefully. However, note first that this isn't even possible with the existing system, because there's too much paperwork involved in signing & breaking leases in order to try out different houses. (Or more realistically, the unpaid volunteer group house leaders in the existing system don't want to deal with the necessary overhead.) Anyway, worst case you can just leave it out of the plan. Second, I think it could work just fine if you had 3 established houses which each had an empty room and someone wanted to try living in each house for 1 month. I don't think the residents of the 2 houses which weren't chosen would have their feelings hurt to a significant degree.

  • I agree you'd want to talk to a lawyer and figure out the best legal arrangement for the service to use. But better roommate compatibility predictions means this should be less of an issue to start with. Same for the higher effort and commitment barrier point. These are points in favor of a more sophisticated matching system, not against.

  • I don't think a new dream is a prerequisite. People who move to a new city already want to make new friends, and living in various houses for brief periods can help with that goal. And people are already having fewer kids. That said, the baugruppe idea probably does require a cultural shift.

  • I tried to start a group house recently, but gave up after realizing it would be too much work to find people and stuff. I wish a service like what I describe existed.

Shared housing is already a multi-billion dollar market despite the hurdles you mention. People find roommates on Craigslist etc. all the time. If you can capture 10% of the existing shared housing market, that's a multi-billion dollar opportunity right there, which should easily put you among the top ten Effective Altruist donors. If you're able to expand the shared housing market, that's gravy from a financial perspective.

Finally, I want to point out that I actually did posit a shift in the underlying economic factors which could make this a profitable opportunity now even if it wasn't in the past: The increase in loneliness. In other words, a billion-dollar bill may have appeared on the sidewalk just recently. See also this blog post.

Comment by john_maxwell on A new rationality YouTube channel emerges · 2019-08-30T16:23:56.637Z · score: 4 (2 votes) · LW · GW

Ah I didn't see he was asking for criticism, that's fair.

Comment by john_maxwell on A new rationality YouTube channel emerges · 2019-08-30T04:19:56.216Z · score: -4 (9 votes) · LW · GW

Also, in general, if you observe someone doing something that seems like a suboptimal use of time to you, before being critical, it might be worthwhile to take a guess as to what this person will do with their time if they abandon this project. Like, it could very well be that alternatives are not "YouTube video creation" vs "super optimized project bendini would approve of" so much as "YouTube video creation" vs "YouTube video consumption" ;-)

And if you want to convince someone to adopt a superior alternative, I humbly suggest putting on your salesperson hat and finding a way to create desire instead of scolding. After all, for today's discount price of acquiring some salesperson skills, you could help launch 100 high-impact projects ;-)

Comment by john_maxwell on A new rationality YouTube channel emerges · 2019-08-30T04:11:26.442Z · score: 4 (3 votes) · LW · GW

Here's another successful one:

Is there a reason to think the success rate for rationality channels is below the success rate of new channels in general?

Comment by john_maxwell on The Missing Math of Map-Making · 2019-08-30T03:52:30.479Z · score: 2 (1 votes) · LW · GW

For instance, suppose Google collects a bunch of photos from the streets of New York City, then produces a streetmap from it. The vast majority of the information in the photos is thrown away in the process - how do we model that mathematically? How do we say that the map is “accurate”, despite throwing away all that information? More generally, maps/beliefs tend to involve some abstraction - my beliefs are mostly about macroscopic objects (trees, chairs, etc) rather than atoms. What does it mean for a map to be “accurate” at an abstract level, and what properties should my map-making process have in order to produce accurate abstracted maps/beliefs?

Representation learning might be worth looking into. The quality of a representation is typically measured using its reconstruction error, I think. However, there is some complexity here, I'd argue, because in real-world applications, some aspects of the reconstruction usually matter much more than others: I care about reconstructing the navigability of the streets, but not the advertisements on roadside billboards.
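A toy sketch of that point--feature names and weights here are made up--showing how the same "map" scores very differently under two users' weighted reconstruction errors:

```python
def weighted_reconstruction_error(original, reconstruction, weights):
    """Weighted mean squared reconstruction error: the weights encode which
    aspects of the territory this particular map-user cares about."""
    total = sum(w * (o - r) ** 2
                for o, r, w in zip(original, reconstruction, weights))
    return total / sum(weights)

# Features: [street_navigability, billboard_content]
original   = [1.0, 1.0]
street_map = [1.0, 0.0]  # reconstructs streets perfectly, discards billboards

navigator  = [0.9, 0.1]  # cares mostly about streets
advertiser = [0.1, 0.9]  # cares mostly about billboards

print(weighted_reconstruction_error(original, street_map, navigator))   # 0.1
print(weighted_reconstruction_error(original, street_map, advertiser))  # 0.9
```

Each user would rationally select the representation scheme that minimizes *their* weighted error, which is the sense in which there's no user-independent "best" lossy map.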

This actually presents a certain challenge to standard rationalist tropes about the universality of truth, because my friend the advertising executive might care more about minimizing reconstruction error on roadside billboards, and select a representation scheme which is optimized according to that metric. As Stuart Russell puts it in Artificial Intelligence: A Modern Approach:

We should say up front that the enterprise of general ontological engineering has so far had only limited success. None of the top AI applications (as listed in Chapter 1) make use of a shared ontology—they all use special-purpose knowledge engineering. Social/political considerations can make it difficult for competing parties to agree on an ontology. As Tom Gruber (2004) says, “Every ontology is a treaty—a social agreement—among people with some common motive in sharing.” When competing concerns outweigh the motivation for sharing, there can be no common ontology.

(Emphasis mine. As far as I can tell, "ontology" is basically GOFAI talk for "representation".)

Comment by john_maxwell on LessWrong Updates - September 2019 · 2019-08-30T03:34:00.995Z · score: 28 (7 votes) · LW · GW

We have a planned overhaul of LessWrong’s subscription system which will allow you to subscribe to posts, comments, users, and private messages, thereby receiving notifications and/or emails.

Sweet! Not sure how mature development is, but if it's still in an immature state, here are some feature ideas:

  • A view of the number of subscribers to a particular thread, so I can figure out if it's worth my time to leave a comment even if the thread is pretty old.
  • A user configuration option that allows me to subscribe to what I read by default, and opt out if I'm uninterested in following the discussion going forward.

I think there are some benefits to moving toward an equilibrium where people subscribe to discussions by default: there would be less pressure to check the forum frequently if you want to write comments people will actually see.

Comment by john_maxwell on Self-supervised learning & manipulative predictions · 2019-08-28T18:02:24.503Z · score: 2 (1 votes) · LW · GW

Glad you are thinking about this!

How about putting the system in an "interactive environment" in the sense that it sometimes gets new data, but not asking it to predict what new data it will get? (Or, for an even looser constraint, maybe in some cases it makes predictions about new data it will get, but it doesn't factor these predictions into things like the sentence completion task.)

Comment by john_maxwell on Response to Glen Weyl on Technocracy and the Rationalist Community · 2019-08-23T02:09:17.705Z · score: 3 (2 votes) · LW · GW

is not working in Google Chrome for me. I assumed it was because of one of my many Chrome extensions, but maybe it's an issue with the site itself? Works in Firefox/Opera.

Comment by john_maxwell on Forum participation as a research strategy · 2019-08-20T05:57:49.380Z · score: 2 (1 votes) · LW · GW

Thanks for the example!

Comment by john_maxwell on Forum participation as a research strategy · 2019-08-18T21:53:07.964Z · score: 2 (1 votes) · LW · GW

Bayesian updating does not work well when you don't have the full hypothesis space.

Do you have any links related to this? Technically speaking, the right hypothesis is almost never in our hypothesis space ("All models are wrong, but some are useful"). But even if there's no "useful" model in your hypothesis space, it seems Bayesian updating fails gracefully if you have a reasonably wide prior distribution for your noise parameters as well (then the model fitting process will conclude that the value of your noise parameter must be high).
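A toy sketch of what I mean by failing gracefully (the hypothesis space and all the numbers here are invented for illustration): every model in the space is misspecified, yet the posterior shifts toward a high noise parameter rather than toward confident wrong predictions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothesis space: data ~ Normal(0, sigma). The true mean is 5, so no
# hypothesis in the space is "useful" -- every model is misspecified.
data = rng.normal(5.0, 1.0, size=200)

sigmas = np.array([0.5, 1.0, 2.0, 4.0, 8.0])
prior = np.ones_like(sigmas) / len(sigmas)  # wide prior over noise levels

def log_likelihood(sigma):
    # Gaussian log-likelihood of the data under mean 0, up to a constant.
    return np.sum(-0.5 * (data / sigma) ** 2 - np.log(sigma))

log_post = np.log(prior) + np.array([log_likelihood(s) for s in sigmas])
post = np.exp(log_post - log_post.max())
post /= post.sum()

# The posterior concentrates on a large noise parameter: the model
# "admits" high noise instead of making confident wrong predictions.
print(dict(zip(sigmas, post.round(3))))
```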

Comment by john_maxwell on Coherence arguments do not imply goal-directed behavior · 2019-08-17T09:40:15.285Z · score: 2 (1 votes) · LW · GW

The argument Rohin is responding to also rests on leaky abstractions, I would argue.

At the end of the day sometimes the best approach, if there aren't any good abstractions in a particular domain, is to set aside your abstractions and look directly at the object level.

If there is a fairly simple, robust FAI design out there, and we rule out the swath of design space it resides in based on an incorrect inference from a leaky abstraction, that would be a bad outcome.

Comment by john_maxwell on Coherence arguments do not imply goal-directed behavior · 2019-08-17T09:35:56.318Z · score: 2 (1 votes) · LW · GW

Diamond maximization seems pretty different from winning at chess. In the chess case, we've essentially hardcoded a particular ontology related to a particular imaginary universe, the chess universe. This isn't a feasible approach for the diamond problem.

In any case, the reason this discussion is relevant, from my perspective, is because it's related to the question of whether you could have a system which constructs its own superintelligent understanding of the world (e.g. using self-supervised learning), and engages in self-improvement (using some process analogous to e.g. neural architecture search) without being goal-directed. If so, you could presumably pinpoint human values/corrigibility/etc. in the model of the world that was created (using labeled data, active learning, etc.) and use that as an agent's reward function. (Or just use the self-supervised learning system as a tool to help with FAI research/make a pivotal act/etc.)

It feels to me as though the thing I described in the previous paragraph is amenable to the same general kind of ontological whitelisting approach that we use for chess AIs. (To put it another way, I suspect most insights about meta-learning can be encoded without referring to a lot of object level content about the particular universe you find yourself building a model of.) I do think there are some safety issues with the approach I described, but they seem possible to overcome.

Comment by john_maxwell on Coherence arguments do not imply goal-directed behavior · 2019-08-17T09:11:31.625Z · score: 2 (1 votes) · LW · GW

we won't see examples like this if the algorithms that produce this kind of behavior take longer to produce the behavior than the amount of time we've let them run.

Are you suggesting that Deep Blue would behave in this way if we gave it enough time to run? If so, can you explain the mechanism by which this would occur?

I think Stuart Armstrong and Tom Everitt are the main people who've done work in this area, and their work on this stuff seems quite underappreciated.

Can you share links?

Comment by john_maxwell on Distance Functions are Hard · 2019-08-15T01:56:24.294Z · score: 4 (2 votes) · LW · GW

Here we apparently need a transparent, formally specified distance function if we have any hope of absolutely proving the absence of adversarial examples.

Well, a classifier that is 100% accurate would also do the job ;) (I'm not sure a 100% accurate classifier is feasible per se, but a classifier which can be made arbitrarily accurate given enough data/compute/life-long learning experience seems potentially feasible.)

Also, small perturbations aren't necessarily the only way to construct adversarial examples. Suppose I want to attack a model M1, which I have access to, and I also have a more accurate model M2. Then I could execute an automated search for cases where M1 and M2 disagree. (Maybe I use gradient descent on the input space, maximizing an objective function corresponding to the level of disagreement between M1 and M2.) Then I hire people on Mechanical Turk to look through the disagreements and flag the ones where M1 is wrong. (Since M2 is more accurate, M1 will "usually" be wrong.)

This is actually one way to look at what's going on with traditional small-perturbation adversarial examples. M1 is a deep learning model and M2 is a 1-nearest-neighbor model--not very good in general, but quite accurate in the immediate region of data points with known labels. The problem is that deep learning models don't have a very strong inductive bias towards mapping nearby inputs to nearby outputs (sometimes called "Lipschitzness"). L2 regularization actually makes deep learning models more Lipschitz: smaller coefficients mean smaller singular values for the weight matrices, which means less capacity to stretch nearby inputs away from each other in output space. I think that may be part of why L2 regularization works.

Hoping to expand the previous two paragraphs into a paper with Matthew Barnett before too long--if anyone wants to help us get it published, please send me a PM (neither of us has ever published a paper before).
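To make the Lipschitzness point concrete, here is a small sketch (the two-layer architecture and sizes are arbitrary): for a ReLU network, the product of the weight matrices' largest singular values upper-bounds the Lipschitz constant, and scaling the weights down scales the bound down.

```python
import numpy as np

rng = np.random.default_rng(0)

def lipschitz_upper_bound(weights):
    """Product of the largest singular values (spectral norms) of the
    weight matrices; this upper-bounds the Lipschitz constant of a ReLU
    network built from them, since ReLU itself is 1-Lipschitz."""
    return float(np.prod([np.linalg.norm(W, ord=2) for W in weights]))

# Arbitrary two-layer architecture, just for illustration.
W1 = rng.normal(size=(64, 32))
W2 = rng.normal(size=(32, 10))

bound = lipschitz_upper_bound([W1, W2])
# Shrinking the coefficients (as L2 regularization encourages) shrinks the
# bound: halving every weight quarters this two-layer bound.
shrunk_bound = lipschitz_upper_bound([0.5 * W1, 0.5 * W2])

print(bound, shrunk_bound)
```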

Comment by john_maxwell on Self-Supervised Learning and AGI Safety · 2019-08-14T02:58:25.935Z · score: 2 (1 votes) · LW · GW

Can you be more specific about the daemons you're thinking about? I had tried to argue that daemons wouldn't occur under certain circumstances, or at least wouldn't cause malign failures...

Which part are you referring to?

Anyway, I'm worried that a daemon will arise while searching for models which do a good job of predicting masked bits. As it was put here: "...we note that trying to predict the output of consequentialist reasoners can reduce to an optimisation problem over a space of things that contains consequentialist reasoners." Initially daemons seemed implausible to me, but then I thought of a few ways they could happen--hoping to write posts about this before too long. I encourage others to brainstorm as well, so we can try & think of all the plausible ways daemons could get created.

I started my own list of pathological things that might happen with self-supervised learning systems, maybe I'll show you when it's ready and we can compare notes...?


Comment by john_maxwell on Distance Functions are Hard · 2019-08-13T22:10:39.369Z · score: 15 (10 votes) · LW · GW

Learning a distance function between pictures of human faces has been used successfully to train deep learning based face recognition systems.

My takeaway from your examples is not that "distance functions are hard" so much as "hardcoding is brittle". The general approach of "define a distance function and train a model based on it" has been pretty successful in machine learning.

Comment by john_maxwell on Open & Welcome Thread - August 2019 · 2019-08-13T05:26:45.619Z · score: 8 (6 votes) · LW · GW

When people talk about the efficient market hypothesis, the usual takeaway is that you should invest in index funds. The idea with the efficient market hypothesis is that stocks are already close to being priced perfectly, so it doesn't make sense to pay someone to pick stocks for you.

However, there's another possible takeaway that I haven't ever heard: Because stocks are already close to being priced perfectly, it doesn't matter which stocks you buy. Even if you're relatively uninformed about the markets, you might as well play the market and buy stocks at random, or based on vague intuitions, or whatever. Because stocks are close to being priced perfectly, on average you will do about as well as the market does.

Some arguments in favor of index funds:

  • By buying lots of stocks, your portfolio is less vulnerable to fluctuations in the price of particular stocks.

  • Save money on trading fees?

  • Save time and attention.

Some arguments in favor of buying individual stocks:

  • Buying individual stocks could act as rationality training, getting you in the habit of betting your beliefs and testing your predictions against reality.

  • It's entertaining.

  • If you have private information or insights that the market hasn't priced in (i.e. the efficient market hypothesis isn't completely true), you could beat an index fund. (Note that the efficient market hypothesis is a self-refuting prophecy.)

  • Due to the "no one ever got fired for buying IBM" principle, managing other peoples' money arguably incentivizes herd behavior over the long term. That suggests that markets could provide financial rewards for patient contrarians investing their own money, if most money being invested is other peoples' money.

Thoughts? Note: I don't think I would want individual stocks to comprise more than 10% of my portfolio.

Comment by john_maxwell on AI Alignment Open Thread August 2019 · 2019-08-11T04:12:24.629Z · score: 2 (1 votes) · LW · GW

The idea is that if the conference is run by people who are interested in safety, they can preferentially accept papers which are good from a differential technological development point of view.

Comment by john_maxwell on Verification and Transparency · 2019-08-10T06:03:56.190Z · score: 2 (1 votes) · LW · GW

Interesting insight!

I suppose an intermediate approach would be noticing that a system comes close to satisfying some compact formal property, generating an input which violates this property, and running it by the user.

Comment by john_maxwell on Self-Supervised Learning and AGI Safety · 2019-08-10T05:54:17.379Z · score: 3 (2 votes) · LW · GW

Agreed this is a neglected topic. My personal view is that self-supervised learning is much more likely to lead to AGI than reinforcement learning. I think this is probably good from a safety point of view, although I've been trying to brainstorm possible risks. Daemons seem like the main thing.

Comment by john_maxwell on AI Alignment Open Thread August 2019 · 2019-08-10T05:51:20.930Z · score: 2 (1 votes) · LW · GW

I saw this thread complaining about the state of peer review in machine learning. Has anyone thought about trying to design a better peer review system, then creating a new ML conference around it and also adding in a safety emphasis?

Comment by john_maxwell on Which of these five AI alignment research projects ideas are no good? · 2019-08-10T04:37:10.391Z · score: 2 (1 votes) · LW · GW

Can you explain this one a bit more? It seems to me that if the human is giving inconsistent answers, in the sense that the human says A > B and B > C and C > A, then the thing to do is to flag this and ask them to resolve the inconsistency instead of trying to find a way to work around it. Interpretability > Magic, I say.
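For what it's worth, the flagging step can be very simple; here is a sketch (the pairwise representation of the human's stated preferences is hypothetical):

```python
from itertools import permutations

# The human says A > B, B > C, and C > A -- an intransitive triple.
prefs = {("A", "B"), ("B", "C"), ("C", "A")}

def find_cycles(prefs):
    """Return length-3 preference cycles (one canonical rotation each),
    so they can be shown to the human for resolution."""
    items = {x for pair in prefs for x in pair}
    return [(a, b, c) for a, b, c in permutations(sorted(items), 3)
            if (a, b) in prefs and (b, c) in prefs and (c, a) in prefs
            and a < b and a < c]  # keep only one rotation per cycle

print(find_cycles(prefs))  # [('A', 'B', 'C')] -- ask the human to resolve this
```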

Comment by john_maxwell on Why Gradients Vanish and Explode · 2019-08-10T04:15:19.205Z · score: 3 (2 votes) · LW · GW

I think this is quite strong evidence that I was not taught the correct usage of vanishing gradients.

I'm very confused. The way I'm reading the quote you provided, it says ReLu works better because it doesn't have the gradient vanishing effect that sigmoid and tanh have.

Comment by john_maxwell on Project Proposal: Considerations for trading off capabilities and safety impacts of AI research · 2019-08-07T03:59:54.790Z · score: 3 (2 votes) · LW · GW

For example, I think the current ML community is very unlikely to ever produce AGI (<10%)

I'd be interested to hear why you think this.

BTW, I talked to one person with experience in GOFAI and got the impression it's essentially a grab bag of problem-specific approaches. Curious what "other parts of AI" you're optimistic about.

Comment by john_maxwell on What does Optimization Mean, Again? (Optimizing and Goodhart Effects - Clarifying Thoughts, Part 2) · 2019-08-06T23:35:30.376Z · score: 6 (3 votes) · LW · GW

This seems related to a comment Rohin made recently. It sounds like you are working from Rohin's "normative claim", not his "empirical claim"? (From an empirical perspective, holding arguments for ¬A to a higher standard than arguments for A is obviously a great way to end up with false beliefs :P)

Anyway, just like Rohin, I'm uncertain re: the normative claim. But even if one believes the normative claim, I think in some cases a concern can be too vague to be useful.

Here's an extreme example to make the point. Biotech research also presents existential risks. Suppose I object to your biotech strategy, on the grounds that you don't have a good argument that your strategy is robust against adversarial examples.

What does it even mean for a biotech strategy to be robust against adversarial examples?

Without further elaboration, my concern re: your biotech strategy is too vague. Trying to come up with a good argument against my concern would be a waste of your time.

Maybe there is a real problem here. But our budget of research hours is limited. If we want to investigate this further, the thing to do is to make the concern less vague, and get more precise about the sense in which your biotech strategy is vulnerable to adversarial examples.

I agree vague concerns should be taken seriously. But I think in some cases, we will ultimately dismiss the concern not because we thought of a strong argument against it, but because multiple people thought creatively about how it might apply and just weren't able to find anything.

You can't prove things about something which hasn't been formalized. And good luck formalizing something without any concrete examples of it! Trying to offer strong arguments against a concern that is still vague seems like putting the cart before the horse.

I don't think FAI work should be overly guided by vague analogies, not because I'm unconcerned about UFAI, but because vague analogies just don't provide much evidence about the world. Especially if there's a paucity of data to inform our analogizing.

It's possible that I'm talking past you a bit in this comment, so to clarify: I don't think instrumental convergence is too vague to be useful. But for some other concerns, such as daemons, I would argue that the most valuable contribution at this point is trying to make the concern more concrete.

Comment by john_maxwell on Forum participation as a research strategy · 2019-08-03T02:55:00.742Z · score: 2 (1 votes) · LW · GW

Interesting. I think you're probably right that our model should have a parameter for "researcher quality", and if a researcher is able to correctly predict the outcome of an experiment, that should cause an update in the direction of that researcher being more knowledgeable (and their prior judgements should therefore carry more weight--including for this particular experiment!)

But the story you're telling doesn't seem entirely compatible with your comment earlier in this thread. Earlier you wrote: "However, it is often the case that you could get a lot more high-quality evidence that basically settles the question, if you put in many hours of work." But in this recent comment you wrote: "the experiment provides the last little bit of evidence needed to confirm [the hypothesis]". In the earlier comment, it sounds like you're talking about a scenario where most of the evidence comes in the form of data; in the later comment, it sounds like you're talking about a scenario where most of the evidence was necessary "just to think of the correct answer - to promote it to your attention" and the experiment only provides "the last little bit" of evidence.

So I think the philosophical puzzle is still unsolved. A few more things to ponder if someone wants to work on solving it:

  • If Bob is known to be an excellent researcher, can we trust HARKing if it comes from him? Does the mechanism by which hindsight bias works matter? (Here is one possible mechanism.)

  • In your simplified model above, there's no possibility of a result that is "just noise" and not explained by any particular hypothesis. But noise appears to be a pretty big problem (see: the replication crisis). In current scientific practice, the probability that a result could have been obtained through noise is a number of great interest that's almost always calculated (the p-value). How should this number be factored in, if at all?

    • Note that p-values can be used in Bayesian calculations. For example, in a simplified universe where either the null is true or the alternative is true, p(alternative|data) = p(data|alternative)p(alternative) / (p(data|alternative)p(alternative) + p(data|null)p(null))

    • My solution was focused on a scenario where we're considering relatively obvious hypotheses and subject to lots of measurement noise, but you convinced me this is inadequate in general.

  • I'm unsatisfied with the discussion around "Alice didn't think of all of them". I know nothing about relativity, but I imagine a big part of Einstein's contribution was his discovery of a relatively simple hypothesis which explained all the data available to him. (By "relatively simple", I mean a hypothesis that didn't have hundreds of free parameters.) Presumably, Einstein had access to the same data as other contemporary physicists, so it feels weird to explain his contribution in terms of having access to more evidence.

    • In other words, it feels like the task of searching hypothesis space should be factored out from the task of Bayesian updating. This seems closely related to puzzles around "realizability"--through your search of hypothesis space, you're essentially "realizing" a particular hypothesis on the fly, which isn't how Bayesian updating is formally supposed to work. (But it is how deep learning works, for example.)
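For concreteness, the simplified two-hypothesis calculation from the second bullet above, with made-up likelihood numbers (the prior and both likelihoods are assumptions for the example):

```python
# Bayesian posterior in the simplified universe where either the null
# or the alternative is true.
p_data_given_alt = 0.30   # e.g. a power-like quantity
p_data_given_null = 0.05  # e.g. the p-value-like quantity
p_alt = 0.5               # prior; an assumption for this example
p_null = 1 - p_alt

p_alt_given_data = (p_data_given_alt * p_alt) / (
    p_data_given_alt * p_alt + p_data_given_null * p_null)

print(round(p_alt_given_data, 3))  # 0.857
```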
Comment by john_maxwell on Forum participation as a research strategy · 2019-08-02T00:50:13.715Z · score: 5 (3 votes) · LW · GW

I actually think there is an interesting philosophical puzzle around this that has not fully been solved...

If I show you the code I'm going to use to run my experiment, can you be confident in guessing which hypothesis I aim to test?

  • If yes, then HARKing should be easily detectable. By looking at my code, it should be clear that the hypothesis I am actually testing is not the one that I published.

  • If no, then the resulting data could be used to prove multiple different hypotheses, and thus doesn't necessarily constitute stronger evidence for any one of the particular hypotheses it could be used to prove (e.g. the hypothesis I preregistered).

To put it another way, in your first paragraph you say "there are likely many hypotheses that explain the data", but in the second paragraph, you talk as though there's a particular set of data such that if we get that data, we know there's only one hypothesis which it can be used to support! What gives?

My solution to the puzzle: Pre-registration works because it forces researchers to be honest about their prior knowledge. Basically, prior knowledge unencumbered by hindsight bias ("armchair reasoning") is underrated. Any hypothesis which has only the support of armchair reasoning or data from a single experiment is suspect. You really want both.

In Bayesian terms, you have to look at both the prior and the likelihood. Order shouldn't matter (multiplication is commutative), but as I said--hindsight bias.

Curious to hear your thoughts.

[There are also cases where given the data, there's only one plausible hypothesis which could possibly explain it. A well-designed experiment will hopefully produce data like this, but I think it's a bit orthogonal to the HARKing issue, because we can imagine scenarios where post hoc data analysis suggests there is only one plausible hypothesis for what's going on... although we should still be suspicious in that case because (presumably) we didn't have prior beliefs indicating this hypothesis was likely to be true. Note that in both cases we are bottlenecked on the creativity of the experiment designer/data analyst in thinking up alternative hypotheses.]

[BTW, I think "armchair reasoning" might have the same referent as phrases with a more positive connotation: "deconfusion work" or "research distillation".]

Comment by john_maxwell on Forum participation as a research strategy · 2019-07-31T23:29:07.205Z · score: 2 (1 votes) · LW · GW

Hm, you think data soundly beats theory in ML? Why is HARKing a problem then?

Comment by john_maxwell on Forum participation as a research strategy · 2019-07-31T05:54:42.796Z · score: 18 (5 votes) · LW · GW

Maybe I remember the conversation differently than Romeo, because I remember being on the pro side.

Participation inequality is a thing. Here is one estimate for LW. Here is a thread from 2010 where Kevin asked people to delurk which has over 600 comments in it. Anecdotally, I'm no longer surprised by experiences like:

  • Friend I didn't know very well recently came to visit. Says my ideas about AI are interesting. Friend is just learning to program. I didn't even know they read LW, much less that they knew anything about my AI ideas.

  • Write a comment on the EA Forum. Go to an event that evening. I talk to someone and they're like "oh I saw your comment on the EA Forum and it was good".

  • I visited Europe and met an EA from the Czech Republic. He says: "I've probably read more words by you than by William MacAskill because you post to the EA Forum so much."

My impression is this is a contrast to academia, where virtually no one reads most academic publications. I suspect acquiring an online following allows for greater total influence than ascending the academic ladder, though it's probably a different kind of influence (people are less likely to cite your work years later? In the spirit of that, here is a LW thread from years ago on papers vs forums.)


  • With great power comes great responsibility. If you are speaking to a big audience, spreading bad info is harmful. And contra XKCD, debunking bad info is valuable. (Note that almost everything started going to shit online within 1 decade of this comic becoming famous.)

  • I agree that the "anchor of scholarship" is valuable. I view FP as a guilty pleasure/structured procrastination. I don't necessarily endorse it as the highest value activity, but if I'm going to be goofing off anyway it's a relatively useful way to goof off, and sometimes it feels like my contributions are pretty valuable/I get valuable ideas from others. (I suspect the best forum writing is not only academically rigorous and innovative, but also entertaining to read, with clickbaity titles and interesting anecdotes and such, so you can capture the cognitive surplus of forum users as effectively as possible. Additionally, the best questions to ask forums are probably those that benefit from crowdsourcing/multiple perspectives/creativity/etc.)

Seems like the ideal is a balance between FP and scholarship. If you're all FP, you're likely just a clueless person spreading cluelessness. If you're all scholarship, others don't get to share in your wisdom much.

Comment by john_maxwell on Forum participation as a research strategy · 2019-07-31T05:28:45.289Z · score: 2 (1 votes) · LW · GW

In many research areas, ideas are common, and it isn't clear which ideas are most important. The most useful contributions come from someone taking an idea and demonstrating that it is viable and important, which often requires a lot of solitary work that can't be done in the typical amount of time it takes to write a comment or post.

Interesting. Definitely not an expert here, but I could imagine FP being a good tool in this case... if the forum is an efficient "marketplace of ideas", where perspectives compete and poke holes in each other and adapt to critics, and the strongest perspectives emerge victorious, then this seems like it could be a good way to figure out which ideas are the best? Some say AI alignment is like software security, and there's that saying "given enough eyeballs, all bugs are shallow". If security flaws tend to be a result of software designers relying on faulty abstractions or otherwise falling prey to blind spots, then I would expect that withstanding a bunch of critics, each critic using their own set of abstractions, is a stronger indicator of quality than anything one person is able to do in solitude.

(It's possible that you're using "important" in a way that's different than how I used it in the preceding paragraph.)

Comment by john_maxwell on What is our evidence that Bayesian Rationality makes people's lives significantly better? · 2019-07-30T08:19:22.873Z · score: 3 (6 votes) · LW · GW

Fair enough.

CFAR has some data about participants in their workshops: BTW, I think the inventor of Cohen's d said 0.2 is a "small" effect size.

I think some LW surveys have collected data on the amount people have read LW and checked to see if that was predictive of e.g. being well-calibrated on things (IIRC it wasn't.) You could search for "survey [year]" on LW to find that data, and you could analyze it yourself if you want. Of course, it's hard to infer causality.

I think LW is one of the best online communities. But if reading a great online community is like reading a great book, even the best books are unlikely to produce consistent measurable changes in the life outcomes of most readers, I would guess.

Supposedly education research has shown that transfer learning isn't really a thing, which could imply, for example, that reading about Bayesianism won't make you better calibrated. Specifically practicing the skill of calibration could make you better calibrated, but we don't spend a lot of time doing that.

I think Bryan Caplan discusses transfer learning in his book The Case Against Education, which also talks about the uselessness of education in general. LW could be better for your human capital than a university degree and still be pretty useless.

The usefulness of reading LW has long been a debate topic on LW. Here are some related posts:

You can also do keyword searches for replies people have made, e.g.

Comment by john_maxwell on What does Optimization Mean, Again? (Optimizing and Goodhart Effects - Clarifying Thoughts, Part 2) · 2019-07-30T07:44:17.478Z · score: 10 (2 votes) · LW · GW

Lots of search time alone does NOT indicate extremal results - it indicates lots of things about your domain, and perhaps the inefficiency of your search, but not overoptimization.

Thoughts on early stopping? (Maybe it works because if you keep optimizing long enough, you're liable to end up on a tall, narrow peak which generalizes poorly? Though the phenomenon doesn't seem limited to gradient descent.)
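For reference, a minimal version of the early stopping recipe I have in mind (toy overparameterized regression; the data, learning rate, and patience threshold are all arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: 30 features but only 25 training points, so plain gradient
# descent on training MSE can eventually overfit.
n, d = 40, 30
X = rng.normal(size=(n, d))
y = 2 * X[:, 0] + 0.5 * rng.normal(size=n)
X_tr, y_tr, X_val, y_val = X[:25], y[:25], X[25:], y[25:]

w = np.zeros(d)
best_val, best_w, patience = np.inf, w.copy(), 0
for step in range(5000):
    grad = 2 * X_tr.T @ (X_tr @ w - y_tr) / len(y_tr)
    w = w - 0.01 * grad
    val = np.mean((X_val @ w - y_val) ** 2)
    if val < best_val:
        best_val, best_w, patience = val, w.copy(), 0
    else:
        patience += 1
        if patience >= 50:  # validation error stopped improving: stop early
            break

final_val = np.mean((X_val @ w - y_val) ** 2)
print(best_val, final_val)  # the saved early-stopped weights do at least as well
```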

BTW, I suspect there is only so much you can do with abstractions like these. At the end of the day, any particular concrete technique may not exhibit the flaws you predicted based on the abstract category you placed it in, and it may exhibit flaws which you wouldn't have predicted based on its abstract category. Maybe abstract categories are best seen as brainstorming tools for finding flaws in techniques.

Comment by john_maxwell on Is this info on zinc lozenges accurate? · 2019-07-29T18:49:02.673Z · score: 9 (3 votes) · LW · GW

My model is that viruses grow exponentially in your body, so it's best to nip them in the bud as soon as possible. I've tried to become really good at noticing symptoms of a cold very early on, including subtle body sensations which aren't listed as standard cold symptoms. Then I aggressively respond by: taking it easy, drinking lots of gatorade/water (especially in the middle of the night if I wake up briefly while sleeping), gargling warm salt water, sucking zinc lozenges, etc.

Before I did this, I would get sick and it would take me forever to recover. Since I started doing this, it feels like I'm usually able to nip it in the bud if I'm sufficiently proactive and keep doing this stuff for a bit even when it feels like the cold is probably gone.

It's hard to separate the effects of different things, but intuitively, vitamin C/ordinary supplemental zinc don't feel as helpful as theanine and ashwagandha, and gargling warm salt water seems to help right away. The zinc lozenges feel more helpful than any other supplement, but I wouldn't say they are the most important tool in my arsenal either.

Comment by john_maxwell on What is our evidence that Bayesian Rationality makes people's lives significantly better? · 2019-07-29T05:28:13.719Z · score: 14 (6 votes) · LW · GW

Here's some evidence that "Bayesian Rationality" doesn't work: The fact that you have written the bottom line first with this question, instead of asking a question like "What evidence do we have about impacts of being part of the rationality community?" or some similar question that doesn't get you filtered info :)