Posts

How to evaluate (50%) predictions 2020-04-10T17:12:02.867Z · score: 117 (56 votes)
UML final 2020-03-08T20:43:58.897Z · score: 23 (5 votes)
UML XIII: Online Learning and Clustering 2020-03-01T18:32:03.584Z · score: 7 (2 votes)
What to make of Aubrey de Grey's prediction? 2020-02-28T19:25:18.027Z · score: 24 (9 votes)
UML XII: Dimensionality Reduction 2020-02-23T19:44:23.956Z · score: 9 (3 votes)
UML XI: Nearest Neighbor Schemes 2020-02-16T20:30:14.112Z · score: 15 (4 votes)
A Simple Introduction to Neural Networks 2020-02-09T22:02:38.940Z · score: 25 (9 votes)
UML IX: Kernels and Boosting 2020-02-02T21:51:25.114Z · score: 13 (3 votes)
UML VIII: Linear Predictors (2) 2020-01-26T20:09:28.305Z · score: 9 (3 votes)
UML VII: Meta-Learning 2020-01-19T18:23:09.689Z · score: 15 (4 votes)
UML VI: Stochastic Gradient Descent 2020-01-12T21:59:25.606Z · score: 13 (3 votes)
UML V: Convex Learning Problems 2020-01-05T19:47:44.265Z · score: 13 (3 votes)
Excitement vs childishness 2020-01-03T13:47:44.964Z · score: 18 (8 votes)
Understanding Machine Learning (III) 2019-12-25T18:55:55.715Z · score: 17 (5 votes)
Understanding Machine Learning (II) 2019-12-22T18:28:07.158Z · score: 25 (7 votes)
Understanding Machine Learning (I) 2019-12-20T18:22:53.505Z · score: 47 (9 votes)
Insights from the randomness/ignorance model are genuine 2019-11-13T16:18:55.544Z · score: 7 (2 votes)
The randomness/ignorance model solves many anthropic problems 2019-11-11T17:02:33.496Z · score: 10 (7 votes)
Reference Classes for Randomness 2019-11-09T14:41:04.157Z · score: 8 (4 votes)
Randomness vs. Ignorance 2019-11-07T18:51:55.706Z · score: 5 (3 votes)
We tend to forget complicated things 2019-10-20T20:05:28.325Z · score: 51 (19 votes)
Insights from Linear Algebra Done Right 2019-07-13T18:24:50.753Z · score: 53 (23 votes)
Insights from Munkres' Topology 2019-03-17T16:52:46.256Z · score: 40 (12 votes)
Signaling-based observations of (other) students 2018-05-27T18:12:07.066Z · score: 12 (4 votes)
A possible solution to the Fermi Paradox 2018-05-05T14:56:03.143Z · score: 10 (3 votes)
The master skill of matching map and territory 2018-03-27T12:06:53.377Z · score: 36 (11 votes)
Intuition should be applied at the lowest possible level 2018-02-27T22:58:42.000Z · score: 29 (10 votes)
Consider Reconsidering Pascal's Mugging 2018-01-03T00:03:32.358Z · score: 14 (4 votes)

Comments

Comment by sil-ver on What is a “Good” Prediction? · 2020-05-04T11:10:29.834Z · score: 2 (1 votes) · LW · GW

I agree with your final paragraph – I'm fine with assuming there is a true probability. That said, I think there's an important difference between how accurate a prediction was, which can be straightforwardly defined as its similarity to the true probability, and how good a job the predictor did.

If we're just talking about the former, then I don't disagree with anything you've said, except that I would question calling it an "epistemically good" prediction – "epistemically good" sounds to me like it refers to performance. Either way, mere accuracy seems like the less interesting of the two.

If we're talking about the latter, then using the true probability as a comparison is problematic even in principle because it might not correspond to any intuitive notion of a good prediction. I see two separate problems:

  • There could be hidden variables. Suppose there is an election between candidate A and candidate B. Unbeknownst to everyone, candidate A has a brain tumor that will dramatically manifest itself three days before election day. Given this, the true probability that A wins is very low. But that can't mean people who assign low probabilities to A winning all did a good job – by assumption, their prediction was unrelated to the reason the probability was low.
  • Even if there are no hidden variables, it might be that accuracy doesn't monotonically increase with improved competence. Say there's another election (no brain tumor involved). We can imagine that all of the following is true:
    • Naive people will assign about 50/50 odds
    • Smart people will recognize that candidate A will have better debate performance and will assign 60/40 odds
    • Very smart people will recognize that B's poor debate performance will actually help them because it makes them relatable, so they will assign 30/70 odds
    • Extremely smart people will recognize that the economy is likely to crash before election day which will hurt B's chances more than everything else and will assign 80/20 odds. This is similar to the true probability.

In this case, going from smart to very smart actually makes your prediction worse, even though you picked up on a real phenomenon.

I personally think it might be possible to define the quality of a single prediction in a way that includes the true probability, but I don't think it's straightforward.

Comment by sil-ver on Meditation: the screen-and-watcher model of the human mind, and how to use it · 2020-05-02T21:21:49.953Z · score: 3 (2 votes) · LW · GW

I have never used Headspace, but I can say that I found it highly valuable to repeat the introductory course on Waking Up, which does fit your assessment that it moves too fast to learn the concepts the first time.

Comment by sil-ver on How to evaluate (50%) predictions · 2020-04-26T17:16:12.087Z · score: 15 (5 votes) · LW · GW

Also, I apologize for the statement that I "understand you perfectly" a few posts back. It was stupid and I've edited it out.

Comment by sil-ver on How to evaluate (50%) predictions · 2020-04-26T16:14:40.487Z · score: 2 (1 votes) · LW · GW
Ok this confirms you haven't understood what I'm claiming.

I'm arguing against this claim:

I don't think there is any difference in those lists!

I'm saying that it is harder to make a list where all predictions seem obviously false and have half of them come true than it is to make a list where half of all predictions seem obviously false and half seem obviously true and have half of them come true. That's the only thing I'm claiming is true. I know you've said other things and I haven't addressed them; that's because I wanted to get consensus on this thing before talking about anything else.

Comment by sil-ver on How to evaluate (50%) predictions · 2020-04-26T15:32:12.593Z · score: 2 (1 votes) · LW · GW

A list of predictions that all seem extremely unlikely to come true according to common wisdom.

Comment by sil-ver on Post/meta-skepticism I · 2020-04-26T08:03:04.000Z · score: 3 (2 votes) · LW · GW

I agree that for the examples you're naming (e.g., demanding strong evidence/resisting social pressure), there is a failure mode that looks like you're going too far (e.g., being excessively dogmatic/being contrarian).

However, I don't think that this failure mode actually results from identifying the underlying principle and then taking it to the extreme, and I think that's an important point to clarify. For example, in the first case, the principle I see is something like "demand strong evidence for strongly held beliefs" or even more generally "believe things only as strongly as evidence suggests." I don't think it's obvious that this principle can be taken too far. In particular, I think the following

A famous spoof article jokes that we don't know parachutes are reliable because we don't have a randomised controlled trial.

is not an example of doing that. Rather, the mistake here is something like, "equating rationality with academic science." We don't have a formally conducted study on the effectiveness of parachutes, and if you think that's the only evidence that counts, you might mistrust parachutes. But, as a matter of fact, we have excellent evidence to believe that parachutes work, and believing this evidence is perfectly rational. So you cannot arrive at a mistrust of parachutes by having high standards for evidence, you can only arrive at it by being wrong about what kind of evidence does and doesn't count.

Again, I only mean this as a clarification, not as a counterpoint. It is still absolutely possible to go wrong in the ways you describe, and avoiding that is important.

Comment by sil-ver on How to evaluate (50%) predictions · 2020-04-25T18:26:51.420Z · score: 2 (1 votes) · LW · GW

Well, now you've changed what you're arguing for. You initially said that it doesn't matter which way predictions are stated, and then you said that both lists are the same.

Comment by sil-ver on How to evaluate (50%) predictions · 2020-04-25T16:23:26.927Z · score: 0 (2 votes) · LW · GW

(Edit: deleted a line based on tone. Apologies.)

Everything except your last two paragraphs argues that a single 50% prediction can be flipped, which I agree with. (Again: for every set of n predictions, there are 2^n ways to phrase them and precisely 2 of them are maximally bold. If you have a single prediction, then 2^n = 2: there are only two ways, both are maximally bold and thus equally bold.)

When it comes to a list of 50% predictions, it's impossible to evaluate the impressiveness only by looking at how many came true, since it's arbitrary which way they are phrased

I have proposed a rule that dictates how they are phrased. If this rule is followed, it is not arbitrary how they are phrased. That's the point.

Again, please consider the following list:

  • The price of a barrel of oil at the end of 2020 will be between $50.95 and $51.02 (50%)
  • Tesla's stock price at the end of the year 2020 is between 512$ and 514$ (50%)
  • ...

You have said that there is no difference between the two lists. But this is obviously untrue. I hereby offer you 2000$ if you provide me with a list of this kind and you manage to have, say, at least 10 predictions where between 40% and 60% come true. Would you offer me 2000$ if I presented you with a list of this kind:

  • The price of a barrel of oil at the end of 2020 will be between $50.95 and $51.02 (50%)
  • Tesla's stock price at the end of the year 2020 is below 512$ or above 514$ (50%)

and between 40% and 60% come true? If so, I will PM you one immediately.

I think you're stuck on the fact that a 50% prediction also predicts the negated statement with 50%; because of that, you assume the entire post must be false, and so you're not trying to understand the point the post is making. Right now, you're arguing for something that is obviously untrue. Everyone can make a list of the second kind; no-one can make a list of the first kind. Again, I'm so certain about this that I promise you 2000$ if you prove me wrong.

Comment by sil-ver on How to evaluate (50%) predictions · 2020-04-25T07:55:40.990Z · score: 2 (1 votes) · LW · GW
As has been noted, the impressiveness of the predictions has nothing to do with which way round they are stated; predicting P at 50% is exactly as impressive as predicting ¬P at 50% because they are literally the same.

If that were true, then the list

  • The price of a barrel of oil at the end of 2020 will be between $50.95 and $51.02 (50%)
  • Tesla's stock price at the end of the year 2020 is between 512$ and 514$ (50%)
  • ⋯ (more extremely narrow 50% predictions)

and the list

  • The price of a barrel of oil at the end of 2020 will be between $50.95 and $51.02 (50%)
  • Tesla's stock price at the end of the year 2020 is below 512$ or above 514$ (50%)
  • ⋯ (more extremely narrow 50% predictions where every other one is flipped)

would be equally impressive if half of them came true. Unless you think that's the case, it immediately follows that the way predictions are stated matters for impressiveness.

It doesn't matter in the case of a single 50% prediction, because in that case, one of the phrasings follows the rule I propose, and the other follows the inverse of the rule, which is the other way to maximize boldness. As soon as you have two 50% predictions, there are four possible phrasings and only two of them maximize boldness. (And with n predictions, there are 2^n possible phrasings and only 2 of them maximize boldness.)
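A small self-contained sketch of that counting argument (my own illustration, not from the thread; the confidence/baseline numbers are made up, and I'm reading "maximally bold" as "every prediction in the list is phrased on the same side of its baseline"):

    from itertools import product

    # Hypothetical predictions in one fixed phrasing, as (confidence, baseline) pairs.
    predictions = [(0.5, 0.9), (0.5, 0.2), (0.5, 0.7)]

    def flip(c, b):
        # Phrasing the negated statement inverts both confidence and baseline.
        return 1 - c, 1 - b

    maximally_bold = 0
    for flips in product([False, True], repeat=len(predictions)):
        phrased = [flip(c, b) if f else (c, b) for (c, b), f in zip(predictions, flips)]
        # "Maximally bold": every prediction sits on the same side of its baseline
        # (all above it, as the proposed rule demands, or all below it, the inverse rule).
        if all(c > b for c, b in phrased) or all(c < b for c, b in phrased):
            maximally_bold += 1

    print(f"{2 ** len(predictions)} phrasings, {maximally_bold} of them maximally bold")
    # -> 8 phrasings, 2 of them maximally bold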

The person you're referring to left an addendum in a second comment (as a reply to the first) acknowledging that phrasing matters for evaluation.

Comment by sil-ver on My experience with the "rationalist uncanny valley" · 2020-04-24T09:15:35.145Z · score: 4 (2 votes) · LW · GW
I'm very competitive and my self-worth is mostly derived from social comparison, a trait which at worst can cause me to value winning over maintaining relationships, or cause me to avoid people who have higher status than me to avoid upward comparison. In reading LW and rationalist blogs, I think I've turned away from useful material that takes longer for me to grasp because it makes me feel inferior. I sometimes binge on low-quality material, sometimes even seeking out highly downvoted posts; I suspect I do this because it allows me to mentally jeer at people or ideas I know are incorrect.

I want to share that I have done this as well. In my case, I would be slightly more charitable and claim that the motivation was not to jeer at people who say incorrect things but to derive a feeling that I myself am doing okay. LessWrong has very high standards and there are a lot of impressive people here, which can make it terrifying for those of us who have the deeply rooted instinct to compare ourselves to whatever people we see around us. So if I see something downvoted, it gives me reassurance that I at least must be above some vaguely defined bar.

Comment by sil-ver on How to evaluate (50%) predictions · 2020-04-11T19:56:57.115Z · score: 2 (1 votes) · LW · GW

Fixed. And thanks!

Comment by sil-ver on How to evaluate (50%) predictions · 2020-04-11T09:07:54.033Z · score: 7 (6 votes) · LW · GW

I might have been unclear, but I didn't mean to conflate them. The post is meant to be just about impressiveness. I've stated at the end that impressiveness is boldness × accuracy (which I probably should have called calibration). It's possible to have perfect accuracy and zero boldness by making predictions about random number generators.

I disagree that 50% predictions can't tell you anything about calibration. Suppose I give you 200 statements with baseline probabilities, and you have to turn them into predictions by assigning them your own probabilities while following the rule. Once everything can be evaluated, the results on your 50% group will tell me something about how well calibrated you are.
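A rough sketch of what that evaluation could look like (my own illustration with made-up outcomes, not code from the post): group the evaluated predictions by stated confidence and compare the observed frequency in each group to that confidence.

    from collections import defaultdict

    # Hypothetical evaluated predictions: (stated confidence, whether the statement came true).
    # All are assumed to be phrased following the rule (confidence above the baseline).
    results = [(0.5, True), (0.5, False), (0.5, True), (0.5, False),
               (0.7, True), (0.7, True), (0.7, False), (0.9, True)]

    buckets = defaultdict(list)
    for confidence, came_true in results:
        buckets[confidence].append(came_true)

    for confidence in sorted(buckets):
        outcomes = buckets[confidence]
        frequency = sum(outcomes) / len(outcomes)
        print(f"{confidence:.0%} group: {len(outcomes)} predictions, {frequency:.0%} came true")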

(Edit: I've changed the post to say impressiveness = calibration × boldness)

Comment by sil-ver on How to evaluate (50%) predictions · 2020-04-11T08:57:15.716Z · score: 4 (3 votes) · LW · GW
"Always phrase predictions such that the confidence is above the baseline probability" - This really seems like it should not matter. I don't have a cohesive argument against it at this stage, but reversing should fundamentally be the same prediction.

So I've thought about this a bit more. It doesn't matter how someone states their probabilities. However, in order to use your evaluation technique we just need to transform the probabilities so that all of them are above the baseline.

Yes, I think that's exactly right. Statements are symmetric: 50% that X happens is the same claim as 50% that ¬X happens. But evaluation is not symmetric. So you can consider each prediction as making two logically equivalent claims (X happens with probability p and ¬X happens with probability 1 − p) plus stating which one of the two you want to be evaluated on. This matters because the two claims will miss the "correct" probability in different directions. If 50% confidence is too high for X (Tesla stock price is in the narrow range), then 50% is too low for ¬X (Tesla stock price is outside the narrow range).
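A tiny sketch of the transformation being described (my own illustration; the baseline numbers are made up): re-phrase any prediction whose confidence falls below the baseline as the negated statement, so every prediction ends up above its baseline.

    def normalize(statement, confidence, baseline):
        # Keep the phrasing whose confidence lies above the baseline;
        # otherwise state the negation, which inverts confidence and baseline.
        if confidence >= baseline:
            return statement, confidence, baseline
        return f"NOT({statement})", 1 - confidence, 1 - baseline

    # Both phrasings of the same claim end up with the same confidence/baseline pair:
    print(normalize("Tesla ends 2020 between $512 and $514", 0.5, 0.02))
    print(normalize("Tesla ends 2020 below $512 or above $514", 0.5, 0.98))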

(Plus, in any case it's not clear that we can always agree on a baseline probability)

I think that's the reason why calibration is inherently impressive to some extent. If it were actually boldness multiplied by calibration, then you should not be impressed at all whenever the baseline pile and the confidence pile have identical height (i.e., whenever boldness is zero). And I think that's correct in theory; if I just make predictions about dice all day, you shouldn't be impressed at all regardless of the outcome. But since it takes some skill to estimate the baseline for all practical purposes, boldness doesn't go to zero.

Comment by sil-ver on How to evaluate (50%) predictions · 2020-04-11T08:44:08.428Z · score: 4 (3 votes) · LW · GW

Oh, sorry! I've taken the reference to your prediction out and referred only to BetFair as the baseline.

Comment by sil-ver on How to evaluate (50%) predictions · 2020-04-10T18:32:13.269Z · score: 2 (2 votes) · LW · GW

Yes, and in particular, by Scott saying that 50% predictions are "technically meaningless."

Comment by sil-ver on Implications of the Doomsday Argument for x-risk reduction · 2020-04-03T10:06:36.185Z · score: 2 (2 votes) · LW · GW

I confidently reject the Doomsday argument, so it doesn't have any implications.

Comment by sil-ver on How special are human brains among animal brains? · 2020-04-01T18:13:18.766Z · score: 7 (2 votes) · LW · GW

I might be confused here, but it seems to me that it's easy to interpret the arguments in this post as evidence in the wrong direction.

I see the following three questions as relevant:

1. How much sets human brains apart from other brains?

2. How much does the thing that humans have and animals don't matter?

3. How much does better architecture matter for AI?

Questions #2 and #3 seem positively correlated – if the thing that humans have is important, it's evidence that architectural changes matter a lot. However, holding #2 constant, #1 and #3 seem negatively correlated – the less stuff there is that makes humans special, the smaller the improvements to architecture that are required to achieve greater performance.

Since this post is arguing primarily about #1, the way it affects #3 is potentially confusing.

Comment by sil-ver on April Fools: Announcing LessWrong 3.0 – Now in VR! · 2020-04-01T14:31:02.343Z · score: 28 (9 votes) · LW · GW

Strong upvote from me. This new technology has helped me view the existing content from a different angle.

Comment by sil-ver on Open & Welcome Thread - March 2020 · 2020-03-12T17:40:11.068Z · score: 1 (1 votes) · LW · GW

Is there a reason why it wouldn't be strongly correlated?

Your "serious" modifier sounds to me like you're envisioning the consensus among masses to change while smart people are more sober. I was largely assuming that, in the worlds where Aubrey's prediction is true, actual life expectancy does, in fact, increase along with the awareness shift. Note that it's expectancy rather than actual life span.

Pensions might be a good pointer.

Comment by sil-ver on Open & Welcome Thread - March 2020 · 2020-03-12T14:27:14.935Z · score: 1 (1 votes) · LW · GW

Won't there be more indirect consequences? If we suddenly expect people to live longer, even if the technology will take a while to be around, wouldn't that benefit some companies relative to others?

Comment by sil-ver on Open & Welcome Thread - March 2020 · 2020-03-11T13:11:42.781Z · score: 3 (2 votes) · LW · GW

A while ago I asked LW about their take on Aubrey's prediction (forecasting a massive and sudden shift in public awareness of the feasibility of anti-aging technology) and about how, if it is accurate, one could make money out of knowing that.

I got several answers on the first question (summary: people aren't buying it) but none on the second. Is there any way to bet on this? I know very little about public trading, but it feels like such a big event ought to affect the stock market somehow. Am I wrong, or is it too difficult to estimate how?

Comment by sil-ver on Credibility of the CDC on SARS-CoV-2 · 2020-03-08T12:41:51.325Z · score: 5 (3 votes) · LW · GW

Right, so you set the standard higher than simply talking about it. That wasn't clear to me from your previous post but it makes sense.

Comment by sil-ver on Credibility of the CDC on SARS-CoV-2 · 2020-03-08T09:30:27.749Z · score: 3 (7 votes) · LW · GW
If you post it anyway (maybe a top-level post for visibility?), I'll strong-upvote it. I vehemently disagree with you, but even more vehemently than that, I disagree with allowing this class of expense to conceal potentially-useful information, like big critiques.

I think you're ignoring the harms from posting something uncivil. Civility is an extremely important norm. I would not support something that is directly insulting, even if it is an important critique.

However, I did strong-upvote this comment (meaning sirjackholland's comment on this post) and I applaud them both for not publishing their original critique and for expressing their position anyway.

Comment by sil-ver on Credibility of the CDC on SARS-CoV-2 · 2020-03-08T09:24:01.773Z · score: 9 (5 votes) · LW · GW

I might be misunderstanding you, but doesn't Elizabeth explicitly state that this discussion did take place here?

Comment by sil-ver on Winning vs Truth – Infohazard Trade-Offs · 2020-03-08T09:14:08.399Z · score: 4 (3 votes) · LW · GW

I reject the framing of truth vs winning. Instead, I propose that only winning matters. Truth has no terminal value.

It does, however, have enormous instrumental value. For this reason, I support the norm always to tell the truth even if it appears as if the consequences are net negative – with the reasoning that they probably aren't, at least not in expectation. This is so in part because truth feels extremely important to many of us, which means that having such a norm in place is highly beneficial.

The other response is much more interesting, arguing that appeals to consequences are generally bad, and that meta-level considerations mean we should generally speak the truth even if the immediate consequences are bad. I find this really interesting because it is ultimately about infohazards: those rare cases where there is a conflict between epistemic and instrumental rationality. Typically, we believe that having more truth (via epistemic rationality) is a positive trait that allows you to “win” more (thus aligning with instrumental rationality). But when more truth becomes harmful, which do we preference: truth, or winning?

The keyword here is "immediate" [emphasis added], which you drop by the end. I agree with the first part of this paragraph but disagree with the final sentence. Instead, my question would have been, "but when more truth appears to become harmful, how do we balance the immediate consequences against the long term/fuzzy/uncertain but potentially enormous consequences of violating the truth norm?"

I read jimrandomh's comment as reasoning from this framework (rather than arguing that we should assign truth terminal value), but this might be confirmation bias.

Comment by sil-ver on How did the Coronavirus Justified Practical Advice Thread Change Your Behavior, if at All? · 2020-03-06T09:14:52.892Z · score: 5 (3 votes) · LW · GW

Specific to the thread: I bought copper tape and a pulse oximeter, and I'm much more careful with packages.

Partially due to LW but not that thread in particular: I've stocked up on food and am getting rid of the habit of constantly touching my face.

Comment by sil-ver on Coronavirus: Justified Practical Advice Thread · 2020-03-03T13:12:47.670Z · score: 8 (2 votes) · LW · GW

I'm wondering how well viruses stick to/survive on clothing. In trying to avoid touching my face, I've occasionally resorted to using the sleeves of my hoodie instead – which I also use to touch surfaces like door knobs or light switches. Should I use my elbow for those instead?

Comment by sil-ver on Open & Welcome Thread - February 2020 · 2020-02-29T14:06:14.988Z · score: 4 (3 votes) · LW · GW
signaling wisdom

I see this problem all the time with regard to things that can be classified as "childish". Besides pandemics, the most striking examples in my mind are risk of nuclear war and risk of AI, but I expect there are lots of others. I don't exactly think of it as signaling wisdom, but as signaling being a serious-person-who-understands-that-unserious-problems-are-low-status (the difference being that it doesn't necessitate thinking of yourself as particularly "smart" or "wise").

Comment by sil-ver on Open & Welcome Thread - February 2020 · 2020-02-29T13:53:44.163Z · score: 1 (1 votes) · LW · GW

https://predictionbook.com/predictions/198261

Comment by sil-ver on What to make of Aubrey de Grey's prediction? · 2020-02-28T21:05:57.386Z · score: 3 (2 votes) · LW · GW
You can check out my attempt on Metaculus to capture the essence of his claim, though it's debatable whether I succeeded. Right now Metaculus says there's a 75% chance of something culturally significant happening in anti-aging research in the 2020s.

This is good. However, you did set the bar for positive resolution a lot lower than I would have based on what he claimed this time around.

Comment by sil-ver on What to make of Aubrey de Grey's prediction? · 2020-02-28T20:51:15.776Z · score: 1 (1 votes) · LW · GW

My bad for being unclear. We're strictly talking about the perception of it being a fact. I did not intend to include any factual claim in my paraphrasing.

I added an "alleged" in there. I think it's a pretty safe bet that Aubrey considers it a fact.

Comment by sil-ver on Quarantine Preparations · 2020-02-25T16:59:32.280Z · score: 2 (2 votes) · LW · GW

I don't think there is a UDT-idea that prescribes cooperating with non-UDT agents. UDT is sufficiently formalized that we know what happens if a UDT agent plays a prisoner's dilemma with a CDT agent and both parties know each other's algorithm/code: they both defect.

If you want to cooperate out of altruism, I think the solution is to model the game differently. The payoffs that go into the game-theory model should be whatever your utility function says, not just your own well-being. So if you value the other person's well-being as much as yours, then you don't have a prisoner's dilemma, because cooperate/defect is a better outcome for you than defect/defect.
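A worked toy example of that re-modeling (my own illustration; the payoff numbers are the usual made-up prisoner's-dilemma values, not anything from the post):

    # Payoffs as (mine, theirs) for (my move, their move) in a standard prisoner's dilemma.
    payoffs = {
        ("C", "C"): (3, 3),
        ("C", "D"): (0, 5),
        ("D", "C"): (5, 0),
        ("D", "D"): (1, 1),
    }

    # If I value the other person's well-being as much as my own, my utility
    # for an outcome is the sum of both payoffs rather than my payoff alone.
    def my_utility(outcome):
        mine, theirs = payoffs[outcome]
        return mine + theirs

    print(my_utility(("C", "D")))  # 5: I cooperate, they defect
    print(my_utility(("D", "D")))  # 2: we both defect -- so cooperate/defect beats defect/defect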

by buying food they are limiting other's chances to buy it.

But they're only doing that if there will, in fact, be a supply shortage. That was my initial point – it depends on how many other people will stockpile food.

Comment by sil-ver on Quarantine Preparations · 2020-02-25T13:22:55.758Z · score: 4 (3 votes) · LW · GW

Thanks; fixed & will try to remember.

Comment by sil-ver on Quarantine Preparations · 2020-02-25T12:58:19.659Z · score: 11 (3 votes) · LW · GW
If we use some variant of UDT, the same line of reasoning is experienced by many other minds and we should reason as if we have causal power over all these minds.

As I understand UDT, this isn't right. UDT 1.1 chooses an input-output mapping that maximizes expected utility. Even assuming that all people who read LW run UDT 1.1, this choice still only determines the input-output behavior of a couple of programs (humans). The outputs of programs that aren't running UDT don't depend on our outputs and are held constant. Therefore, if you formalized this problem, UDT's output could be "stockpile food" even if [every human doing that] would lead to a disaster.

I think "pretend as if everyone runs UDT" was neither intentioned by Wei Dei nor is it a good idea.
Differently put, UDT agents don't cooperate in a one-shot prisoner's dilemma if they play vs. CDT agents.

Also: if a couple of people stockpile food, but most people don't, that seems like a preferable outcome to everyone doing nothing (provided stockpiling food is worth doing). It means some get to prepare, and the food market isn't significantly affected. So this particular situation actually doesn't seem to be isomorphic to the prisoner's dilemma (if modeled via game theory).

Comment by sil-ver on Quarantine Preparations · 2020-02-25T12:11:04.943Z · score: 6 (4 votes) · LW · GW

The advice of this post seems to be advice on the margin (i.e., assuming everything else is held constant), which seems reasonable given that this one post won't change collective behavior by much.

So the question isn't "what happens if everyone stockpiles food?" but rather, "do we expect enough people to stockpile food that stockpiling more food will lead to bad consequences?". I don't know the answer to that one.

Comment by sil-ver on UML XI: Nearest Neighbor Schemes · 2020-02-17T08:26:38.480Z · score: 1 (1 votes) · LW · GW

That sounds interesting. Can you share an example other than decision trees?

Comment by sil-ver on When None Dare Urge Restraint · 2020-02-15T17:36:34.440Z · score: 1 (1 votes) · LW · GW

I'm not sure EY meant to imply that the response is factually correct. Smarter-than-expected could just mean "not a totally vapid applause light." A wrong but genuine response could meet that standard.

Comment by sil-ver on UML VIII: Linear Predictors (2) · 2020-02-13T17:24:04.542Z · score: 2 (2 votes) · LW · GW

It's supposed to be inf (the infimum), which is the same as the minimum whenever the minimum exists – but sometimes the minimum doesn't exist.

Suppose the set is the open interval (0,1) – so 1 itself is not included – and the point is 3. Then the set of distances {|3 − a| : a ∈ (0,1)} doesn't have a smallest element. Something like |3 − 0.999| = 2.001 is pretty close, but you can always find a pair that's even closer. So the distance is defined as the largest lower bound on the set of distances, which is the infimum – in this case 2.
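In symbols (my formalization of the example above, not necessarily the original comment's notation):

    d(x, A) = \inf_{a \in A} |x - a|, \qquad d(3, (0,1)) = \inf_{a \in (0,1)} |3 - a| = 2,

and the infimum is not attained by any particular a, which is why a minimum would be undefined here.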

Comment by sil-ver on A Simple Introduction to Neural Networks · 2020-02-10T22:55:17.146Z · score: 7 (4 votes) · LW · GW

Okay – since I don't actually know what is used in practice, I just added a bit paraphrasing your correction (which is consistent with a quick google search), but not selling it as my own idea. Stuff like this is the downside of someone who is just learning the material writing about it.

Comment by sil-ver on A Simple Introduction to Neural Networks · 2020-02-10T08:31:28.940Z · score: 4 (2 votes) · LW · GW
What's the "ℓ"? (I'm unclear on how one iterates from L to 2.)

L is the number of layers. So if it's 5 layers, then ℓ runs through 5, 4, 3, 2. It's one fewer transformation than the number of layers because there is only one between each pair of adjacent layers.
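A minimal sketch of the indexing (my own illustration; I'm assuming ℓ labels the transformation that produces layer ℓ from layer ℓ−1, which matches "one fewer transformation than the number of layers"):

    L = 5  # number of layers

    # Forward: one transformation per pair of adjacent layers, i.e. L - 1 of them.
    for l in range(2, L + 1):
        print(f"apply the transformation into layer {l}")    # l = 2, 3, 4, 5

    # Backward (as in backpropagation), which is where iterating "from L to 2" comes from:
    for l in range(L, 1, -1):
        print(f"propagate the error back through layer {l}")  # l = 5, 4, 3, 2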

Absolute value, because bigger errors are quadratically worse, it was tried and it worked better, or tradition?

I genuinely don't know. I've wondered forever why squaring is so popular. It's not just in ML, but everywhere.

My best guess is that it's in some fundamental sense more natural. Suppose you want to guess a location on a map. In that case, the obvious error would be the straight-line distance between you and the target. If your guess is (x, y) and the correct location is (x*, y*), then the distance is sqrt((x − x*)^2 + (y − y*)^2) – that's just how distances are computed in 2-dimensional space. (Draw a triangle between both points and use the Pythagorean theorem.) Now there's a square root, but actually the square root doesn't matter for the purposes of minimization – the square root is minimal if and only if the thing under the root is minimal, so you might as well minimize (x − x*)^2 + (y − y*)^2. The same is true in 3-dimensional space or n-dimensional space. So if general distance in abstract vector spaces works like the straight-line distance does in geometric space, then squared error is the way to go.
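Written out (my notation), for a guess (x, y) and a target (x*, y*):

    d = \sqrt{(x - x^*)^2 + (y - y^*)^2}, \qquad \arg\min d = \arg\min d^2 = \arg\min \big[ (x - x^*)^2 + (y - y^*)^2 \big],

and in n dimensions the term under the root becomes \sum_{i=1}^{n} (x_i - x_i^*)^2, i.e., exactly the sum of squared errors.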

Also, thanks :)

Comment by sil-ver on Some quick notes on hand hygiene · 2020-02-09T10:02:23.935Z · score: 2 (2 votes) · LW · GW

Good post. I would actually argue that the cost of many short activities of a few seconds each is much lower than the cost of a single block of the same total number of seconds, because taking small breaks in between work isn't zero value.

Comment by sil-ver on Some quick notes on hand hygiene · 2020-02-06T15:21:20.148Z · score: 1 (1 votes) · LW · GW
Have you been doing something that puts stuff on your hands that is not already spread everywhere you are and will touch, or that for some reason has caused a significantly higher concentration on your hands versus the environment?

Don't think so.

Bite your fingernails, or stick your fingers or hands on/in your mouth a lot? Stop, or be aware of what you've been touching since the last cleaning.

That's not at all practical, though. Changing a habit such as biting fingernails is extremely difficult, and definitely not worth it to reduce the risk of getting a virus.

Comment by sil-ver on Meta-Preference Utilitarianism · 2020-02-06T11:28:01.023Z · score: 1 (1 votes) · LW · GW

To make Wei Dai's answer more concrete, suppose something like the symmetry theory of valence is true; in that case, there's a crisp, unambiguous formal characterization of all valence. Then add open individualism to the picture, and it suddenly becomes a lot more plausible that many civilizations converge not just towards similar ethics, but exactly identical ethics.

Comment by sil-ver on Some quick notes on hand hygiene · 2020-02-06T11:18:24.246Z · score: 11 (11 votes) · LW · GW

What's missing for me here is a quantitative argument for why this is actually worth doing. Washing your hands more often would reduce risk, but is it actually worth the effort? (And for me there's also the problem that my doctor literally instructed me to wash my hands less often because of a skin infection thing.)

Comment by sil-ver on Category Theory Without The Baggage · 2020-02-04T17:24:55.429Z · score: 1 (1 votes) · LW · GW

I believe query and target category are the same here, but after reading it again, I see that I don't fully understand the respective paragraph.

Comment by sil-ver on Category Theory Without The Baggage · 2020-02-04T13:24:40.528Z · score: 1 (1 votes) · LW · GW

I think the query category is the pattern, as you say, and the target category is [original category + copy + edges between them]. That way, if the matching process returns a match, that match corresponds to a path-that-is-equivalent-to-the-path-in-the-query-category.

Comment by sil-ver on Category Theory Without The Baggage · 2020-02-04T13:20:57.603Z · score: 5 (3 votes) · LW · GW
e.g. “colou*r” matches “color” or “colour” but not “pink”.

Is this correct? I'd have thought "colo*r" matches both "color" and "colour", but "colou*r" matches only "colour".

Next-most complicated

Least complicated?

I'm very likely to read every post you write on this topic – I got this book a while ago, and while it's not a priority right now, I do intend to read it, and having two different sources explaining the material from two explicitly different angles is quite nice. (I'm mentioning this to give you an idea of what kind of audience gets value out of your post; I can't judge whether it's an answer to your category resource question, although it seems very good to me.)

I initially thought that the clouds were meant to depict matches and was wondering why they weren't what I expected, before realizing that they always depict the same stuff – they're meant to show "all stuff" before we figure out what the matches are.

Comment by sil-ver on REVISED: A drowning child is hard to find · 2020-01-31T19:26:31.322Z · score: 3 (6 votes) · LW · GW

(This is a general comment about the argument, not about the revisions.)

Neither scenario suggests that small donors should try to fill this funding gap. If they trust big donors, they should just give to the big donors. If they don't, why should they believe a story clearly meant to extract money from them?

Because some people are trustworthy and others aren't.

The reason why I believe the EA claims is pretty simple: I trust the people making them. The fact that there is a lot of altruistic value sort of lying on the sidewalks may be a priori surprising, but we have so much evidence that maximizing altruism is extremely rare that I don't see much of an argument left at this point. EY made this point in Inadequate Equilibria:

Eliezer: Well, mostly I’m implying that maximizing altruism is incredibly rare, especially when you also require sufficiently precise reasoning that you aren’t limited to cases where the large-scale, convincing study has already been done; and then we’re demanding the executive ability to start a new project on top of that. But yes, I’m also saying that here on Earth we have much more horrible problems to worry about.
Comment by sil-ver on Open & Welcome Thread - December 2019 · 2020-01-08T10:43:14.477Z · score: 1 (1 votes) · LW · GW
When I learned probability, we were basically presented with a random variable X, told that it could occupy a bunch of different values, and asked to calculate what the average/expected value is based on the frequencies of what those different values could be. So you start with a question like "we roll a die. here are all the values it could be and they all happen one-sixth of the time. Add each value multiplied by one-sixth to each other to get the expected value." This framing naturally leads to definition (1) when you expand to continuous random variables.

That's a strong steelman of the status quo in cases where random variables are introduced as you describe. I'll concede that (1) is fine in this case. I'm not sure it applies to cases (lectures) where probability spaces are formally introduced – but maybe it does; maybe other people still don't think of RVs as functions, even if that's what they technically are.
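For reference, the two views being contrasted, as I read the thread (my formalization, so the exact numbering is an assumption): definition (1) works with the values of X directly, while the measure-theoretic view treats X as a function on a probability space.

    (1) \; E[X] = \sum_x x \, P(X = x), \quad \text{or } E[X] = \int x f(x) \, dx \text{ for a density } f
    (2) \; X : \Omega \to \mathbb{R} \text{ on } (\Omega, \mathcal{F}, P), \qquad E[X] = \int_\Omega X \, dP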

Comment by sil-ver on [AN #80]: Why AI risk might be solved without additional intervention from longtermists · 2020-01-04T21:25:56.219Z · score: 5 (3 votes) · LW · GW
value-conditioned probabilities

Is this a thing or something you just coined? "Probability" has a meaning; I'm totally against using it for things that aren't that.

I get why the argument is valid for deciding what we should do – and you could argue that's the only important thing. But it doesn't make it more likely that our world is robust, which is what the post was claiming. It's not about probability; it's about EV.