Comments

Comment by Charlie Sanders (charlie-sanders-1) on AGI Ruin: A List of Lethalities · 2022-06-07T17:17:24.987Z · LW · GW

I'd like to propose a test to objectively quantify the average observer's skepticism of the doomsday prophesying in a given text. My suggestion is this: take the text, swap its subject of doom (in this case AGI) with that of another text spelling out humanity's impending doom - for example, a lecture on Scientology and Thetans, or on the Jonestown massacre - and present the two texts to independent observers, in the same vein as a Turing test.
 

If an independent observer cannot reliably identify which subject of doom corresponds to which text, then that could serve as an effective benchmark for when a specific text has transitioned away from effectively conveying information and towards fearmongering.
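As a sketch of how the benchmark could be scored (the rater data and numbers here are hypothetical placeholders, not a real study):

```python
import random
from statistics import mean

# Hypothetical scoring sketch for the proposed swap test. Each trial: a
# rater reads a subject-swapped pair of doom texts and guesses which one
# was originally about AGI (1 = correct guess, 0 = incorrect).
random.seed(42)
guesses = [1 if random.random() < 0.55 else 0 for _ in range(200)]  # placeholder data

accuracy = mean(guesses)
print(f"identification accuracy: {accuracy:.2f}")
# Accuracy statistically indistinguishable from 0.5 (chance) means raters
# can't tell the AGI text apart from Scientology- or Jonestown-style doom
# rhetoric - i.e., the text fails the benchmark.
```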

Comment by Charlie Sanders (charlie-sanders-1) on Science in a High-Dimensional World · 2022-01-18T20:18:12.326Z · LW · GW

I think this post would be stronger if it covered at least basic metrology and statistics. 

It's incorrect to say that billions of variables aren't affecting a sled sliding down a hill - of course they're affecting its speed, even if most are only doing so by a few Planck lengths per hour. But, crucially, they're mostly not affecting it by a detectable amount. The detectability threshold is the key to the argument.

For detectability, whether you notice the effects of outside variables is going to come down to the precision of the instrument that you're using to measure your output. If you're using a radar gun that gives readings to the nearest MPH, for example, you won't perceive a difference between 10.1 and 10.2 MPH, and so to you the two are equivalent. Nonetheless, outside variables have absolutely influenced the two readings differently. 
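A minimal sketch of that quantization effect, with made-up numbers:

```python
# Two sled runs whose true speeds genuinely differ...
true_speeds = [10.1, 10.2]

# ...read out on a radar gun that rounds to the nearest MPH.
readings = [round(speed) for speed in true_speeds]

print(readings)                    # [10, 10]
print(readings[0] == readings[1])  # True: the difference is real but undetectable
```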

Equally critical is the number of measurements you're taking. For example, if you're taking repeated measurements after controlling a certain set of variables, you may be able to say with a certain confidence/reliability that no other variables are causing enough variation in speed to push the output outside the parameters you've set. But that is a very different thing from saying that those other variables simply don't exist! One is a statement of probability, the other a statement of certainty. Maybe there's a confluence of variables that occurs only once every thousand runs, which you won't pick up in an initial evaluation.
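A quick simulation sketch of that failure mode (all parameters invented for illustration):

```python
import random

random.seed(0)

def measure_speed():
    """One sled run: ordinary noise, plus a rare confluence of variables
    that occurs roughly once in a thousand runs."""
    speed = 10.0 + random.gauss(0, 0.05)  # everyday variation
    if random.random() < 0.001:           # the rare confluence
        speed += 1.0                      # a large, real effect
    return speed

# A 30-run initial evaluation will almost certainly look "in control"...
pilot = [measure_speed() for _ in range(30)]
print(f"pilot spread: {max(pilot) - min(pilot):.2f} MPH")

# ...while a long enough campaign eventually surfaces the outliers.
campaign = [measure_speed() for _ in range(100_000)]
print(sum(1 for s in campaign if s > 10.5), "outliers in 100,000 runs")
```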

Comment by Charlie Sanders (charlie-sanders-1) on Biology-Inspired AGI Timelines: The Trick That Never Works · 2021-12-09T21:08:37.081Z · LW · GW

  1. The size of the community working on the alignment problem can be assumed to be at least somewhat proportional to the likelihood of successfully solving the alignment problem.
  2. Eliezer, being the most public face of the alignment problem community, wields outsized influence in shaping public perception of the community.
  3. Eliezer's writing is distinctly condescending and polemical, and has at least a hypothetical possibility of causing reputational harm to the community (as evidenced by your comment).

Based on this, there exists a hypothetical point at which, based purely on writing style, the net effect of a post like this would fully undermine its ostensible aim. Whether this post crosses that point is a subjective evaluation, and I don't know of any rigorous way to evaluate it.

I'm fully aware that this could be construed as "tone policing", but ignorance of the impact of writing tone seems like a blind spot for Eliezer and the community overall, so I think the topic is worthy of discussion.

Comment by Charlie Sanders (charlie-sanders-1) on Whole Brain Emulation: No Progress on C. elegans After 10 Years · 2021-10-04T15:05:37.834Z · LW · GW

Imagine you have two points, A and B. You're at A, and you can see B in the distance. How long will it take you to get to B?

Well, you're a pretty smart fellow. You measure the distance, you calculate your rate of progress, maybe you're extra clever and throw in a factor of safety to account for any irregularities during the trip. And you figure that you'll get to point B in a year or so.

Then you start walking.

And you run into a wall. 

Turns out, there's a maze in between you and point B. Huh, you think. Well, that's okay - I put a factor of safety into my calculations, so I should be fine. You pick a direction, and you keep walking.

You run into more walls.

You start to panic. You figured this would only take you a year, but you keep running into new walls! At one point, you even realize that the path you've been on is a dead end — it physically can't take you from point A to point B, and all of the time you've spent on your current path has been wasted, forcing you to backtrack to the start.

Fundamentally, this is what I see happening in various industries: brain scanning, self-driving cars, clean energy, interstellar travel, AI development. The list goes on.

Laymen see a point B in the distance, where we have self-driving cars running on green energy, powered by AGIs. They see where we are now. They figure they can estimate how long it'll take to get to that point B, slap on a factor of safety, and make a prediction.

But the real world of problem solving is akin to a maze. And there's no way to know the shape or complexity of that maze until you actually start along the path. You may think you know the theoretical complexity of the maze you'll encounter in advance, but you can't.
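Here's a toy version of that gap (the maze is made up): the naive estimate is the straight-line distance from A to B, but the distance you actually walk is set by walls you can't see from A.

```python
from collections import deque

# Toy maze (made up for illustration). 'A' is where you stand, 'B' is the
# point you can "see in the distance", and '#' is a wall.
MAZE = [
    "A....#....",
    "####.#.##.",
    ".....#..#.",
    ".#####..#.",
    ".........B",
]

def find(target):
    """Locate a character in the maze grid."""
    for r, row in enumerate(MAZE):
        if target in row:
            return (r, row.index(target))

def steps_through_maze(start, goal):
    """Breadth-first search: the number of steps you actually walk."""
    seen, frontier = {start}, deque([(start, 0)])
    while frontier:
        (r, c), dist = frontier.popleft()
        if (r, c) == goal:
            return dist
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < len(MAZE) and 0 <= nc < len(MAZE[0])
                    and MAZE[nr][nc] != '#' and (nr, nc) not in seen):
                seen.add((nr, nc))
                frontier.append(((nr, nc), dist + 1))

a, b = find('A'), find('B')
estimate = abs(a[0] - b[0]) + abs(a[1] - b[1])  # "measure the distance": 13
actual = steps_through_maze(a, b)               # what the walls force: 21
print(estimate, actual)
```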

Comment by Charlie Sanders (charlie-sanders-1) on Strong Evidence is Common · 2021-03-17T06:35:50.615Z · LW · GW

One implication of the Efficient Market Hypothesis (EMH) is that it is difficult to make money on the stock market. Generously, maybe only the top 1% of traders will be profitable.

Nitpick: it's incredibly easy to make money on the stock market - just put your money in it, ideally in an index fund. It goes up by an average of 8% a year. Almost all traders will be profitable, although many won't beat that 8% average.

The entire FIRE movement is predicated on it being incredibly simple to make money on the stock market. It takes absolutely zero skill to be a sufficiently profitable trader, given a sizeable enough initial investment. 
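For instance, a toy buy-and-hold calculation using that 8% average (actual yearly returns vary widely; this is just the compounding arithmetic):

```python
def index_fund_value(principal, years, annual_return=0.08):
    """Value of a buy-and-hold position at a constant average return."""
    return principal * (1 + annual_return) ** years

# $100k left untouched for 30 years at the ~8% historical average:
print(f"${index_fund_value(100_000, 30):,.0f}")  # ~$1,006,266
```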

I get that you're trying to convey above-market-rate returns here, but your wording is imprecise. 

Comment by Charlie Sanders (charlie-sanders-1) on The case for aligning narrowly superhuman models · 2021-03-10T17:44:42.428Z · LW · GW

Right, but I'm not sure how you'd "test" for success in that scenario. Usefulness to humanity, as demonstrated by effective product use, seems to me like the only way to get a rigorous result. If you can't measure the success or failure of an idea objectively, then the idea probably isn't going to matter much. 

Comment by Charlie Sanders (charlie-sanders-1) on The case for aligning narrowly superhuman models · 2021-03-09T21:42:43.581Z · LW · GW

On fuzzy tasks: I think the appropriate frame of comparison is neither the average human (Mechanical Turk) nor the ideal human (Go), but instead the median resource that someone would be reasonably likely to seek out. To use healthcare as an example, you'd want your AI to beat the average family doctor that most people would reach out to, as opposed to either a layman's opinion or the preeminent doctor in the field.

Comment by Charlie Sanders (charlie-sanders-1) on In Addition to Ragebait and Doomscrolling · 2020-12-06T07:12:23.989Z · LW · GW

https://www.youtube.com/watch?v=vRBsaJPkt2Q

If you’re interested in this topic and have an hour and a half to burn, there are worse ways to spend it.

Comment by Charlie Sanders (charlie-sanders-1) on When Money Is Abundant, Knowledge Is The Real Wealth · 2020-11-06T17:23:52.793Z · LW · GW

The world would undoubtedly be better if more Data Scientists became monks.