Posts

Using smart thermometer data to estimate the number of coronavirus cases 2020-03-23T04:26:32.890Z
Case Studies Highlighting CFAR’s Impact on Existential Risk 2017-01-10T18:51:53.178Z
Results of a One-Year Longitudinal Study of CFAR Alumni 2015-12-12T04:39:46.399Z
The effect of effectiveness information on charitable giving 2014-04-15T16:43:24.702Z
Practical Benefits of Rationality (LW Census Results) 2014-01-31T17:24:38.810Z
Participation in the LW Community Associated with Less Bias 2012-12-09T12:15:42.385Z
[Link] Singularity Summit Talks 2012-10-28T04:28:54.157Z
Take Part in CFAR Rationality Surveys 2012-07-18T23:57:52.193Z
Meetup : Chicago games at Harold Washington Library (Sun 6/17) 2012-06-13T04:25:05.856Z
Meetup : Weekly Chicago Meetups Resume 5/26 2012-05-16T17:53:54.836Z
Meetup : Weekly Chicago Meetups 2012-04-12T06:14:54.526Z
[LINK] Being proven wrong is like winning the lottery 2011-10-29T22:40:12.609Z
Harry Potter and the Methods of Rationality discussion thread, part 8 2011-08-25T02:17:00.455Z
[SEQ RERUN] Failing to Learn from History 2011-08-09T04:42:37.325Z
[SEQ RERUN] The Modesty Argument 2011-04-23T22:48:04.458Z
[SEQ RERUN] The Martial Art of Rationality 2011-04-19T19:41:19.699Z
Introduction to the Sequence Reruns 2011-04-19T19:39:41.706Z
New Less Wrong Feature: Rerunning The Sequences 2011-04-11T17:01:59.047Z
Preschoolers learning to guess the teacher's password [link] 2011-03-18T04:13:23.945Z
Harry Potter and the Methods of Rationality discussion thread, part 7 2011-01-14T06:49:46.793Z
Harry Potter and the Methods of Rationality discussion thread, part 6 2010-11-27T08:25:52.446Z
Harry Potter and the Methods of Rationality discussion thread, part 3 2010-08-30T05:37:32.615Z
Harry Potter and the Methods of Rationality discussion thread 2010-05-27T00:10:57.279Z
Open Thread: April 2010, Part 2 2010-04-08T03:09:18.648Z
Open Thread: April 2010 2010-04-01T15:21:03.777Z

Comments

Comment by Unnamed on sarahconstantin's Shortform · 2024-10-11T18:01:01.854Z · LW · GW

This post reads like it's trying to express an attitude or put forward a narrative frame, rather than trying to describe the world.

Many of these claims seem obviously false, if I take them at face value and take a moment to consider what they're claiming and whether it's true.

e.g., On the first two bullet points it's easy to come up with counterexamples. Some successful attempts to steer the future, by stopping people from doing locally self-interested & non-violent things, include: patent law ("To promote the progress of science and useful arts, by securing for limited times to authors and inventors the exclusive right to their respective writings and discoveries") and banning lead in gasoline. As well as some others that I now see that other commenters have mentioned.

Comment by Unnamed on Why Large Bureaucratic Organizations? · 2024-08-28T05:20:19.770Z · LW · GW

In America, people shopped at Walmart instead of local mom & pop stores because it had lower prices and more selection, so Walmart and other chain stores grew and spread while lots of mom & pop stores shut down. Why didn't that happen in Wentworld?

Comment by Unnamed on Monthly Roundup #19: June 2024 · 2024-06-26T00:41:39.620Z · LW · GW

I made a graph of this and the unemployment rate; they're correlated at r=0.66 (with one data point for each time Gallup ran the survey, taking the unemployment rate on the closest day for which there's data). You can see both lines spike with every recession.
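
For concreteness, a minimal sketch of that kind of nearest-date merge and correlation, with made-up numbers standing in for the Gallup series and the unemployment series (the real graph used the actual data):

```python
import numpy as np
import pandas as pd

# Hypothetical stand-ins: one row per Gallup "% dissatisfied with their lives"
# reading, plus a monthly unemployment-rate series (e.g. FRED's UNRATE).
gallup = pd.DataFrame({
    "date": pd.to_datetime(["2008-01-15", "2009-02-10", "2010-03-05", "2011-04-20"]),
    "pct_dissatisfied": [4.0, 6.5, 5.8, 5.2],
})
unemployment = pd.DataFrame({
    "date": pd.date_range("2008-01-01", "2011-12-01", freq="MS"),
    "unrate": np.linspace(5.0, 9.0, 48),
})

# For each survey date, take the unemployment rate on the closest date with data.
merged = pd.merge_asof(
    gallup.sort_values("date"),
    unemployment.sort_values("date"),
    on="date",
    direction="nearest",
)

print(merged["pct_dissatisfied"].corr(merged["unrate"]))  # Pearson r
```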

Comment by Unnamed on Monthly Roundup #19: June 2024 · 2024-06-25T19:33:43.425Z · LW · GW

Are you telling me 2008 did actual nothing?

It looks like 2008 led to about a 1.3x increase in the number of people who said they were dissatisfied with their life.

Comment by Unnamed on [Linkpost] Transcendence: Generative Models Can Outperform The Experts That Train Them · 2024-06-18T20:25:14.260Z · LW · GW

It's common for much simpler Statistical Prediction Rules, such as linear regression or even simpler models, to outperform experts even when they were built to predict the experts' judgment.
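
As a toy illustration of the "bootstrapping the judge" phenomenon (a simulation with invented cue weights and noise levels, not data from the actual studies): fitting a regression to the expert's own judgments strips out the expert's inconsistency while keeping their cue weights, so the model of the expert tracks reality better than the expert does.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Toy setup: cases described by a few cues, a noisy true outcome, and an expert
# who weights the cues sensibly but adds extra noise/inconsistency of their own.
cues = rng.normal(size=(500, 3))
weights = np.array([1.0, 0.5, -0.8])
true_outcome = cues @ weights + rng.normal(0, 1.0, 500)
expert_judgment = cues @ weights + rng.normal(0, 2.0, 500)

# "Bootstrapping the judge": fit a model to predict the expert, then use the model.
model_of_expert = LinearRegression().fit(cues, expert_judgment)
spr_prediction = model_of_expert.predict(cues)

print(np.corrcoef(expert_judgment, true_outcome)[0, 1])  # expert vs. reality
print(np.corrcoef(spr_prediction, true_outcome)[0, 1])   # model of expert vs. reality
```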

Comment by Unnamed on Underrated Proverbs · 2024-06-13T15:47:22.531Z · LW · GW

Or "Defense wins championships."

Comment by Unnamed on D&D.Sci Alchemy: Archmage Anachronos and the Supply Chain Issues · 2024-06-08T01:10:10.410Z · LW · GW

With the ingredients he has, he has gotten a successful Barkskin Potion:

1 of the 1 times (100%) he brewed together Crushed Onyx, Giant's Toe, Ground Bone, Oaken Twigs, Redwood Sap, and Vampire Fang. 

19 of the 29 times (66%) he brewed together Crushed Onyx, Demon Claw, Ground Bone, and Vampire Fang.

Only 2 other combinations of the in-stock ingredients have ever produced Barkskin Potion, both at under a 50% rate (4/10 and 18/75).

The 4-ingredient, 66% success rate potion looks like the best option if we're just going to copy something that has worked. That's what I'd recommend if I had to make the decision right now.

Many combinations that used currently-missing ingredients reliably (100%) produced Barkskin Potion many times (up to 118/118). There may be a variant on one of those, which he has never tried, that could work better than 66% of the time using only ingredients that he has. Or there may be information in there about the reliability of the 6-ingredient combination which worked once.
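
A sketch of the kind of tabulation behind those numbers, assuming a dataset with one row per brewing attempt, several ingredient columns, and a result column (the column names and file name here are guesses, not the scenario's actual ones):

```python
import pandas as pd

# Assumed layout: one row per brewing attempt, ingredient names spread across
# columns like "Ingredient 1".."Ingredient 6", and a "Result" column.
df = pd.read_csv("brews.csv")

ingredient_cols = [c for c in df.columns if c.lower().startswith("ingredient")]
df["combo"] = df[ingredient_cols].apply(
    lambda row: tuple(sorted(x for x in row if pd.notna(x))), axis=1
)

# Ingredients mentioned above as being in stock (there may be others).
in_stock = {"Crushed Onyx", "Demon Claw", "Giant's Toe", "Ground Bone",
            "Oaken Twigs", "Redwood Sap", "Vampire Fang"}

rates = (
    df.assign(success=df["Result"].eq("Barkskin Potion"))
      .groupby("combo")["success"]
      .agg(successes="sum", attempts="count", rate="mean")
      .sort_values("rate", ascending=False)
)

# Restrict to combinations brewable from ingredients currently in stock.
brewable = rates[rates.index.map(lambda combo: set(combo) <= in_stock)]
print(brewable.head(10))
```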

Comment by Unnamed on Ideas for Next-Generation Writing Platforms, using LLMs · 2024-06-04T22:17:02.061Z · LW · GW

Being Wrong on the Internet: The LLM generates a flawed forum-style comment, such that the thing you've been wanting to write is a knockdown response to this comment, and you can get a "someone"-is-wrong-on-the-internet drive to make the points you wanted to make. You can adjust how thoughtful/annoying/etc. the wrong comment is.

Target Audience Personas: You specify the target audience that your writing is aimed at, or a few different target audiences. The LLM takes on the persona of a member of that audience and engages with what you've written, with more explicit explanation of how that persona is reacting and why than most actual humans would give. The structure could be like comments on Google Docs.

Heat Maps: Color the text with a heat map of how interested the LLM expects the reader to be at each point in the text, or how confused, how angry, how amused, how disagreeing, how much they're learning, how memorable it is, etc. Could be associated with specific target audiences.

Comment by Unnamed on Value Claims (In Particular) Are Usually Bullshit · 2024-05-30T08:28:07.241Z · LW · GW

I don't think that the key element in the aging example is 'being about value claims'. Instead, it's that the question of what's healthy is one that many people wonder about. Since many people wonder about that question, some people will venture an answer, even if humanity hasn't yet built up enough knowledge to have an accurate one.

Thousands of years ago many people wondered what the deal is with the moon and some of them made up stories about this factual (non-value) question whose correct answer was beyond them. And it plays out similarly these days with rumors/speculation/gossip about the topics that grab people's attention. Where curiosity & interest exceeds knowledge, speculation will fill the gaps, sometimes taking on a similar presentation to knowledge.

Note the dynamic in your aging example: when you're in a room with 5+ people and you mention that you've read a lot about aging, someone asks the question about what's healthy. No particular answer needs to be memetic because it's the question that keeps popping up and so answers will follow. If we don't know a sufficiently good/accurate/thorough answer then the answers that follow will often be bullshit, whether that's a small number of bullshit answers that are especially memetically fit or whether it's a more varied and changing froth of made-up answers.

There are some kinds of value claims that are pretty vague and floaty, disconnected from entangled truths and empirical constraints. But that is not so true of instrumental claims about things like health, where (e.g.) the claim that smoking causes lung cancer is very much empirical & entangled. You might still see a lot of bullshit about these sorts of instrumental value claims, because people will wonder about the question even if humanity doesn't have a good answer. It's useful to know (e.g.) what foods are healthy, so the question of what foods are healthy is one that will keep popping up when there's hope that someone in the room might have some information about it.

Comment by Unnamed on robo's Shortform · 2024-05-23T16:02:26.159Z · LW · GW

7% of the variance isn't negligible. Just look at the pictures (Figure 1 in the paper):

Comment by Unnamed on D&D.Sci (Easy Mode): On The Construction Of Impossible Structures · 2024-05-17T01:54:27.454Z · LW · GW

 I got the same result: DEHK.

I'm not sure that there are no patterns in what works for self-taught architects, and if we were aiming to balance cost & likelihood of impossibility then I might look into that more (since I expect A, L, N to be the cheapest options with a chance to work), but since we're prioritizing impossibility I'll stick with the architects with the competent mentors.

Comment by Unnamed on elifland's Shortform · 2024-05-16T21:36:02.186Z · LW · GW

 Moore & Schatz (2017) made a similar point about different meanings of "overconfidence" in their paper The three faces of overconfidence. The abstract:

Overconfidence has been studied in 3 distinct ways. Overestimation is thinking that you are better than you are. Overplacement is the exaggerated belief that you are better than others. Overprecision is the excessive faith that you know the truth. These 3 forms of overconfidence manifest themselves under different conditions, have different causes, and have widely varying consequences. It is a mistake to treat them as if they were the same or to assume that they have the same psychological origins.

Though I do think that some of your 6 different meanings are different manifestations of the same underlying meaning.

Calling someone "overprecise" is saying that they should increase the entropy of their beliefs. In cases where there is a natural ignorance prior, it is claiming that their probability distribution should be closer to the ignorance prior. This could sometimes mean closer to 50-50 as in your point 1, e.g. the probability that the Yankees will win their next game. This could sometimes mean closer to 1/n as with some cases of your points 2 & 6, e.g. a 1/30 probability that the Yankees will win the next World Series (as they are 1 of 30 teams).

In cases where there isn't a natural ignorance prior, saying that someone should increase the entropy of their beliefs is often interpretable as a claim that they should put less probability on the possibilities that they view as most likely. This could sometimes look like your point 2, e.g. if they think DeSantis has a 20% chance of being US President in 2030, or like your point 6. It could sometimes look like widening their confidence interval for estimating some quantity.
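
To make the entropy framing explicit (my own formalization, not anything from Moore & Schatz):

```latex
% Entropy of a belief distribution over n mutually exclusive outcomes:
H(p) \;=\; -\sum_{i=1}^{n} p_i \log p_i ,
\qquad
H_{\max} \;=\; H\!\left(\tfrac{1}{n},\ldots,\tfrac{1}{n}\right) \;=\; \log n .
% The uniform ("ignorance") prior maximizes entropy: 50-50 for a binary event,
% 1/30 per team for the World Series. A charge of overprecision says H(p) is too
% low, i.e. probability mass should move back toward that uniform prior (or, when
% estimating a quantity, the confidence interval should widen).
```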

Comment by Unnamed on D&D.Sci Long War: Defender of Data-mocracy · 2024-05-06T17:27:57.778Z · LW · GW

You can go ahead and post.

I did a check and am now more confident in my answer, and I'm not going to try to come up with an entry that uses fewer soldiers.

Comment by Unnamed on D&D.Sci Long War: Defender of Data-mocracy · 2024-05-06T05:13:19.216Z · LW · GW

Just got to this today. I've come up with a candidate solution just to try to survive, but haven't had a chance yet to check & confirm that it'll work, or to try to get clever and reduce the number of soldiers I'm using.

10 Soldiers armed with: 3 AA, 3 GG, 1 LL, 2 MM, 1 RR

I will probably work on this some more tomorrow.

Comment by Unnamed on [deleted post] 2024-04-08T22:14:04.988Z

Building a paperclipper is low-value (from the point of view of total utilitarianism, or any other moral view that wants a big flourishing future) because paperclips are not sentient / are not conscious / are not moral patients / are not capable of flourishing. So filling the lightcone with paperclips is low-value. It maybe has some value for the sake of the paperclipper (if the paperclipper is a moral patient, or whatever the relevant category is) but way less than the future could have.

Your counter is that maybe building an aligned AI is also low-value (from the point of view of total utilitarianism, or any other moral view that wants a big flourishing future) because humans might not much care about having a big flourishing future, or might even actively prefer things like preserving nature. 

If a total utilitarian (or someone who wants a big flourishing future in our lightcone) buys your counter, it seems like the appropriate response is: Oh no! It looks like we're heading towards a future that is many orders of magnitude worse than I hoped, whether or not we solve the alignment problem. Is there some way to get a big flourishing future? Maybe there's something else that we need to build into our AI designs, besides "alignment". (Perhaps mixed with some amount of: Hmmm, maybe I'm confused about morality. If AI-assisted humanity won't want to steer towards a big flourishing future then maybe I've been misguided in having that aim.)

Whereas this post seems to suggest the response of: Oh well, I guess it's a dice roll regardless of what sort of AI we build. Which is giving up awfully quickly, as if we had exhausted the design space for possible AIs and seen that there was no way to move forward with a large chance at a big flourishing future. This response also doesn't seem very quantitative - it goes very quickly from the idea that an aligned AI might not get a big flourishing future, to the view that alignment is "neutral" as if the chances of getting a big flourishing future were identically small under both options. But the obvious question for a total utilitarian who does wind up with just 2 options, each of which is a dice roll, is Which set of dice has better odds?

Comment by Unnamed on How Often Does ¬Correlation ⇏ ¬Causation? · 2024-04-02T20:11:06.270Z · LW · GW

Is this calculation showing that, with a big causal graph, you'll get lots of very weak causal relationships between distant nodes that should have tiny but nonzero correlations? And realistic sample sizes won't be able to distinguish those relationships from zero.

Andrew Gelman often talks about how the null hypothesis (of a relationship of precisely zero) is usually false (for, e.g., most questions considered in social science research).

Comment by Unnamed on [April Fools' Day] Introducing Open Asteroid Impact · 2024-04-01T20:54:21.149Z · LW · GW

A lot of people have this sci-fi image, like something out of Deep Impact, Armageddon, Don't Look Up, or Minus, of a single large asteroid hurtling towards Earth to wreak massive destruction. Or even massive vengeance, as if it was a punishment for our sins.

But realistically, as the field of asteroid collection gradually advances, we're going to be facing many incoming asteroids which will interact with each other in complicated ways, and whose forces will to a large extent balance each other out.

Yet doomers are somehow supremely confident in how the future will go, foretelling catastrophe. And if you poke at their justifications, they won't offer precise physical models of these many-body interactions, just these mythic stories of Earth vs. a monolithic celestial body.

Comment by Unnamed on LessWrong's (first) album: I Have Been A Good Bing · 2024-04-01T08:28:51.683Z · LW · GW

Try here or here or here.

Comment by Unnamed on My Interview With Cade Metz on His Reporting About Slate Star Codex · 2024-03-27T01:36:36.930Z · LW · GW

They're critical questions, but one of the secret-lore-of-rationality things is that a lot of people think criticism is bad, because if someone criticizes you, it hurts your reputation. But I think criticism is good, because if I write a bad blog post, and someone tells me it was bad, I can learn from that, and do better next time.

I read this as saying 'a common view is that being criticized is bad because it hurts your reputation, but as a person with some knowledge of the secret lore of rationality I believe that being criticized is good because you can learn from it.'

And he isn't making a claim about to what extent the existing LW/rationality community shares his view.

Comment by Unnamed on Evolution did a surprising good job at aligning humans...to social status · 2024-03-11T05:59:06.452Z · LW · GW

Seems like the main difference is that you're "counting up" with status and "counting down" with genetic fitness.

There's partial overlap between people's reproductive interests and their motivations, and you and others have emphasized places where there's a mismatch, but there are also (for example) plenty of people who plan their lives around having & raising kids. 

There's partial overlap between status and people's motivations, and this post emphasizes places where they match up, but there are also (for example) plenty of people who put tons of effort into leveling up their videogame characters, or affiliating-at-a-distance with Taylor Swift or LeBron James, with minimal real-world benefit to themselves.

And it's easier to count up lots of things as status-related if you're using a vague concept of status which can encompass all sorts of status-related behaviors, including (e.g.) both status-seeking and status-affiliation. "Inclusive genetic fitness" is a nice precise concept so it can be clear when individuals fail to aim for it even when acting on adaptations that are directly involved in reproduction & raising offspring.

Comment by Unnamed on Shortform · 2024-03-01T22:09:47.068Z · LW · GW

The economist RH Strotz introduced the term "precommitment" in his 1955-56 paper "Myopia and Inconsistency in Dynamic Utility Maximization".

Thomas Schelling started writing about similar topics in his 1956 paper "An essay on bargaining", using the term "commitment".

Both terms have been in use since then.

Comment by Unnamed on 2023 Survey Results · 2024-02-22T02:10:19.489Z · LW · GW

On one interpretation of the question: if you're hallucinating then you aren't in fact seeing ghosts, you're just imagining that you're seeing ghosts. The question isn't asking about those scenarios, it's only asking what you should believe in the scenarios where you really do see ghosts.

Comment by Unnamed on D&D.Sci(-fi): Colonizing the SuperHyperSphere · 2024-01-17T19:37:47.878Z · LW · GW

My updated list after some more work yesterday is

96286, 9344, 107278, 68204, 905, 23565, 8415, 62718, 83512, 16423, 42742, 94304

which I see is the same as simon's list, with very slight differences in the order

More on my process:

I initially modeled location just by a k nearest neighbors calculation, assuming that a site's location value equals the average residual of its k nearest neighbors (with location transformed to Cartesian coordinates). That, along with linear regression predicting log(Performance), got me my first list of answers. I figured that list was probably good enough to pass the challenge: the sites' predicted performance had a decent buffer over the required cutoff; the known sites with large predicted values did mostly have negative residuals, but those were only about 1/3 the size of the buffer; there were some sites with large negative residuals, but none among the sites with high predicted values, and I probably even had a big enough buffer to withstand 1 of them sneaking in; and the nearest neighbors approach was likely to err mainly by giving overly middling values to sites near a sharp border (averaging across neighbors on both sides of the border), which would cause me to miss some good sites but not to include any bad sites. So it seemed fine to stop my work there.

Yesterday I went back and looked at the residuals and added some more handcrafted variables to my model to account for any visible patterns. The biggest was the sharp cutoff at Latitude +-36. I also changed my rescaling of Murphy's Constant (because my previous attempt had negative residuals for low Murphy values), added a quadratic term to my rescaling of Local Value of Pi (because the dropoff from 3.15 isn't linear), added a Shortitude cutoff at 45, and added a cos(Longitude-50) variable. Still kept the nearest neighbors calculation to account for any other location relevance (there is a little but much less now). That left me with 4 nines of correlation between predicted & actual performance, residuals near zero for the highest predicted sites in the training set, and this new list of sites. My previous lists of sites still seem good enough, but this one looks better.
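
For anyone curious what that kind of pipeline looks like in code, here is a rough sketch under my own assumptions about the dataset's file name, column names, and units; the actual rescalings were tuned by hand and aren't reproduced here:

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.neighbors import KNeighborsRegressor

df = pd.read_csv("sites.csv")  # assumed file name; column names below are guesses

# Handcrafted features mirroring the ones described above (placeholder transforms).
X = pd.DataFrame({
    "murphy": df["Murphy's Constant"],
    "pi_dev": df["Local Value of Pi"] - 3.15,
    "pi_dev_sq": (df["Local Value of Pi"] - 3.15) ** 2,       # quadratic term
    "lat_cut": (df["Latitude"].abs() > 36).astype(float),     # sharp cutoff at +-36
    "short_cut": (df["Shortitude"] > 45).astype(float),
    "cos_long": np.cos(np.deg2rad(df["Longitude"] - 50)),     # assuming degrees
})
y = np.log(df["Performance"])

lin = LinearRegression().fit(X, y)
resid = y - lin.predict(X)

# k nearest neighbors on location to soak up remaining spatial structure in the
# residuals (simplified here to a 3-D conversion of latitude/longitude; the
# scenario's hypersphere has more coordinates).
lat, lon = np.radians(df["Latitude"]), np.radians(df["Longitude"])
coords = np.column_stack([np.cos(lat) * np.cos(lon),
                          np.cos(lat) * np.sin(lon),
                          np.sin(lat)])
knn = KNeighborsRegressor(n_neighbors=10).fit(coords, resid)

df["pred_log_perf"] = lin.predict(X) + knn.predict(coords)
print(df.sort_values("pred_log_perf", ascending=False).head(12))
```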

Comment by Unnamed on D&D.Sci(-fi): Colonizing the SuperHyperSphere · 2024-01-16T01:38:58.597Z · LW · GW

Did a little robustness check, and I'm going to swap out 3 of these to make it:

96286, 23565, 68204, 905, 93762, 94408, 105880, 9344, 8415, 62718, 80395, 65607

To share some more:

I came across this puzzle via aphyer's post, and got inspired to give it a try.

Here is the fit I was able to get on the existing sites (Performance vs. Predicted Performance). Some notes on it:

Seems good enough to run with. None of the highest predicted existing sites had a large negative residual, and the highest predicted new sites give some buffer.

Three observations I made along the way. 

First (which is mostly redundant with what aphyer wound up sharing in his second post):

Almost every variable is predictive of Performance on its own, but none of the continuous variables have a straightforward linear relationship with Performance.

Second:

Modeling the effect of location could be tricky. e.g., Imagine on Earth if Australia and Mexico were especially good places for Performance, or on a checkerboard if Performance was higher on the black squares.

Third:

The ZPPG Performance variable has a skewed distribution which does not look like what you'd get if you were adding a bunch of variables, but does look like something you might get if you were multiplying several variables. And multiplication seems plausible for this scenario, e.g. perhaps such-and-such a disturbance halves Performance and this other factor cuts performance by a quarter.
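
A quick way to see that intuition (pure simulation, nothing from the actual dataset): sums of a few bounded random factors come out roughly symmetric, while their products come out right-skewed.

```python
import numpy as np

rng = np.random.default_rng(0)
factors = rng.uniform(0.5, 1.5, size=(100_000, 5))  # five independent multiplicative hits

sums = factors.sum(axis=1)
products = factors.prod(axis=1)

def skewness(x):
    return float(((x - x.mean()) ** 3).mean() / x.std() ** 3)

print("sum:     mean=%.2f median=%.2f skew=%.2f" % (sums.mean(), np.median(sums), skewness(sums)))
print("product: mean=%.2f median=%.2f skew=%.2f" % (products.mean(), np.median(products), skewness(products)))
```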

Comment by Unnamed on D&D.Sci(-fi): Colonizing the SuperHyperSphere · 2024-01-15T00:06:58.534Z · LW · GW

My current choices (in order of preference) are

96286, 23565, 68204, 905, 93762, 94408, 105880, 8415, 94304, 42742, 92778, 62718

Comment by Unnamed on Prediction markets are consistently underconfident. Why? · 2024-01-11T06:17:40.790Z · LW · GW

What's "Time-Weighted Probability"? Is that just the average probability across the lifespan of the market? That's not a quantity which is supposed to be calibrated.

e.g., Imagine a simple market on a coin flip, where forecasts of p(heads) are made at two times: t1 before the flip and t2 after the flip is observed. In half of the cases, the market forecast is 50% at t1 and 100% at t2, for an average of 75%; in those cases the market always resolves True. The other half: 50% at t1, 0% at t2, avg of 25%, market resolves False. The market is underconfident if you take this average, but the market is perfectly calibrated at any specific time.
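
A quick simulation of that two-snapshot market (a sketch of the example above, not anything computed from real market data):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
outcome = rng.random(n) < 0.5        # the coin flips; half the markets resolve True

p_t1 = np.full(n, 0.5)               # forecast before the flip
p_t2 = outcome.astype(float)         # forecast after observing the flip
time_avg = (p_t1 + p_t2) / 2         # time-weighted probability: 0.75 or 0.25

# Each snapshot is calibrated: 50% forecasts resolve True about half the time...
print(outcome[p_t1 == 0.5].mean())
# ...but grouping by the time-averaged probability looks "underconfident":
print(outcome[time_avg == 0.75].mean())  # markets averaging 75% resolve True 100% of the time
print(outcome[time_avg == 0.25].mean())  # markets averaging 25% resolve True 0% of the time
```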

Comment by Unnamed on Bayesians Commit the Gambler's Fallacy · 2024-01-07T19:14:27.058Z · LW · GW

Have you looked at other ways of setting up the prior to see if this result still holds? I'm worried that the way you've set up the prior is not very natural, especially if (as it looks at first glance) the Stable scenario forces p(Heads) = 0.5 and the other scenarios force p(Heads|Heads) + p(Heads|Tails) = 1. Seems weird to exclude "this coin is Headsy" from the hypothesis space while including "this coin is Switchy".

Thinking about what seems most natural for setting up the prior: the simplest scenario is where flips are serially independent. You only need one number to characterize a hypothesis in that space, p(Heads). So you can have some prior on this hypothesis space (serially independent flips), and some prior on p(Heads) for hypotheses within this space. Presumably that prior should be centered at 0.5 and symmetric. There's some choice about how spread out vs. concentrated to make it, but if it just puts all the probability mass at 0.5 that seems too simple.

The next simplest hypothesis space is where there is serial dependence that only depends on the most recent flip. You need two numbers to characterize a hypothesis in this space, which could be p(Heads|Heads) and p(Heads|Tails). I guess it's simplest for those to be independent in your prior, so that (conditional on there being serial dependence), getting info about p(Heads|Heads) doesn't tell you anything about p(Heads|Tails). In other words, you can simplify this two dimensional joint distribution to two independent one-dimensional distributions. (Though in real-world scenarios my guess is that these are positively correlated, e.g. if I learned that p(Prius|Jeep) was high that would probably increase my estimate of p(Prius|Prius), even assuming that there is some serial dependence.) For simplicity you could just give these the same prior distribution as p(Heads) in the serial independence case.

I think that's a rich enough hypothesis space to run the numbers on. In this setup, Sticky hypotheses are those where p(Heads|Heads)>p(Heads|Tails), Switchy are the reverse, Headsy are where p(Heads|Heads)+p(Heads|Tails)>1, Tailsy are the reverse, and Stable are where p(Heads|Heads)=p(Heads|Tails) and get a bunch of extra weight in the prior because they're the only ones in the serially independent space of hypotheses.
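
A rough numerical sketch of that setup, with my own choices of prior shape and example data, just to show the structure rather than to reproduce the paper's calculation:

```python
import numpy as np

grid = np.linspace(0.01, 0.99, 99)   # candidate values for probability parameters

# Prior weight on the two hypothesis spaces.
P_INDEP = 0.5   # serially independent flips: one parameter, p(Heads)
P_DEP = 0.5     # dependence on the previous flip: p(Heads|Heads) and p(Heads|Tails)

def prior_density(x, a=5.0, b=5.0):
    """Symmetric Beta-shaped prior on a probability parameter: centered at 0.5, not a point mass."""
    w = x ** (a - 1) * (1 - x) ** (b - 1)
    return w / w.sum()

def likelihood_indep(flips, p):
    heads = sum(flips)
    return p ** heads * (1 - p) ** (len(flips) - heads)

def likelihood_dep(flips, p_hh, p_ht):
    like = 0.5   # first flip has no previous flip to condition on
    for prev, cur in zip(flips, flips[1:]):
        p_heads = p_hh if prev else p_ht
        like *= p_heads if cur else (1 - p_heads)
    return like

flips = [1, 1, 1, 0, 1, 1, 0, 1]   # example data, 1 = Heads

w = prior_density(grid)

# Marginal likelihood of the data under each hypothesis space.
ml_indep = np.sum(w * likelihood_indep(flips, grid))
ml_dep = sum(
    w[i] * w[j] * likelihood_dep(flips, grid[i], grid[j])
    for i in range(len(grid)) for j in range(len(grid))
)

post_dep = P_DEP * ml_dep / (P_DEP * ml_dep + P_INDEP * ml_indep)
print("posterior probability of serial dependence:", round(float(post_dep), 3))
```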

Comment by Unnamed on Techniques to fix incorrect memorization? · 2024-01-01T23:39:30.426Z · LW · GW

Try memorizing their birthdates (including year).

That might be different enough from what you've previously tried to memorize (month & day) to not get caught in the tangle that has developed.

Comment by Unnamed on AI Views Snapshots · 2023-12-14T20:50:38.397Z · LW · GW

My answer to "If AI wipes out humanity and colonizes the universe itself, the future will go about as well as if humanity had survived (or better)" is pretty much defined by how the question is interpreted. It could swing pretty wildly, but the obvious interpretation seems ~tautologically bad.

Agreed, I can imagine very different ways of getting a number for that, even given probability distributions for how good the future will be conditional on each of the two scenarios.

A stylized example: say that the AI-only future has a 99% chance of being mediocre and a 1% chance of being great, and the human future has a 60% chance of being mediocre and a 40% chance of being great. Does that give an answer of 1% or 60% or something else?

I'm also not entirely clear on what scenario I should be imagining for the "humanity had survived (or better)" case.

Comment by Unnamed on Lying Alignment Chart · 2023-11-29T21:47:45.404Z · LW · GW

The time on a clock is pretty close to being a denotative statement.

Comment by Unnamed on Lying Alignment Chart · 2023-11-29T21:41:51.433Z · LW · GW

Batesian mimicry is optimized to be misleading, "I'll get to it tomorrow" is denotatively false, and "I did not have sexual relations with that woman" is ambiguous as to its conscious intent to be denotatively false.

Comment by Unnamed on Lying Alignment Chart · 2023-11-29T21:19:36.285Z · LW · GW

Structure Rebel, Content Purist: people who disagree with me are lying (unless they say "I think that", "My view is", or similar)

Structure Rebel, Content Neutral: people who disagree with me are lying even when they say "I think that", "My view is", or similar

Structure Rebel, Content Rebel: trying to unlock the front door with my back door key is a lie

Comment by Unnamed on Neither Copernicus, Galileo, nor Kepler had proof · 2023-11-23T23:39:34.337Z · LW · GW

How do you get a geocentric model with ellipses? Venus clearly does not go in an ellipse around the Earth. Did Riccioli just add a bunch of epicycles to the ellipses?

Googling... oh, it was a Tychonic model, where Venus orbits the sun in an ellipse (in agreement with Kepler), but the sun orbits the Earth.

Kepler's ellipses wiped out the fully geocentric models where all the planets orbit around the Earth, because modeling their orbits around the Earth still required a bunch of epicycles and such, while modeling their orbits around the sun now involved a simple ellipse rather than just slightly fewer epicycles. But they didn't straightforwardly, on their own, wipe out the geoheliocentric/Tychonic models where most planets orbit the sun but the sun orbits the Earth.

Comment by Unnamed on When did Eliezer Yudkowsky change his mind about neural networks? · 2023-11-15T23:43:22.194Z · LW · GW

Here is Yudkowsky (2008), Artificial Intelligence as a Positive and Negative Factor in Global Risk:

Friendly AI is not a module you can instantly invent at the exact moment when it is first needed, and then bolt on to an existing, polished design which is otherwise completely unchanged.

The field of AI has techniques, such as neural networks and evolutionary programming, which have grown in power with the slow tweaking of decades. But neural networks are opaque—the user has no idea how the neural net is making its decisions—and cannot easily be rendered unopaque; the people who invented and polished neural networks were not thinking about the long-term problems of Friendly AI. Evolutionary programming (EP) is stochastic, and does not precisely preserve the optimization target in the generated code; EP gives you code that does what you ask, most of the time, under the tested circumstances, but the code may also do something else on the side. EP is a powerful, still maturing technique that is intrinsically unsuited to the demands of Friendly AI. Friendly AI, as I have proposed it, requires repeated cycles of recursive self-improvement that precisely preserve a stable optimization target.

The most powerful current AI techniques, as they were developed and then polished and improved over time, have basic incompatibilities with the requirements of Friendly AI as I currently see them. The Y2K problem—which proved very expensive to fix, though not global-catastrophic—analogously arose from failing to foresee tomorrow’s design requirements. The nightmare scenario is that we find ourselves stuck with a catalog of mature, powerful, publicly available AI techniques which combine to yield non-Friendly AI, but which cannot be used to build Friendly AI without redoing the last three decades of AI work from scratch.

Comment by Unnamed on Thinking By The Clock · 2023-11-08T19:40:21.095Z · LW · GW

Also, chapter 25 of HPMOR is from 2010, which is before CFAR.

Comment by Unnamed on Thinking By The Clock · 2023-11-08T19:01:10.081Z · LW · GW

It came up in the sequences, e.g. here, here, and here.

Comment by Unnamed on Mirror, Mirror on the Wall: How Do Forecasters Fare by Their Own Call? · 2023-11-08T08:42:23.357Z · LW · GW

There are a couple errors in your table of interpretations. For "actual score = subjective expected", the second half of the interpretation "prediction = 0.5 or prediction = true probability" got put on a new line in the "Comparison score" column instead of staying together in the "Interpretation" column, and similarly for the next one.

Comment by Unnamed on Mirror, Mirror on the Wall: How Do Forecasters Fare by Their Own Call? · 2023-11-07T20:02:20.587Z · LW · GW

I posted a brainstorm of possible forecasting metrics a while back, which you might be interested in. It included one (which I called "Points Relative to Your Expectation") that involved comparing a forecaster's (Brier or other) score with the score that they'd expect to get based on their probability.
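
For reference, my reading of that metric as a minimal sketch (the exact scoring and sign conventions in the linked brainstorm may differ): a forecast of p implies an expected Brier score of p(1-p)^2 + (1-p)p^2 = p(1-p), and the metric compares the realized score against that self-expectation.

```python
def brier(p: float, outcome: bool) -> float:
    """Brier score for a single binary forecast (lower is better)."""
    return (p - float(outcome)) ** 2

def expected_brier(p: float) -> float:
    """Score the forecaster expects, taking their own probability p at face value."""
    return p * (1 - p) ** 2 + (1 - p) * p ** 2   # simplifies to p * (1 - p)

def points_relative_to_expectation(p: float, outcome: bool) -> float:
    """Positive when the forecaster did better than they themselves expected."""
    return expected_brier(p) - brier(p, outcome)

print(points_relative_to_expectation(0.8, True))   # 0.16 - 0.04 = 0.12
print(points_relative_to_expectation(0.8, False))  # 0.16 - 0.64 = -0.48
```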

Comment by Unnamed on AI #34: Chipping Away at Chip Exports · 2023-10-19T18:57:35.931Z · LW · GW

Sign error: "Tyler Cowen fires back that not only is this inevitable"

--> "not inevitable"

Comment by Unnamed on [deleted post] 2023-10-02T23:58:09.557Z

I don't understand why it took so long for anyone to seriously consider the possibility that orbits are ellipses.

It seems that a circle is the simplest, most natural, most elegant hypothesis for the shape of an orbit, and an ellipse is the second-most simple/natural/elegant hypothesis. But instead of checking if an ellipse fit the data, everyone settled for 'a lot like a circle, but you have to include a bunch of fudge factors to match the observational data'.

Apparently Kepler had a similar view. April 1605 is when he figured out that the orbit of Mars was an ellipse around the sun; two years earlier, when he was already in the process of trying to figure out what sort of ovalish shape fit the data, he said that he had considered and rejected the ellipse hypothesis because if the answer was that simple then someone else would've figured it out already. This incorrect inadequacy analysis is from a July 1603 letter that Kepler wrote to David Fabricius: "I lack only a knowledge of the geometric generation of the oval or face-shaped curve. [...] If the figure were a perfect ellipse, then Archimedes and Apollonius would be enough."

I could make some guesses about why it didn't happen sooner (e.g. the fact that it happened right after Brahe collected his data suggests that poor data quality was a hindrance), but it feels pretty speculative. I wonder if there have been / could be more quantitative analyses of this question, e.g. do we have the data sets that the ancient Greeks used to fit their models, and can we see how well ellipses fit those data sets?

Comment by Unnamed on Linkpost: They Studied Dishonesty. Was Their Work a Lie? · 2023-10-02T18:40:09.560Z · LW · GW

The suspects have since retreated to attempting to sue datacolada (the investigators).

It's just one of the suspects (Gino) who is suing, right?

Comment by Unnamed on Aumann-agreement is common · 2023-09-30T01:35:37.302Z · LW · GW

I like the examples of quickly-resolved disagreements.

They don't seem that Aumannian to me. They are situations where one person's information is a subset of the other person's information, and they can be quickly resolved by having the less-informed person adopt the belief of the more-informed person. That's a special case of Aumann situations which is much easier to resolve in practice than the general case.

Aumann-in-general involves reaching agreement even between people who have different pieces of relevant information, where the shared posterior after the conversation is different from either person's belief at the start of the conversation. The challenge is: can they use all the information they have to reach the best possible belief, when that information is spread across multiple people? That's much less of a challenge when one person starts out with the full combined set of information.

(Note: my understanding of Aumann's theorem is mostly secondhand, from posts like this and this.)

Comment by Unnamed on Bids To Defer On Value Judgements · 2023-09-29T19:59:11.320Z · LW · GW

I initially misread the title because "defer judgment" is often used to mean putting off the judgment until later. But here you meant "defer" as in taking on the other person's value judgment as your own (and were arguing against that), rather than as waiting to make the value judgment later (and arguing for that).

I guess "defer" is an autoantonym, in some contexts. When someone makes a claim which you aren’t in a good position to evaluate immediately, then to decide what you think about it you can either defer (to them) or defer (till later).

Comment by Unnamed on Precision of Sets of Forecasts · 2023-09-22T22:20:52.278Z · LW · GW

I don't understand what question you're trying to answer here. e.g., I could imagine questions of the form "If I see someone forecast 39.845%, should I treat this nearly the same as if they had forecast 40%?" Most mathematical ways of dealing with forecasts do (e.g. scoring rules, formulas for aggregating multiple people's forecasts, decision rules based on EV). But that doesn't seem like what you're interested in here.

Also not sure if/how these graphs differ from what they'd look like if they were based on forecasts which were perfectly calibrated with arbitrary precision.

Also, the description of Omar sounds impossible. If (e.g.) his 25% forecasts come true more often than his 5% forecasts, then his 15% forecasts must differ from at least one of those two (though perhaps you could have too little data to be able to tell).

Comment by Unnamed on Would You Work Harder In The Least Convenient Possible World? · 2023-09-22T22:02:51.024Z · LW · GW

Multiple related problems with Alice's behavior (if we treat this as a real conversation):

  1. Interfering with Bob's boundaries/autonomy, not respecting the basic background framework where he gets to choose what he does with his life/money/etc.
  2. Jumping to conclusions about Bob, e.g., insisting that the good he's been doing is just for "warm fuzzies", or that Bob is lying
  3. Repeatedly shifting her motive for being in the conversation / her claim about the purpose of the conversation (e.g., from trying to help Bob act on his values, to "if you’re lying then it’s my business", to what sorts of people should be accepted in the rationalist community) 
  4. Cutting off conversational threads once Bob starts engaging with them to jump to new threads, in ways that are disorienting and let her stay on the attack, and don't leave Bob space to engage with the things that have already come up

These aren't merely impolite, they're bad things to do, especially when combined and repeated in rapid succession. It seems like an assault on Bob's ability to orient & think for himself about himself.

Comment by Unnamed on Newcomb Variant · 2023-09-04T18:23:23.287Z · LW · GW

The spoiler tag only works for me if I type it, not if I copy-paste it.

In this scenario, Omega is described as predicting your actions and choosing how much money goes in the boxes, which is different from HPMOR where timelines settle into a self-consistent equilibrium in more complicated ways. And with this plan it should be straightforward for Omega to carry out its usual process, since you're not trying to generate a contradiction or force events through a weird narrow path, you're just entangling your box-opening decision with an observation about the world which you haven't made yet, but which omniscient Omega should know. For Omega, this is no different than you deciding that your second-box-opening decision will be contingent on the results of yesterday's Yankees game, which you haven't heard yet but which you will look up on your phone after you open the first box.

Comment by Unnamed on Newcomb Variant · 2023-08-29T07:43:23.651Z · LW · GW

Extra credit

If you make your decision about whether to open the second box contingent on some fact about the future, then the contents of the first box let you predict that fact in advance. e.g., If the New York Yankees win their baseball game today then I will open the second box after their game, and if they do not win then I will not open the second box. Then if I open the first box and see $100, I can bet a lot of money on the Yankees to win, and if I see $0 then I can bet against the Yankees winning. If Omega is less patient then you'll need to pick something that resolves quicker.

Comment by Unnamed on A Theory of Laughter · 2023-08-23T22:19:14.020Z · LW · GW

Actively searching for counterexamples to the post's danger+safety theory... What about when people who are in love with each other get giggly around each other? (As an example of laughter, not humor.) I mean the sickly-sweet they're-in-their-own-world-together kind of giggling, not the nervous unsure-of-oneself laughter. Doesn't seem like there's danger.

Similarly, people laughing more when they're on drugs such as weed. (Though that seems more open to alternative physiological explanations.)

Do the stoners and giggly lovers fit the sketch of a theory that I'm maybe building towards? There is playing around with frames in both of them, I think. People on drugs see things fresh, or from a different perspective from usual; lovers-in-their-own-world have stepped out of the default social framework for interacting with the social world into their own little world. There is definitely syncing on frames with the lovers, though often not with the people on drugs. And there's an importance thing going on in the relationship, and a perception-of-importance thing with drug use. So it mostly fits, with an exception on the syncing bit.

Comment by Unnamed on A Theory of Laughter · 2023-08-23T22:16:01.343Z · LW · GW

I think I'm much more of an incongruity theorist than this (though haven't read the literature). More specifically: Humor generally involves playing around with frames / scripts, often crashing different frames together in interesting ways. Laughter involves distancing yourself from a frame you're engaging with, so that you're kinda acting with that frame but not fully immersed in it.

This fits the playfighting evolutionary context, which is engaging largely within the "fight" frame while keeping some detachment from it.

There is also a thing about social syncing on a shared frame, when the frame isn't the obvious default (we both understand that this is not a real fight, just a play fight, hahaha). Which seems related to how "getting the joke" is such a central aspect of humor. And to how humor is involved in social bonding.

My experience of coming up with in-context jokes matches this theory much more than your theory. It generally starts by noticing some alternative interpretation or frame on something that just happened (e.g. something that someone said), then gets refined into a specific thing to say by some combination of trying to find the most interesting interplay / biggest tension between the two frames, and trying to best bring in the new frame, and fitting it all to the people around me and what they'll understand & like. I'm not trying to track the amount of danger or the amount of safety. (Though maybe sometimes I'm tracking something like the level of physiological arousal - at some point you swapped that in for "danger" and it does resonate more.)

For a concrete example to talk about, I looked through recent SMBC comics and picked out the one I found funniest - this one. It is a good example of crashing two frames together, the Disneyfied frame on wild animals interacting with humans and a more realistic one, brought together by a picture that can be seen from either viewpoint. The phrase "tickborne diseases" really makes the realistic frame pop. Though there's also definitely some danger vs safety stuff happening here, so it's not a counterexample to your theory.

This other recent SMBC is also funny to me, and also has the playing around with frames thing without any obvious danger. So maybe is a counterexample? Though not sure if trying to find counterexamples is an important exercise; your theory seems more like it's incomplete than like it's totally wrong.

Trying my own hand at theorizing about this danger/arousal/whatever thing... Seems like there's something about the content (at least one of the frames) being important to the person. So danger, sex, taboo things. Pulling on some sort of relevance / importance / salience system in the brain.

Comment by Unnamed on Viliam's Shortform · 2023-08-20T21:28:42.051Z · LW · GW

Rationalists: If you write your bottom line first, it doesn't matter what clever arguments you write above it, the conclusion is completely useless as evidence.

The classic take is that once you've written your bottom line, then any further clever arguments that you make up afterwards won't influence the entanglement between your conclusion and reality. So: "Your effectiveness as a rationalist is determined by whichever algorithm actually writes the bottom line of your thoughts."

That is not saying that "the conclusion is completely useless as evidence."