

Comment by Ericf on I read “White Fragility” so you don’t have to (but maybe you should) · 2021-09-07T18:20:12.984Z · LW · GW

As of August 2021 in the USA, "the right hand is beating down the black man" is an accurate (if metaphorical) statement about the territory.

What White Fragility (and many other sources) are saying is that the people who have power need to first use that power to stop the beatings. And it helps to note who the victims are, because it is ~5x more efficient to focus on the ~20% of the population that is being "beaten down" than to make race-neutral changes.

Comment by Ericf on A lost 80s/90s metaphor: playing the demo · 2021-09-03T11:01:10.439Z · LW · GW

I, too, have seen this idea referred to as "leading the parade" - by my boomer-gen parents.

I didn't realize other people thought they were exerting control when they were standing in front of a demo and pressing buttons - I would just be trying to follow along with the animation to make it look like I was playing. No comment on why I thought that was a good thing to do.

Comment by Ericf on Introduction to Reducing Goodhart · 2021-08-27T14:05:22.613Z · LW · GW

You know, I feel like trying to avoid Goodhart divergences may be neglecting the underlying principle/agent alignment problem in pursuit of better results on one specific metric.

Comment by Ericf on 18 possible meanings of "I Like Red" · 2021-08-24T23:58:05.542Z · LW · GW

Atheists say "God bless you" to other Atheists and nobody bats an eye or questions their disbelief. People say "f u" all the time without any expectation of a difficult anatomical act. Some phrases are just arbitrary mouth noises that signal membership in "the tribe of people who use that phrase."

Comment by Ericf on 18 possible meanings of "I Like Red" · 2021-08-24T13:54:32.132Z · LW · GW

5, 6, 7, 9, 10, 11, and 12 are all variations on the same theme of "I want to be associated with a particular sub-set of humans." That is Simulacra level 3 behavior, and I don't think they really count as separate meanings.

8 (where the mouth noises "I like red" are just a thing our tribe does, like "gesundheit") is a separate "meaning" from that (and is kind of a wrap-around Level 1 simulacrum: you are accurately stating that you are a member of the tribe, and it is common knowledge that the mouth noise "I like red" carries no information relating to the speaker's opinions about "red")

Comment by Ericf on 18 possible meanings of "I Like Red" · 2021-08-24T13:44:28.280Z · LW · GW

Multiply all of the above by all the possible definitions of "like" and "red" and any context relevant counterfactuals.

For example:

  1. The speaker could have said "love" instead of "like," so they don't love red. And every other point on the spectrum.
  2. "Like" can also mean "am similar to" (not grammatically correct usage here... but that's a whole 'nother can of worms)
  3. Red is a color, but could also be referring to a person (usually one with red hair)
  4. There may be other options, and "I like red" is expressing an ordinal preference among them.

Also, too, maybe the speaker actually said "I, like, read" meaning that they viewed written material in a casual way and derived meaning from it, and it was mis-heard.

Comment by Ericf on A Better Time until Sunburn Calculator · 2021-08-18T13:10:58.314Z · LW · GW

Does the UV index already account for latitude effects? It is much quicker to burn in San Diego than in New York, even at the same time of year.

Comment by Ericf on Covid 8/12: The Worst Is Over · 2021-08-13T13:38:28.307Z · LW · GW

There are around 250,000 schools in the US. So that is a 0.01% chance of an event at your school each year, with little evidence that the recommended active-shooter behaviors actually work.
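
For what it's worth, the implied arithmetic (the ~25 events per year is an assumed figure, chosen only to match the stated 0.01%):

```python
# Rough per-school annual rate: an assumed ~25 active-shooter events per
# year spread across ~250,000 US schools (the event count is a guess).
events_per_year = 25
schools = 250_000
print(f"{events_per_year / schools:.2%}")  # 0.01%
```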

Comment by Ericf on Predictions about the state of crypto in ten years · 2021-08-08T17:31:53.046Z · LW · GW

Seconded. After the header "here are my claims" I read through two scroll-downs before coming to something phrased as a claim (rather than background assumption)

Comment by Ericf on Handicapping competitive games · 2021-07-22T13:07:56.292Z · LW · GW

Re-stating your conclusion: To apply a handicap, you can change one (or more) of the following:

  1. The starting conditions
  2. The amount of out-of-game resources each player gets
  3. The ending victory point count

Taking the example of the 100 yard dash:

  1. Give one player a head start
  2. One player has less oxygen to use (eg by doing 50 jumping jacks right before the race)
  3. Add a fixed number of seconds to one player's time

Or the example of Magic: The Gathering:

  1. Players have different decks
  2. One player has to do a distracting thing while playing (eg, a second game of Magic with a 3rd player)
  3. Play first-to-N wins, with different Ns.

Or you could change the game rules to something else, which is equivalent to playing a different (and hopefully more balanced) game.

Comment by Ericf on My Marriage Vows · 2021-07-22T01:35:06.510Z · LW · GW

Committing to a decision algorithm now implies that you expect to do worse in the future, even though future you will have more information and experience and, as you noted, potentially a different utility function. And, as a practical matter, are you even capable of making decisions as if you were yourself in the past?

Comment by Ericf on My Marriage Vows · 2021-07-21T13:12:53.371Z · LW · GW

You and your spouse will each and together be different people 10 years from now. It will be impossible and undesirable to use "[the interpretation] which my [spouse] and I would agree on at the time of our wedding"

Comment by Ericf on The Mountaineer's Fallacy · 2021-07-21T11:19:14.380Z · LW · GW

Aside from the logistics issues of getting the rocket up there, the top of Everest is actually a great place to launch from. Less gravity, less air resistance, and reasonably close to the Equator (about 28° - the same latitude as Florida).

Comment by Ericf on Improving capital gains taxes · 2021-07-20T15:28:49.333Z · LW · GW

You are correct here. If this policy were to be actually made into a law, the baseline rate of return would be hotly debated, and would need to be defined in relation to some sort of metric.

It would likely end up being the retrospective 10-year T-bill rate, or some other minimal-risk rate of return (although multiple independent sources of data, averaged together, would be needed to avoid manipulation of the metric - eg, if you just take the 10-year T-bill rate, all the big money can avoid that investment, thereby driving up the rate and giving them a huge tax advantage on their bitcoins or foreign bonds or wherever the money actually went).

Comment by Ericf on Happy paths and the planning fallacy · 2021-07-20T12:35:43.234Z · LW · GW

The 90th percentile pessimistic estimate is almost always "this feature is never completed."

Comment by Ericf on The Mountaineer's Fallacy · 2021-07-18T15:26:13.561Z · LW · GW

Kind of an extreme version of getting stuck at a local maximum?

Comment by Ericf on What are some examples from history where a scientific theory predicted a significant experimental observation in advance? · 2021-07-17T17:13:47.507Z · LW · GW

Newton's equations predict that objects fall at the same rate in the absence of atmosphere. This was confirmed experimentally on Earth once vacuum chambers were constructed, and dramatically on the Moon with a feather and a hammer.

Comment by Ericf on Fixing the arbitrariness of game depth · 2021-07-17T17:03:48.496Z · LW · GW

I don't think that solves the problem, but a good thought.

If you model each person as having two skill levels (min and max effort) and then look at (highest max - lowest min) / (median (max - min)) you end up with the "deepest" games being things where the inter-player differences dwarf the intra-player ones. But that also includes games like "who is the tallest?" Or situations like "who spent the most $ on their Pokémon CCG deck?"
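
That metric can be sketched in a few lines; all the skill numbers below are invented for illustration.

```python
# "Depth" per the comment: spread between the best player's max-effort
# ceiling and the worst player's min-effort floor, divided by the median
# intra-player (max - min) spread.
from statistics import median

def depth(players):
    """players: list of (min_effort_skill, max_effort_skill) pairs."""
    highest_max = max(mx for _, mx in players)
    lowest_min = min(mn for mn, _ in players)
    return (highest_max - lowest_min) / median(mx - mn for mn, mx in players)

# Inter-player gaps dwarf intra-player ones -> looks "deep":
print(depth([(800, 900), (1500, 1600), (2600, 2700)]))  # 19.0
# Mostly luck/variance -> shallow:
print(depth([(40, 60), (45, 65), (50, 70)]))  # 1.5
```

Note that, as the comment says, "who is the tallest?" also scores high here: everyone's min and max coincide, so the intra-player spread is tiny.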

Comment by Ericf on The Bullwhip Effect · 2021-07-14T17:38:50.386Z · LW · GW

But "people who are taking a training where that game would be used" are much closer.

And note that I didn't say I saw the solution when presented with the game rules alone - only when I had finished reading through to the conclusion, and then thought about it. Which is a much lower bar.

Comment by Ericf on The Bullwhip Effect · 2021-07-14T15:14:38.538Z · LW · GW

It's just a prediction, based on the first time I read about that "game" and thought about what I would do in it.

Comment by Ericf on The Bullwhip Effect · 2021-07-14T00:57:56.795Z · LW · GW

If anyone knows about the simulation, and they play to win, they will implement a dampening effect at whatever stage they are (assuming they don't just say something out loud before things start, and let everyone do it). Depending on the exact mechanics of the game, there are different ways to smooth out demand fluctuations at minimal cost.
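
As a concrete sketch of one dampening scheme (the demand series and smoothing factor are made up): instead of echoing each order shock straight upstream, a stage can order an exponential moving average of incoming demand.

```python
# Beer-game-style stage: naive stages echo demand upstream; a dampening
# stage orders an exponential moving average of demand instead.
def upstream_orders(demand, smooth=None):
    orders, ema = [], demand[0]
    for d in demand:
        if smooth is None:
            orders.append(d)                    # echo the shock as-is
        else:
            ema = smooth * d + (1 - smooth) * ema
            orders.append(ema)                  # smoothed order
    return orders

demand = [4, 4, 8, 4, 4, 4]                     # one-period demand spike
naive = upstream_orders(demand)
damped = upstream_orders(demand, smooth=0.3)
print(max(naive) - min(naive))                  # full swing of 4 goes upstream
print(round(max(damped) - min(damped), 2))      # only 1.2 goes upstream
```

The cost of the smoothing is holding a bit of buffer inventory while the moving average catches up; whether that's "minimal" depends on the game's holding and backlog penalties.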

Comment by Ericf on The Bullwhip Effect · 2021-07-13T22:01:38.640Z · LW · GW

That game doesn't work if any participant has heard of it, or the effect, just FYI

Comment by Ericf on For reducing caffeine dependence, does daily maximum matter more, or does total daily intake matter more? · 2021-07-09T18:44:52.013Z · LW · GW

Adenosine builds up over the course of the day.

Your brain makes more receptors when there is more of the applicable agent around.

So, if you have morning levels of adenosine + 120 mg caffeine in your blood in the morning, and afternoon levels of adenosine + 80 mg caffeine + residual in the afternoon, that is going to keep the "need for receptors" high throughout the day.

Then, having caffeine in your system as you sleep will keep more receptors around until the morning than happens for people without caffeine in their system.
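
A back-of-envelope check on those residuals, assuming a ~5-hour caffeine half-life (a commonly cited average; individual clearance varies a lot, and the dose timings are just illustrations):

```python
# Exponential decay of caffeine with an assumed ~5 hour half-life.
HALF_LIFE_H = 5.0

def remaining(dose_mg, hours_since_dose):
    """mg of a dose still circulating after the given number of hours."""
    return dose_mg * 0.5 ** (hours_since_dose / HALF_LIFE_H)

print(round(remaining(120, 8)))   # 120 mg at 8am -> ~40 mg left at 4pm
print(round(remaining(120, 16) + remaining(80, 10)))  # plus 80 mg at 2pm -> ~33 mg at midnight
```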

Comment by Ericf on For reducing caffeine dependence, does daily maximum matter more, or does total daily intake matter more? · 2021-07-09T17:12:22.235Z · LW · GW


So, we see three vectors of caffeine dependence:

  1. You aren't getting enough sleep, and your adenosine level at 6 am is increasing from day to day, requiring a higher concentration of caffeine to out-compete it.
  2. Your brain makes more adenosine receptors to accommodate the large concentrations of both caffeine and adenosine in your system, so you again need a higher concentration of caffeine molecules to keep your brain from getting enough adenosine signals. This theory is supported by at least one paper.
  3. Your rate of adenosine production AND destruction goes up, so you need a higher dose of caffeine to compensate for the larger swings your body is generating. This is just an alternate mechanism for "building up resistance" and would have the same remedies as #2

For case #1, you just need to get more sleep. This could be a typical pattern for people who drink coffee T-F and then sleep in on the weekend - their body resets and a little boost is sufficient.

For case #2, it's about the concentration of caffeine + adenosine in the bloodstream, over time (since it takes ?hours? to form or deconstruct adenosine receptors - regardless it's not instantaneous). Taking more caffeine in the afternoon is counterproductive, since it is maintaining the same high blood concentrations of active molecules. To reduce resistance you need to give your body time at low levels of stimulator molecules so it gets rid of the excess receptor sites.

In conclusion - don't use caffeine in the afternoon if you are trying to reduce your need for it.

Comment by Ericf on Should government set interest rates? · 2021-07-09T15:05:46.208Z · LW · GW

Considering the work of Krugman and others on Optimal Currency Areas (and taking the lessons of the Euro crisis into account), it looks like being able to depreciate currency in a stable way in a limited area is a useful tool. I would expect the current system to continue, with a slow transition towards fewer currencies as regions tie closer together (eg, if N. and S. Korea unify they won't keep separate currencies).

Even post scarcity there will still need to be a unit of account to prevent trolling, so I don't see that replacing currencies.

Comment by Ericf on Improving capital gains taxes · 2021-07-09T14:55:01.020Z · LW · GW

tl;dr - we (the economy) currently spend too much labor finding marginal investments because that activity is under-taxed, so less investment would be a good thing.


If I have $100,000 in a savings account, someone could spend X hours to invest that money and over a time t double it to $200,000. That value needs to be divided among:

  1. The government (as taxes)
  2. The X hours of work
  3. The time t of capital use (which also compensates capital for the risk)

The key fulcrum there is $/X - people won't spend time finding good investments unless they can make enough $ from that to justify not spending the time doing something else.

If it is easy for people to find things to invest in, they will pay more for capital, and returns for t go up. If it is hard (or there is just too much capital) then returns to t go down.  

When taxes go up, that reduces the pool available for X (and paying for t). Which would make marginal investments not happen. Which reduces the demand for capital, so first the return on capital will reduce to zero profit (after adjusting for inflation and risk), and then marginal investments won't be discovered and funded.

Now, note how this interacts with the proposed policy:

  1. Taxes on capital over time are set at 0. We are only taxing the excess returns above "safe" and refunding for losses as we go, so there is no tax collected on the portion of the profit allocated to capital
  2. The tax rate on the portion allocated to labor is set equal to that on other labor. Currently, spending time to find investment opportunities has a lower tax rate than spending time working as a dental hygienist. That is a distortion that is causing people to spend more time setting up tax shelters / analyzing stocks instead of doing other things that would also be productive. Plus doing things like shifting payments to executives and investment managers to the form of capital gains as a pure tax dodge.
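
A minimal sketch of point 1, with a 2% "safe" baseline rate assumed purely for illustration (the actual baseline would be debated, per the earlier comment about T-bill rates):

```python
# Tax base under the proposal: only the return above a compounding
# "safe" baseline is taxed (negative values would be refunded).
def excess_gain(principal, sale_price, years, safe_rate=0.02):
    baseline_growth = principal * ((1 + safe_rate) ** years - 1)
    return (sale_price - principal) - baseline_growth

# $100,000 doubling over 10 years against the assumed 2% safe rate:
print(round(excess_gain(100_000, 200_000, 10)))  # 78101 of the gain is taxable
```
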

Comment by Ericf on Agency and the unreliable autonomous car · 2021-07-09T14:17:31.033Z · LW · GW

Ah, so the error is back here:

3. Make a list of all possible logical sentences of the form IF route 1 is taken THEN I will arrive at such-and-such a time AND IF route 2 is taken THEN I will arrive at such-and-such a time AND ...

Because the algorithm was created without including the additional assumption (used later on in the "proof") that if route 1 is taken, then route 2 would NOT be taken (and vice versa). If you include only that additional piece of information, then the statements generated in step 3 are "logically" equivalent to:

"IF route 1 is not taken AND IF route 2 is taken THEN I will arrive at Time" (or "IF route 1 is taken I will arrive at Time AND route 2 is not taken"). 

And that (again from our route 1 XOR route 2 assumption) is equivalent to a list of:

IF route # is taken, THEN I will arrive at time

for all possible combinations of route and time, with no conjunctions at all.

Comment by Ericf on Agency and the unreliable autonomous car · 2021-07-08T18:52:26.925Z · LW · GW

This was bothering me, but I think I found the logical flaw. Quote:

NOT (route FAST is taken)

And then from this it deduced

IF route FAST is taken THEN I will arrive at 3pm

This, I'm afraid dear reader, is also permitted by the laws of logic.

The statements "not P" and "not P OR Q" might have the same truth value, but they are not logically equivalent. At this point in the proof, saying "THEN I will arrive at 3pm" is arbitrary, and could have been "THEN pigs will fly." I think that's one of the paradoxes of material implication (vacuous truth), but I could be wrong about that term.

Comment by Ericf on In-group loyalty is social cement · 2021-07-06T13:25:31.646Z · LW · GW

While the salespeople cannot unilaterally recommend other firms products, the firm as a whole can have a strategy of recommending the best product, and use that reputation to land more customers (the Miracle on 34th Street / Progressive insurance model)

Comment by Ericf on Why did we wait so long for the threshing machine? · 2021-07-02T12:51:32.679Z · LW · GW

Previous standards of quality (that can be achieved by manual labor) tend to set a quality bar that machines have to meet before they are adopted. People don't like reducing quality, even if the efficiency gain theoretically makes up for it. At least, that's how it seemed to be in the early days of mechanization.

That's how it still is, at least in some industries. People hate going backwards in any metric (or software feature)

Comment by Ericf on Why did we wait so long for the threshing machine? · 2021-06-29T22:21:01.768Z · LW · GW

@SarahTaber_bww would mention that agriculture has been uniquely resistant to innovations for thousands of years. The owners don't care about the efficiency of the operation, and the slaves/peons don't have any power. Or the "small farmer" has no capital, and is just burning time until they go bankrupt in a drought, blight, or injury.

Comment by Ericf on What precautions should fully-vaccinated people still be taking? · 2021-06-29T02:35:34.816Z · LW · GW

Don't French kiss people who are symptomatic and known to be infected.

Or, more reasonably, if you know someone is infected OR symptomatic avoid "Sharing their air."

Once you account for the low rate of community cases (if 1 in 10,000 people are infected, as is currently approximately the case in the vaccinated parts of the world), then having a close interaction with 100 people at a gathering of any size gives only about a 1% chance of even including an infected individual.
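
Running those numbers with the usual independence assumption (prevalence p, n close contacts):

```python
# P(at least one infected among n contacts) = 1 - (1 - p)^n
p, n = 1 / 10_000, 100
prob = 1 - (1 - p) ** n
print(round(prob, 4))  # 0.01, i.e. about a 1% chance
```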

Comment by Ericf on The Point of Trade · 2021-06-28T13:24:45.932Z · LW · GW

Investment is independent from efficiencies of scale. Example: given a supply of 1-foot ropes and a pair of scissors, producing one 1/2-foot rope takes the same amount of effort as producing two 1/2-foot ropes. Carving The David required immense human and other capital investment, but didn't have any economy of scale.

Comment by Ericf on The Point of Trade · 2021-06-28T13:05:27.030Z · LW · GW

The whole point of making simplified models (economic or otherwise) is to reflect some underlying truth in a more grokkable form. But, if you remove the load bearing ideas when making the model it doesn't provide any insight.

If all goods are perfect substitutes, then there is no trade. That's all your model is saying. And that's the same thing I was saying, though my previous post was less elegant about it. It doesn't matter what the production functions look like: the key factor is the perfect substitution on the demand side. And, as you said, redefining a Red point as 1/a Red points doesn't change that conclusion.

Comment by Ericf on The Point of Trade · 2021-06-26T17:53:15.877Z · LW · GW

Except, your example doesn't have comparative advantages because there is only one "good" available (points). There has to be some difference in value somewhere to have different goods.

And note the sleight of hand in the original post where Eliezer goes from "people like all goods the same" to "oh, but somehow people like laptops more than apples" - if everyone really did like all things equally, there would be no trade because having "a basket of apples" would be the same as having "one apple."

Comment by Ericf on The Point of Trade · 2021-06-26T06:32:53.524Z · LW · GW

Note that this world assumes away the fixed cost of living. In the real world, every person (even a computer simulated person) consumes and destroys some value to stay alive (either power lost to Entropy for a simulation, or food calories eaten and digested).

Also, too, that world doesn't have any diminishing marginal returns: somehow my optimum action is increasing whichever score I'm best at, with no variety to my actions at all. This doesn't model real preferences well, where a score of 101 Red + 1 Yellow + 1 Blue would never be worth the same as 1, 101, & 1, or as 51, 51, & 1. The very definition of things being different implies that they cannot be perfectly substituted for each other at all quantities.

If you relax either of those strange assumptions, you will see trade re-emerge.

Comment by Ericf on The Point of Trade · 2021-06-26T06:13:48.769Z · LW · GW

Didn't follow that link, but the conclusion is wrong. Youth should pursue their comparative advantage, which is closer to "what has the best (pay+pleasure)/effort ratio" than "what pays the most."

Comment by Ericf on Knowledge is not just precipitation of action · 2021-06-19T00:32:32.945Z · LW · GW

So, your proposed definition of knowledge is information that pays rent in the form of anticipated experiences?

Comment by Ericf on Open problem: how can we quantify player alignment in 2x2 normal-form games? · 2021-06-16T21:16:58.809Z · LW · GW

Does agency matter? There are 21 x 21 x 4 possible payoff matrices for a 2x2 game if we use ordinal payoffs. For the vast majority of them (all but about 7 x 7 x 4 of them), one or both players can make a decision without knowing or caring what the other player's payoffs are, and get the best possible result. Of the remaining 182 arrangements, 55 have exactly one box where both players get their #1 payoff (and, therefore, will easily select that as the equilibrium).

All the interesting choices happen in the other 128ish arrangements, 6/7 of which have the pattern of the preferred (1st and 1st, or 1st and 2nd) options being on a diagonal. The most interesting one (for the player picking the row, and getting the first payoff) is:

1 / (2, 3, or 4) ; 4 / (any)

2 / (any) ; 3 / (any)

The optimal strategy for any interesting layout will be a mixed strategy, with the % split dependent on the relative cardinal payoffs (which are generally not calculable, since they include reputation and other non-quantifiable effects).

Therefore, you would want to weight the quality of any particular result by the chance of that result being achieved (which also works for the degenerate cases where one box gets 100% of the results, or two perfectly equivalent boxes share that) 
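
To make the mixed-strategy point concrete: in a 2x2 game with no pure equilibrium, each player mixes so as to make the other indifferent. A sketch using matching pennies (since, as noted, real cardinal payoffs usually aren't available):

```python
# Row player's equilibrium mix: choose P(row 0) so the column player's
# expected payoff is equal for both columns.
def row_mix(col_payoffs):
    """col_payoffs[i][j] = column player's payoff at (row i, col j)."""
    (a, b), (c, d) = col_payoffs
    return (d - c) / (a - b - c + d)

# Matching pennies (column player's payoffs): the mix is 50/50.
print(row_mix([[-1, 1], [1, -1]]))  # 0.5
```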

Comment by Ericf on Open problem: how can we quantify player alignment in 2x2 normal-form games? · 2021-06-16T20:54:43.781Z · LW · GW

So, given this payoff matrix (where P1 picks a row and gets the first payout, P2 picks column and gets 2nd payout):

5 / 0 ; 5 / 100

0 / 100 ; 0 / 1

Would you say P1's action furthers the interest of player 2?

Would P2's action further the interest of player 1?

Where would you rank this game on the 0 - 1 scale?

Comment by Ericf on Open problem: how can we quantify player alignment in 2x2 normal-form games? · 2021-06-16T17:54:13.982Z · LW · GW

Correlation between outcomes, not within them. If both players prefer to be in the same box, they are aligned. As we add indifference and opposing choices, they become unaligned. In your example, both people have the exact same ordering of outcomes. In a classic PD, there is some mix. Totally unaligned (constant-sum) example:

0 / 2 ; 2 / 0

2 / 0 ; 0 / 2

Comment by Ericf on Open problem: how can we quantify player alignment in 2x2 normal-form games? · 2021-06-16T17:21:57.555Z · LW · GW

Tabooing "aligned" what property are you trying to map on a scale of "constant sum" to "common payoff"?

Comment by Ericf on Open problem: how can we quantify player alignment in 2x2 normal-form games? · 2021-06-16T17:16:53.544Z · LW · GW

Um... the definition of the normal form game you cited explicitly says that the payoffs are in the form of cardinal or ordinal utilities. Which is distinct from in-game payouts.

Also, too, it sounds like you agree that the strategy your counterparty uses can make a normal form game not count as a "stag hunt" or "prisoner's dilemma" or "dating game."

Comment by Ericf on Open problem: how can we quantify player alignment in 2x2 normal-form games? · 2021-06-16T15:55:07.729Z · LW · GW

It's a definitional thing. The definition of utility is "the thing people maximize." If you set up your 2x2 game to have utilities in the payout matrix, then by definition both actors will attempt to pick the box with the biggest number. If you set up your 2x2 game with direct payouts from the game that don't include psychic (eg "I just like picking the first option given") or reputational effects, then any concept of alignment is one of:

  1. assume the players are trying for the biggest number, how much will they be attempting to land on the same box?
  2. alignment is completely outside of the game, and is one of the features of the function that converts game payouts to global utility

You seem to be muddling those two, and wondering "how much will people attempt to land on the same box, taking into account all factors, but only defining the boxes in terms of game payouts." The answer there is "you can't." Because people (and computer programs) have wonky screwed up utility functions (eg (spoiler alert)

Comment by Ericf on Open problem: how can we quantify player alignment in 2x2 normal-form games? · 2021-06-16T15:31:38.719Z · LW · GW

Quote: Or maybe we're playing a game in which the stag hunt matrix describes some sort of payouts that are not exactly utilities. E.g., we're in a psychology experiment and the experimenter has shown us a 2x2 table telling us how many dollars we will get in various cases -- but maybe I'm a billionaire and literally don't care whether I get $1 or $10 and figure I might as well try to maximize your payout, or maybe you're a perfect altruist and (in the absence of any knowledge about our financial situations) you just want to maximize the total take, or maybe I'm actually evil and want you to do as badly as possible.


So, if the other player is "always cooperate" or "always defect" or any other method of determining results that doesn't correspond to the payouts in the matrix shown to you, then you aren't playing "prisoner's dilemma," because the utilities to player B are not dependent on what you do. In all these games, you should pick your strategy based on how you expect your counterparty to act, which might or might not include the "in game" incentives as influencers of their behavior.

Comment by Ericf on Open problem: how can we quantify player alignment in 2x2 normal-form games? · 2021-06-16T12:51:41.055Z · LW · GW

Quote: The function should probably be a function of player A's alignment with player B; for example, player A might always cooperate and player B might always defect. Then it seems reasonable to consider whether A is aligned with B (in some sense), while B is not aligned with A (they pursue their own payoff without regard for A's payoff).

That seems to be confused reasoning. "Cooperate" and "defect" are labels we apply to a 2x2 matrix sometimes, and applying those labels changes the payouts. If I get $1 or $5 for picking "A" and $0 or $3 for picking "B" depending on a coin flip, that leads me to a different choice than if A is labeled "defect" and B is labeled "cooperate" and the payout depends on another person, because I get psychic/reputational rewards for cooperating/defecting. (Which one is better depends on my peer group, but whichever it is, the story equity is worth much more than $5, so my choice is dominated by that, and the actual payout matrix is: pick S: 1000 util or 1001 util; pick T: 2 util or 2 util.)

None of which negates the original question of mapping the 8! possible arrangements of relative payouts in a 2x2 matrix game to some sort of linear scale.

Comment by Ericf on Shall we count the living or the dead? · 2021-06-14T18:21:26.695Z · LW · GW

Asking someone to watch a video is rude and filters your audience to "people with enough time to consume content slowly, and an environment that allows audio/streaming"

Comment by Ericf on Experiments with a random clock · 2021-06-14T00:30:52.125Z · LW · GW

Since this comment thread is apparently "share what you do to be on time" here's mine.

I consider it a test of estimation skills to arrive places exactly on time, so I get a little dopamine hit by arriving at the predicted moment. And I can set that target time according to the risk and importance of the event (ie, I aimed 5 minutes early for swim lessons yesterday, because I wasn't sure if the drive was 7 or 11 minutes long, and being late is bad; I aim 30 minutes early to catch a plane, since being late by even 1 minute is extremely costly; but when going to visit a single counterparty (grandma, a friend) I aim at the time suggested).

Comment by Ericf on Survey on AI existential risk scenarios · 2021-06-11T15:32:11.458Z · LW · GW

But the action needed to avoid/mitigate in those cases is very different, so it doesn't seem useful to get a feeling for "how far off of ideal are we likely to be" when that is composed of:

  1. What is the possible range of AI functionality (as constrained by physics)? - ie, what can we do?
  2. What is the range of desirable outcomes within that range? - ie, what should we do?
  3. How will politics, incumbent interests, etc. play out? - ie, what will we actually do?

Knowing that experts think we have a (say) 10% chance of hitting the ideal window says nothing about what an interested party should do to improve those chances. It could be "attempt to shut down all AI research" or "put more funding into AI research" or "it doesn't matter because the two majority cases are "General AI is impossible - 40%" and "General AI is inevitable and will wreck us - 50%""

Comment by Ericf on Bad names make you open the box · 2021-06-11T00:44:36.741Z · LW · GW

Saying "poor naming" instead of "bad names" would be clearer, since it wouldn't call up the idea of "bad names" = swear words.

Saying "look in" instead of "open" would also distance from the AI concept.