Posts

Conjecture: A Roadmap for Cognitive Software and A Humanist Future of AI 2024-12-02T13:28:57.977Z
The Compendium, A full argument about extinction risk from AGI 2024-10-31T12:01:51.714Z
"Epistemic range of motion" and LessWrong moderation 2023-11-27T21:58:40.834Z
There is no IQ for AI 2023-11-27T18:21:26.196Z
For Civilization and Against Niceness 2023-11-20T10:56:20.352Z
On Lies and Liars 2023-11-17T17:13:03.726Z
Lying is Cowardice, not Strategy 2023-10-24T13:24:25.450Z
Cognitive Emulation: A Naive AI Safety Proposal 2023-02-25T19:35:02.409Z
Christiano (ARC) and GA (Conjecture) Discuss Alignment Cruxes 2023-02-24T23:03:04.917Z
AGI in sight: our look at the game board 2023-02-18T22:17:44.364Z
Human decision processes are not well factored 2023-02-17T13:11:10.854Z
Assigning Praise and Blame: Decoupling Epistemology and Decision Theory 2023-01-27T18:16:43.025Z
Group-level Consequences of Psychological Problems 2023-01-19T09:27:13.920Z
Reification bias 2023-01-09T12:22:15.460Z
Mental acceptance and reflection 2022-12-22T14:32:15.589Z
Psychological Disorders and Problems 2022-12-12T18:15:49.333Z
Tradeoffs in complexity, abstraction, and generality 2022-12-12T15:55:19.064Z
Biases are engines of cognition 2022-11-30T16:47:58.318Z
The First Filter 2022-11-26T19:37:04.607Z
Conjecture Second Hiring Round 2022-11-23T17:11:42.524Z
Conjecture: a retrospective after 8 months of work 2022-11-23T17:10:23.510Z
Why are some problems Super Hard? 2022-08-24T17:58:41.031Z

Comments

Comment by Gabriel Alfour (gabriel-alfour-1) on Lying is Cowardice, not Strategy · 2023-10-24T15:22:34.570Z · LW · GW

By this logic, any instrumental action taken towards an altruistic goal would be "for personal gain".

I think you are making a genuine mistake, and that I could have been clearer.

There are instrumental actions that favour everyone (raising epistemic standards), and instrumental actions that favour you (making money).

The latter are for personal gain, regardless of your end goals.

 

Sorry for not getting deeper into it in this comment.  This is quite a vast topic.
I might instead write a longer post about the interactions of deontology & consequentialism, and egoism & altruism.

Comment by Gabriel Alfour (gabriel-alfour-1) on AGI in sight: our look at the game board · 2023-02-19T09:57:48.634Z · LW · GW

(I strongly upvoted the comment to signal-boost it, and to let people who agree express their agreement with it directly if they don't have any specific meta-level observation to share.)

Comment by Gabriel Alfour (gabriel-alfour-1) on Why are some problems Super Hard? · 2022-08-25T17:26:41.920Z · LW · GW

Suppose I take out a coin and flip it 100 times in front of your eyes, and it lands heads every time. Will you have no ability to predict how it lands the next 30 times? Will you need some special domain knowledge of coin aerodynamics to predict this?

  • Coin = problem
  • Flipping heads = not being solved
  • Flipping tails = being solved
  • More flips = more time passing

Then, yes. Because you have seen many other coins that started flipping tails at some point, and there is no easily discernible pattern.

By your interpretation, the prior you get from Solomonoff induction for that coin is basically "it will never flip tails". Whereas you do expect that most problems that have not been solved yet will be solved at some point, which means that you are incorporating more knowledge.
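
To make that gap concrete, here is a minimal sketch with made-up numbers: the rule of succession stands in for a prediction that uses only this coin's own record (a crude stand-in for the Solomonoff-style extrapolation above), while the pooled base rate stands in for the extra knowledge you get from having watched other coins; the 30-flip window and the 60% figure are pure assumptions for illustration.

```python
# Toy comparison: predicting from this coin's record alone vs. pooling the
# base rate over other coins (problems). All numbers are made up.

heads_so_far = 100   # flips observed so far, all heads (problem still unsolved)
future_flips = 30    # how much longer we watch (more time passing)

# Record-only estimate (rule of succession): P(tails on any one future flip).
p_tails_per_flip = 1 / (heads_so_far + 2)
p_some_tails_record_only = 1 - (1 - p_tails_per_flip) ** future_flips

# Pooled estimate: assumed fraction of comparable long-unsolved problems
# that did get solved within a similar additional window.
p_some_tails_base_rate = 0.60

print(f"record-only estimate: {p_some_tails_record_only:.2f}")  # ~0.26
print(f"assumed base rate:    {p_some_tails_base_rate:.2f}")    # 0.60
```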

"Experts have a precise model of the known subset of the concept-space of their domain,  and they can make vague high-level extrapolations on how that domain looks outside the known subset, and where in the wider domain various unsolved problems are located, and how distant they are from the known domain"

Experts from many different fields of Maths and CS have tried to tackle the Collatz Conjecture and the P vs NP problem. Most of them agree that those problems are far beyond what they set out to prove. I mostly agree with you that each expert's intuition vaguely tracks one specific dimension of the problem.
But any simplicity prior tells you that it is more likely for there to be a general reason why those problems are hard along all those dimensions, rather than a whole bunch of ad-hoc reasons.
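
As a minimal sketch of that simplicity-prior point (the bit counts below are invented for illustration): under a 2^-length prior, one shared reason is favoured over a collection of independent ad-hoc reasons as soon as it is shorter than their combined length.

```python
# Made-up description lengths, in bits.
general_reason_bits = 40            # one reason covering every failed approach
ad_hoc_reason_bits = [25, 30, 35]   # one separate reason per failed approach

# Under a simplicity prior, P(hypothesis) is proportional to 2^-description_length.
p_general = 2 ** -general_reason_bits
p_ad_hoc = 2 ** -sum(ad_hoc_reason_bits)

print(p_general / p_ad_hoc)  # 2^50: the single shared reason is vastly more likely a priori
```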

The way I see it, that's it. This statement isn't reducible to something more neat and simple. 

For any given difficult problem, you can walk up to an expert and ask them why it's considered hard, but the answers they give you won't have any unifying theme aside from that. It's all ad hoc.

What makes you think that? I see you repeating this, but I don't see why that would be the case.

Why would you think there's something else?

Good question, thanks! I tried to hint at this in the Original Post, but I think I should have been more explicit. I will make a second edit that incorporates the following.

The first reason is that many different approaches have been tried. In the case where only a couple of specific approaches have been tried, I expect the reason why the problem hasn't been solved to be ad hoc and tied to those specific approaches. The more approaches are tried, the more I expect a general reason that applies to all of them.

The second reason is that the problems are simple. For a complicated problem, I would expect the reason why it hasn't been solved to be ad hoc. I have much less of this expectation for simple problems.

Comment by Gabriel Alfour (gabriel-alfour-1) on Why are some problems Super Hard? · 2022-08-25T13:54:03.383Z · LW · GW

Isn't it about empirical evidence that these problems are hard, not "predictions"? They're considered hard because many people have tried to solve them for a long time and failed.

No, this is Preemption 1 in the Original Post.

"hard" doesn't mean "people have tried and failed", and you can only witness the latter after the fact. If you prefer, even if have empirical evidence for the problem being "level n hard" (people have tried up to level n), you;d still do not have empirical evidence for the problem being "level n+1 hard" (you'd need people to try more to state that if there's nothing you can say about it ahead of time). Ie, no predictive power.

An expert can estimate how hard a problem is by eyeballing how distant the abstractions needed to solve it feel from the known ones

Great! We're getting closer to what I care about.

Then what I am saying is that there is a heuristic that the experts are using to eyeball this, and I want to know what it is, starting with those 2 conjectures!

I am also saying that the more distant the abstractions needed to solve it feel from the known ones, the easier it should be to explain that eyeballing.

They're able to do this because they've developed strong intuitions for their professional domain: they roughly know what's possible, what's on the edge of the possible, and what's very much not.

Exactly, but those intuitions are implemented somehow. How?

Also, the more experts agree on a judgment, and the stronger their judgment, the easier you should expect it to be to explain that intuition.


But there's no objective property that makes these problems intrinsically hard, only subjectively hard from the point of view of our conceptual toolbox.

I was very confused when I read this. For instance, the part in bold is already reflected in the Original Post.

There are many theoretical problems that are considered to be obviously far far outside our problem solving ability.

If you prefer, interpret "intrinsically hard" as "having an intrinsic property that makes it subjectively hard for us". To model how that would look, consider the following setup:

The space of problems is just the real plane, and our ability to solve problems is modeled by a unit disk in the plane (a problem inside the disk is solved, and the closer a problem is to the disk, the easier it is to solve). Then the difficulty of a problem is subjective: it depends on where the disk is.

But say the disk is somewhere on the x-axis: then the intrinsic property of a problem having a high y coordinate makes it subjectively hard, wherever the disk sits.
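
A minimal sketch of this toy model (the specific point and disk centres are arbitrary): difficulty is the distance from a problem to the solvability disk, so it is subjective in that it depends on where the disk is, yet a large y coordinate keeps the problem hard for every disk centred on the x-axis.

```python
import math

def subjective_difficulty(problem, disk_center, radius=1.0):
    """Distance from the problem to the solvability disk: 0 means solved,
    larger means harder to solve."""
    px, py = problem
    cx, cy = disk_center
    return max(0.0, math.hypot(px - cx, py - cy) - radius)

problem = (0.0, 10.0)  # the intrinsic property: a high y coordinate
for cx in (-5.0, 0.0, 5.0):
    print(cx, round(subjective_difficulty(problem, (cx, 0.0)), 2))
# The difficulty varies with the disk's position (it is subjective), but it
# never drops below 9.0 for any disk centred on the x-axis.
```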


I'll make some edits to the post. I thought most of this was clear because of Preemption #1, but it was not.

Comment by Gabriel Alfour (gabriel-alfour-1) on Why are some problems Super Hard? · 2022-08-25T09:52:42.406Z · LW · GW

Are these perhaps boring, because the difficulty is well understood?

They are not boring; I am simply asking about a specific cluster of problems, and none of them belong to that cluster.

Comment by Gabriel Alfour (gabriel-alfour-1) on Why are some problems Super Hard? · 2022-08-25T09:49:21.847Z · LW · GW

I think part of the explanation is that we don't have a model for distance from success.  We have no clue if the researchers who've made serious attempts on these problems got us closer to an answer/proof, or if they just spun their wheels. 

This post is about experts in the fields of number theory and complexity theory claiming to have a clue about this. 
If you think "We have no clue", you likely think they are wrong, and I would be interested in knowing why.

I added more details in this comment, given that someone else already shared a similar thought.

Comment by Gabriel Alfour (gabriel-alfour-1) on Why are some problems Super Hard? · 2022-08-25T09:45:55.311Z · LW · GW

The post is about predictions made by experts in number theory and complexity theory.

If you think that this cannot be predicted, and that they are thus wrong about their predictions, I would be interested in knowing why.

Namely:

  • Do you have mechanistic / gears-level / inside-view reasons why the difficulty of problems cannot be predicted ahead of time, where you disagree with those experts?
  • Do you have empirical / outside-view reasons why those experts are badly calibrated?

Comment by Gabriel Alfour (gabriel-alfour-1) on Why are some problems Super Hard? · 2022-08-25T09:41:28.287Z · LW · GW

It is plausible that the actual Collatz system is one of these for our standard proof systems.

Why? Consider the following:

The Collatz Conjecture has a lot of structure to it:

  • It is false for the 3n-1 variant (see the sketch after this list)
  • Generalizations of it are undecidable
  • It is true empirically, up to 10^20
  • It is true statistically: a result of Terence Tao establishes that "almost all initial values of n eventually iterate to less than log(log(log(log(n))))" (or inverse Ackermann)
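
As a quick illustration of the first and third bullets, here is a minimal sketch (it only checks a tiny range, nothing like the 10^20 bound, and the step cap is an arbitrary safeguard):

```python
def step(n, b=1):
    """One step of the 3n+b map: b=1 is the Collatz map, b=-1 the 3n-1 variant."""
    return n // 2 if n % 2 == 0 else 3 * n + b

def reaches_one(n, b=1, max_steps=10_000):
    seen = set()
    while n != 1 and n not in seen and max_steps > 0:
        seen.add(n)
        n = step(n, b)
        max_steps -= 1
    return n == 1

# 3n+1: every starting value in this (tiny) range reaches 1.
assert all(reaches_one(n) for n in range(1, 100_000))

# 3n-1: the analogous claim is false; 5 enters the cycle
# 5 -> 14 -> 7 -> 20 -> 10 -> 5 and never reaches 1.
assert not reaches_one(5, b=-1)
```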

Additionally, if you look at the undecidable generalizations of the Collatz Conjecture, I expect that you will find them much stronger than the base system. And when people prove such results, they look for the weakest system such that you still have undecidability.

As a result of those considerations, I find it quite implausible that the Collatz Conjecture is undecidable, and I would be interested in what makes you think otherwise.


P vs NP is another candidate for an undecidable problem, dealing as it does with general behaviour of Turing machines that can run programs with rather weak bounds. There's a lot that we can't yet prove about general computation systems, and we have theorems that say we should expect there to be a lot that we can't ever prove.

My answer is similar here to Scott Aaronson's:

More to the point, if you believe P vs. NP is undecidable, then you need to answer the question: why does whatever intuition tells you that, not also tell you that the P versus EXP, NL versus PSPACE, MAEXP versus P/poly, TC0 versus AC0, and NEXP versus ACC questions are similarly undecidable? (In case you don’t know, those are five pairs of complexity classes that have been proved different from each other, sometimes using very sophisticated ideas.)

 

It would be unsurprising if quite a lot of the problems we can't solve after trying very hard are actually not solvable.

I am not sure whether you mean "unsurprising" literally or as an understatement.

If you mean that it would literally be merely unsurprising, I agree: this is in the realm of possibilities. But I would not call that a "good reason to expect it to be difficult to solve".

If you mean it as "very likely", so as to make it a good reason, I disagree. I expect that if we went through Wikipedia's list of mathematical conjectures solved since 1995 (which I believe is a good approximation of "problems that were really hard, where we got to find out how they turned out"), we'd find that most of them were actually resolved, rather than shown to be undecidable.

Comment by Gabriel Alfour (gabriel-alfour-1) on Why are some problems Super Hard? · 2022-08-25T07:44:21.055Z · LW · GW

Interesting.

A nice way to do such a post-mortem would be to actually ask the people who were there whether they thought the problem was Super Hard, why so, and how they updated after the solution was found.

Thanks!

Comment by Gabriel Alfour (gabriel-alfour-1) on Why are some problems Super Hard? · 2022-08-25T07:32:32.767Z · LW · GW

And Collatz is just a random-ass problem which doesn't seem to have any special structure to it.

The Collatz Conjecture has a lot of structure to it:

  • It is false for the 3n-1 variant
  • Generalizations of it are undecidable
  • It is true empirically, up to 10^20
  • It is true statistically: a result of Terence Tao establishes that "almost all initial values of n eventually iterate to less than log(log(log(log(n))))" (or inverse Ackermann)

In the case of Collatz, there might exist some special trick for it, but it's not any of the tricks we know.

I am not sure what counts or doesn't count as a trick for you. Is it only specific proof techniques, or entire fields (e.g., number theory) too?

P vs NP is asking "ok, but how hard is it to solve arbitrary problems without any special structure, other than the ability to check a solution efficiently?"

Isn't this true of most already proven theorems in complexity theory too? Among other things:

  • We can prove the separation of P vs EXPTIME
  • We cannot prove the separation of P vs PSPACE (PSPACE! Never mind NP)
  • We can prove that PSPACE = APTIME = IP = QIP

If we want to make progress on arbitrary problems without any special structure, then we need insights into problem-solving in general, not just the myriad of special cases which we usually think about.

While I do not think this applies to P vs NP or the Collatz Conjecture themselves, I think it does apply to my meta-question, which is something like "Why has no one actually proven that those 2 problems are Super Hard? Or, short of a proof, given strong heuristic arguments for why they should be Super Hard?"

It sounds reasonable that to answer such a question, even if those problems do have some special structure, you need insights into problem-solving in general.

Comment by Gabriel Alfour (gabriel-alfour-1) on Why are some problems Super Hard? · 2022-08-25T07:11:20.560Z · LW · GW

I agree this establishes that the Collatz and P vs NP Conjectures have a longer chain length than anything that has been tried yet. But this looks to me like a restatement of "they are not solved yet".

Namely, this does not establish any cause for them being much harder than other problems that merely haven't been solved yet. I do not see how your model predicts that the Collatz and P vs NP Conjectures are much harder than other theorems in their fields that have been proved in the last 15 years, which is what I believe experts have expected.

Put differently, the way I understand it, your model explains things post hoc (i.e., "if some problems are not solved, then they must have had long chains"), but does not make predictions (how long do you expect their chains to be? how does this translate into difficulty?).