**jeremysalwen** on Reduced impact AI: no back channels · 2013-11-15T00:06:07.757Z · score: 0 (0 votes) · LW · GW

To me the part that stands out the most is the computation of P() by the AI.

> This module comes in two versions: the module P, which is an idealised version, which has almost unlimited storage space and time with which to answer the question

From this description, P seems essentially omniscient: it knows the location and velocity of every particle in the universe, and it has unlimited computational power. Regardless of whether possessing and computing with such information is possible, the AI will model P as being literally omniscient. I see no reason that P could not hypothetically reverse the laws of physics, and thus it would always return 1 or 0 for any statement about reality.

Of course, you could add noise to the inputs to P, put a strict limit on P's computational power, or model it as a hypothetical set of sensors which is very fine-grained but not omniscient. But this seems like another set of free variables in the model, in addition to lambda, any of which could completely undo the entire setup if set wrong, and there's no natural choice for any of them.

**jeremysalwen** on Probability, knowledge, and meta-probability · 2013-09-14T22:15:21.846Z · score: 4 (4 votes) · LW · GW

I guess my position is thus:

While there are sets of probabilities which by themselves are not adequate to capture the information about a decision, there always is a set of probabilities which *is* adequate to capture the information about a decision.

In that sense I do not see your article as an argument against using probabilities to represent decision information, but rather a reminder to use the correct set of probabilities.

**jeremysalwen** on Probability, knowledge, and meta-probability · 2013-09-14T22:09:15.505Z · score: 2 (2 votes) · LW · GW

> I don't think it's correct to equate probability with expected utility, as you seem to do here. The probability of a payout is the same in the two situations. The point of this example is that the probability of a particular event does not determine the optimal strategy. Because utility is dependent on your strategy, that also differs.

Hmmm. I was equating them as part of the standard technique of calculating the probability of outcomes from your actions, and then from there multiplying by the utilities of the outcomes and summing to find the expected utility of a given action.

I think it's just a question of what you think the error is in the original calculation. I find the error to be the conflation of "payout" (as in the immediate reward from inserting the coin) with "payout" (as in the expected reward from your action, including both short-term and long-term rewards). It seems to me that you are saying that you can't look at the immediate probability of payout

> The point of this example is that the probability of a particular event does not determine the optimal strategy. Because utility is dependent on your strategy, that also differs.

which I agree with. But you seem to ignore the obvious solution of considering the probability of *total* payout, including considerations about your strategy. In that case, you really do have a single probability representing the likelihood of a single outcome, and you do get the correct answer. So I don't see where the issue with using a single probability comes from. It seems to me an issue with using the wrong single probability.

And especially troubling is that you seem to agree that using direct probabilities to calculate the single probability of each outcome and then weighing them by desirability will give you the correct answer, but then you say

> probability by itself is not a fully adequate account of rationality.

which may be true, but I don't think is demonstrated at all by this example.

Thank you for further explaining your thinking.

**jeremysalwen** on Probability, knowledge, and meta-probability · 2013-09-14T21:46:14.714Z · score: 3 (5 votes) · LW · GW

> The subtlety is about what numerical data can formally represent your full state of knowledge. The claim is that a mere probability of getting the $2 payout does not.

However, a single probability for each outcome given each strategy *is* all the information needed. The problem is not with using single probabilities to represent knowledge about the world, it's the straw math that was used to represent the technique. To me, this reasoning is equivalent to the following:

"You work at a store where management is highly disorganized. Although they precisely track the number of days you have worked since the last payday, they never remember when they last paid you, and thus every day of the work week has a 1/5 chance of being a payday. For simplicity's sake, let's assume you earn $100 a day.

You wake up on Monday and do the following calculation: If you go in to work, you have a 1/5 chance of being paid. Thus the expected payoff of working today is $20, which is too low for it to be worth it. So you skip work. On Tuesday, you make the same calculation, and decide that it's not worth it to work again, and so you continue forever.

I visit you and immediately point out that you're being irrational. After all, a salary of $100 a day clearly is worth it to you, yet you are not working. I look at your calculations, and immediately find the problem: You're using a single probability to represent your expected payoff from working! I tell you that using a meta-probability distribution fixes this problem, and so you excitedly scrap your previous calculations and set about using a meta-probability distribution instead. We decide that a Gaussian sharply peaked at 0.2 best represents our meta-probability distribution, and I send you on your way."

Of course, in this case, the meta-probability distribution doesn't change anything. You still continue skipping work, because I have devised the hypothetical situation to illustrate my point (*evil laugh*). The point is that in this problem the meta-probability distribution solves nothing, because the problem is not with a lack of meta-probability, but rather a lack of considering future consequences.
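The point can be made concrete with a quick simulation (a sketch, assuming the natural reading of the hypothetical: a payday pays $100 for every day worked since the last one):

```python
import random

def average_daily_income(days, p_payday=0.2, wage=100, seed=0):
    """Work every day; each day is a payday with probability 1/5, and a
    payday pays the full accrued wage for days worked since the last one."""
    rng = random.Random(seed)
    accrued = 0  # unpaid days worked since the last payday
    income = 0
    for _ in range(days):
        accrued += 1
        if rng.random() < p_payday:
            income += wage * accrued
            accrued = 0
    return income / days

# The naive per-day calculation says working is worth $20/day, but every
# day worked is eventually paid in full:
print(average_daily_income(100_000))  # ≈ 100, not 20
```

Since each day worked eventually gets paid with probability approaching 1, a day's work is worth (nearly) the full $100; the 1/5 figure only tells you *when* the money arrives, not whether it arrives.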

In both the OPs example and mine, the problem is that the math was done incorrectly, not that you need meta-probabilities. As you said, meta-probabilities are a method of screening off additional labels on your probability distributions *for a particular class of problems* where you are taking repeated samples that are entangled in a very particular sort of way. As I said above, I appreciate the exposition of meta-probabilities as a tool, and your comment as well has helped me better understand their instrumental nature, but I take issue with what sort of tool they are presented as.

If you do the calculations directly with the probabilities, your calculation will succeed if you do the math right, and fail if you do the math wrong. Meta-probabilities are a particular way of representing a certain calculation, and they succeed or fail in their own right. If you use them to represent the correct direct probabilities, you will get the right answer, but they are only an aid in the calculation; they *never* fix any problem with direct probability calculations. Fixing the calculation and using probabilities are orthogonal issues.

To make a blunt analogy, this is like someone trying to plug an Ethernet cable into a phone jack, and then saying "when Ethernet fails, wifi works", conveniently plugging in the wifi adapter correctly.

The key of the dispute in my eyes is not whether wifi can work for certain situations, but whether there's anything actually wrong with Ethernet in the first place.

**jeremysalwen** on Probability, knowledge, and meta-probability · 2013-09-14T20:06:08.472Z · score: 21 (21 votes) · LW · GW

The exposition of meta-probability is well done, and shows an interesting way of examining and evaluating scenarios. However, I would take issue with the first section of this article in which you establish single probability (expected utility) calculations as insufficient for the problem, and present meta-probability as the solution.

In particular, you say

> What’s interesting is that, when you have to decide whether or not to gamble your first coin, the probability is exactly the same in the two cases (p=0.45 of a $2 payout). However, the rational course of action is different. What’s up with that?
>
> Here, a single probability value fails to capture everything you know about an uncertain event. And, it’s a case in which that failure matters.

I do not believe that this is a failure of applying a single probability to the situation, but merely calculating the probability wrongly, by ignoring future effects of your choice. I think this is most clearly illustrated by scaling the problem down to the case where you are handed a green box, and only two coins. In this simplified problem, we can clearly examine all possible strategies.

- Strategy 1 would be to hold on to your two $1 coins. There is a 100% chance of a $2.00 payout.
- Strategy 2 would be to insert both of your coins into the box. There is a 50.5% chance of a $0.00 payout, 40.5% chance of a $4.00 payout and a 9% chance of a $2.00 payout.
- Strategy 3 would be to insert one coin, and then insert the second only if the first pays out. There is a 55% chance of $1.00 payout, a 4.5% chance of a $2.00 payout, and a 40.5% chance of a $4.00 payout.
- Strategy 4 would be to insert one coin, and then insert the second only if the first doesn't pay out. There is a 50.5% chance of a $0.00 payout, a 4.5% chance of a $2.00 payout, and a 45% chance of a $3.00 payout.
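These distributions can be double-checked by enumerating outcomes (a sketch assuming the setup from the original post: half the green boxes pay $2 per inserted coin with probability 0.9, the other half never pay, and unspent $1 coins count toward the total):

```python
from collections import defaultdict
from itertools import product

P_GOOD = 0.5  # half the green boxes pay out; the rest never do
P_PAY = 0.9   # a working box pays $2 per inserted coin 90% of the time

def coin_prob(good, pays):
    """Probability that a single inserted coin pays (or fails to pay)."""
    if good:
        return P_PAY if pays else 1 - P_PAY
    return 0.0 if pays else 1.0

def distribution(payout):
    """Enumerate box type and per-coin outcomes: {total dollars: probability}."""
    dist = defaultdict(float)
    for good, pay1, pay2 in product([True, False], repeat=3):
        p = ((P_GOOD if good else 1 - P_GOOD)
             * coin_prob(good, pay1) * coin_prob(good, pay2))
        dist[payout(pay1, pay2)] += p
    return dict(dist)

# Total payout (winnings plus kept coins) for each strategy above.
strategies = {
    "1: hold both coins":        lambda p1, p2: 2.0,
    "2: insert both":            lambda p1, p2: 2.0 * p1 + 2.0 * p2,
    "3: second iff first pays":  lambda p1, p2: 2.0 + 2.0 * p2 if p1 else 1.0,
    "4: second iff first fails": lambda p1, p2: 3.0 if p1 else 2.0 * p2,
}

expected = {name: sum(x * p for x, p in distribution(s).items())
            for name, s in strategies.items()}
for name, ev in expected.items():
    print(f"{name}: ${ev:.2f}")  # EVs: $2.00, $1.80, $2.26, $1.44
```

Strategy 3 comes out on top at $2.26, matching the list above, and the whole thing is an ordinary expected value calculation over total payouts.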

When put in these terms, it seems quite obvious that your choice to insert the first coin would depend on more than the expected payoff from that coin alone, because quite clearly the first coin pays off (or doesn't pay off) through your decisions about the later coins as well. This seems like an error in calculating the payoff matrix rather than a flaw with the technique of single probability values itself. It ignores the fact that inserting the first coin not only pays you off immediately, but also pays you off in the future by giving you information about the box's later payouts.

This problem easily succumbs to standard expected value calculations if all actions are considered. The steps remain the same as always:

- Assign a utility to each dollar amount outcome
- Calculate the expected utility of all possible strategies
- Choose the strategy with the highest expected utility

In the case of two coins, we were able to trivially calculate the outcomes of all possible strategies, but in larger instances of the problem, it might be advisable to use shortcuts in the calculations. However, it remains true that the best choice will be the one you *would* have gotten if you had done the full expected value calculation.

I think the confusion arises because a lot of the time problems are presented in a way that screens them off from the rest of the world. For example, you are given a box, and it either has $10.00 or $100.00. Once you open the box, the only effect it has on you is the amount of money you got. After you get the money, the box does not matter to the rest of the world. Problems are presented this way so that it is easy to factor out the decisions and calculations you have to make from every other decision you have to make. However, decisions are not necessarily this way (in fact, in real life, very few decisions are). In the choice of inserting the first coin or not, this is simply not the case, despite having superficial similarities to standard "box" problems.

Although you clearly understand that the payoffs from the boxes are entangled, you only apply this knowledge in your informal approach to the problem. The failure to consider the full effects of your actions in opening the first box may be psychologically encouraged by the technique of "single probability calculations", but it is certainly not a failure of the technique itself to capture such situations.

**jeremysalwen** on Doublethink (Choosing to be Biased) · 2013-01-12T07:39:44.869Z · score: 2 (2 votes) · LW · GW

It's also irrelevant to the point I was making. You can point to different studies giving different percentages, but however you slice it a significant portion of the men she interacts with would have sex with her if she offered. So maybe 75% is only true for a certain demographic, but replace it with 10% for another demographic and it doesn't make a difference.

**jeremysalwen** on Causal Universes · 2012-12-21T16:46:43.933Z · score: -1 (1 votes) · LW · GW

I was reading a lesswrong post and found this paragraph, which lines up with what I was trying to say:

> Some boxes you really can't think outside. If our universe really is Turing computable, we will never be able to concretely envision anything that isn't Turing-computable—no matter how many levels of halting oracle hierarchy our mathematicians can talk about, we won't be able to predict what a halting oracle would actually say, in such fashion as to experimentally discriminate it from merely computable reasoning.

**jeremysalwen** on 2012 Less Wrong Census/Survey · 2012-12-07T23:06:08.308Z · score: 1 (1 votes) · LW · GW

Analysis of the survey results seems to indicate that I was correct: http://lesswrong.com/lw/fp5/2012_survey_results/

**jeremysalwen** on Causal Universes · 2012-12-04T07:24:04.261Z · score: 1 (1 votes) · LW · GW

Yes, I agree. I can imagine some reasoner conceiving of things that are trans-Turing-complete, but I don't see how I could make an AI do so.

**jeremysalwen** on Causal Universes · 2012-12-04T02:28:36.163Z · score: 1 (1 votes) · LW · GW

As mentioned below, you'd need to make infinitely many queries to the Turing oracle. But even if you could, that wouldn't make a difference.

Again, even if there were a module to do infinitely many computations, the code I wrote still couldn't tell the difference between that being the case and this module being a really good computable approximation of one. Again, it all comes back to the fact that I am programming my AI on a Turing-complete computer. Unless I somehow (personally) develop the skills to program trans-Turing-complete computers, whatever I program is only able to comprehend something that is Turing-complete. I am sitting down to write the AI *right now*, and so regardless of what I discover in the future, I can't program my Turing-complete AI to understand anything beyond that. I'd have to program a *trans*-Turing-complete computer *now*, if I ever hoped for it to understand anything beyond Turing completeness in the future.

**jeremysalwen** on Causal Universes · 2012-11-29T06:44:06.924Z · score: 1 (1 votes) · LW · GW

I don't see how this changes the possible sense-data our AI could expect. Again, what's the difference between infinitely many computations being performed in finite time and only the computations numbered up to a point too large for the AI to query being calculated?

If you can give me an example of a universe for which the closest Turing machine model will not give indistinguishable sense-data to the AI, then perhaps this conversation can progress.

**jeremysalwen** on Causal Universes · 2012-11-28T21:39:26.873Z · score: 1 (1 votes) · LW · GW

> Even if the world weren't computable, any non-computable model would be useless to our AI, and the best it could do is a computable approximation.

Again, what distinguishes a "Turing oracle" from a finite oracle with a bound well above the realizable size of a computer in the universe? They are indistinguishable hypotheses. Giving a Turing-complete AI a Turing oracle doesn't make it capable of understanding anything more than Turing-complete models. The Turing-transcendent part must be an integral part of the AI for it to have non-Turing-complete hypotheses about the universe, and I have no idea what a Turing-transcendent language looks like, and even less of an idea of how to program in it.

**jeremysalwen** on Causal Universes · 2012-11-28T08:41:00.825Z · score: 2 (2 votes) · LW · GW

Well, I suppose starting with the assumption that my superintelligent AI is merely Turing-complete, I think that we can only say our AI has "hypotheses about the world" if it has a computable model of the world. Even if the world weren't computable, any non-computable model would be useless to our AI, and the best it could do is a computable approximation. Stable time loops seem computable through enumeration, as you show in the post.

Now, if you claim that my assumption that the AI is computable is flawed, well then I give up. I truly have no idea how to program an AI more powerful than Turing-complete.

**jeremysalwen** on Money: The Unit of Caring · 2012-11-27T15:54:35.387Z · score: 2 (2 votes) · LW · GW

> If you don't spend two months salary on a diamond ring, it doesn't mean you don't love your Significant Other. ("De Beers: It's Just A Rock.") But conversely, if you're always reluctant to spend any money on your SO, and yet seem to have no emotional problems with spending $1000 on a flat-screen TV, then yes, this does say something about your relative values.

I disagree, or at least the way it's phrased is misleading. The obvious completion of the pattern is that you care more about a flat-screen TV than your SO. But that's not a valid comparison. What it really says is that you care more about the flat-screen TV than anything else you could purchase for your SO for $1000. But for example, if you're poorer than your SO, you could believe that it's always a better marginal investment to invest in your own happiness rather than theirs, and this says nothing about how much you value the relationship or the person. How much you "value" a person isn't on the same scale.

**jeremysalwen** on 2012 Less Wrong Census/Survey · 2012-11-08T06:26:08.658Z · score: 4 (4 votes) · LW · GW

From what I could read on the iqtest page, it seemed that they didn't do any correction for self-selection bias, but rather calculated scores as if they had a representative sample. Based on this I would guess that the internet IQ test will underestimate your score (p=0.7)

**jeremysalwen** on 2012 Less Wrong Census/Survey · 2012-11-04T16:18:15.214Z · score: 3 (5 votes) · LW · GW

Luckily it will remain possible for everyone to do so for the foreseeable future.

**jeremysalwen** on How to Deal with Depression - The Meta Layers · 2012-10-28T04:46:55.062Z · score: 1 (1 votes) · LW · GW

Thanks for this. Although I don't suffer from depression, the comments about meta-suffering really resonate with me. I think (this is unverified as of yet) that my life can be improved by getting rid of meta-suffering.

**jeremysalwen** on Circular Altruism · 2012-10-12T01:29:30.206Z · score: 2 (2 votes) · LW · GW

I certainly wouldn't pay that cent if there was an option of preventing 50 years of torture using that cent. There's nothing to say that my utility function can't take values in the surreals.

**jeremysalwen** on New study on choice blindness in moral positions · 2012-09-21T01:44:11.893Z · score: 4 (4 votes) · LW · GW

I'll make sure to keep you away from my body if I ever enter a coma...

**jeremysalwen** on Less Wrong Polls in Comments · 2012-09-20T21:06:51.604Z · score: 0 (0 votes) · LW · GW

So what did you guess then?

**jeremysalwen** on Less Wrong Polls in Comments · 2012-09-20T20:53:58.356Z · score: 1 (1 votes) · LW · GW

Or maybe that's what I want you to think I'd say...

**jeremysalwen** on Less Wrong Polls in Comments · 2012-09-20T14:56:07.025Z · score: 4 (4 votes) · LW · GW

Hey everyone, I just voted, and so I can see the correct answer. The average is 19.2, so you should choose 17%!

**jeremysalwen** on Doublethink (Choosing to be Biased) · 2012-08-20T22:45:03.394Z · score: 7 (11 votes) · LW · GW

Perhaps I am just contrarian in nature, but I took issue with several parts of her reasoning.

"What you're saying is tantamount to saying that you want to fuck me. So why shouldn't I react with revulsion precisely as though you'd said the latter?"

The real question is why should she react with revulsion if he said he wanted to fuck her? The revulsion is a response to the tone of the message, not to the implications one can draw from it. After all, she can conclude with >75% certainty that any male wants to fuck her. Why doesn't she show revulsion simply upon discovering that someone is male? Or even upon finding out that the world population is larger than previously thought, because that implies that there are more men who want to fuck her? Clearly she is smart enough to have resolved this paradox on her own, and posing it to him in this situation is simply being verbally aggressive.

"For my face is merely a reflection of my intellect. I can no more leave fingernails unchewed when I contemplate the nature of rationality than grin convincingly when miserable."

She seems to be claiming that her confrontational behavior and unsocial values are inseparable from rationality. Perhaps this is only so clearly false to me because I frequent lesswrong.

"If it was electromagnetism, then even the slightest instability would cause the middle sections to fly out and plummet to the ground... By the end of class, it wasn't only sapphire donut-holes that had broken loose in my mind and fallen into a new equilibrium. I never was bat-mitzvahed."

This seems to show an incredible lack of creativity (or dare I say it, intelligence), that she would be unable to come up with a plausible way in which an engineer (never mind a supernatural deity) could fix a piece of rock to appear to be floating in the hole in a secure way. It's also incredible that she would not catch onto the whole paradox of omnipotence long before this, a paradox with a lot more substance.

"he eventual outcome would most likely be a compromise, dependent, for instance, on whether the computations needed to conceal one's rationality are inherently harder than those needed to detect such concealment."

Whoah, whoah, since when did cheating and catching it become a race of *computation*? Maybe an arms race of finding and concealing evidence, but when does computational complexity enter the picture? Second of all, the whole section about the Darwinian arms race makes the (extremely common) mistake of conflating evolutionary "goals" and individual desires. There is a difference between an action being evolutionarily advantageous, and an individual wanting to do it. Never mind the whole confusion about the nature of an individual human's goals (see http://lesswrong.com/lw/6ha/the_blueminimizing_robot/).

One side point is that her framing ("Emotions are the mechanisms by which reason, when it pays to do so, cripples itself") essentially presents the situation as Newcomb's Paradox, and claims that emotions are the solution, since her idea of "rationality" can't solve it on its own.

"By contrast, Type-1 thinking is concerned with the truth about which beliefs are most advantageous to hold."

But wait... the example given is not about which beliefs are most advantageous to hold... it's about which beliefs it's most advantageous to *act* like you hold. In fact, if you examine all of the further Type-X levels, you realize that they all collapse down to the same level. Suppose there is a button in front of you that you can press (or not press). How could it be beneficial to *believe* that you should push the button, but not beneficial to push the button? Barring of course, supercomputer Omegas which can read your mind. You're not a computer. You can't get a core dump of your mind which will show a clearly structured hierarchy of thoughts. There's no distinction to the outside world between your different levels of recursive thoughts.

I suppose this bothered me a lot more before I realized this was a piece of fiction and that the writer was a paranoid schizophrenic (the former applying to most else of what I am saying).

"Ah, yet is not dancing merely a vertical expression of a horizontal desire?"

No, certainly not merely. Too bad Elliot lacked the opportunity (and probably the quickness of tongue) to respond.

"But perplexities abound: can I reason that the number of humans who will live after me is probably not much greater than the number who have lived before, and that therefore, taking population growth into account, humanity faces imminent extinction?..."

Because I am overly negative in this post, I thought I'd point out the above section, which I found especially interesting.

But the whole "Flowers for Algernon" ending seemed a bit extreme...and out of place.

**jeremysalwen** on Rationality Quotes May 2012 · 2012-05-02T18:35:02.374Z · score: 7 (13 votes) · LW · GW

No, you can only get an answer up to the limit imposed by the fact that the coastline is actually composed of atoms. The fact that a coastline *looks* like a fractal is misleading. It makes us forget that just like everything else it's fundamentally discrete.

This has always bugged me as a case of especially sloppy extrapolation.

**jeremysalwen** on Decision Theories: A Semi-Formal Analysis, Part III · 2012-04-15T19:49:55.451Z · score: 1 (1 votes) · LW · GW

You're right, if the opponent is a TDT agent. I was assuming that the opponent was simply a prediction=>mixed strategy mapper. (In fact, I always thought that the strategy 51% one-box 49% two box would game the system, assuming that Omega just predicts the outcome which is most likely).

If the opponent is a TDT agent, then it becomes more complex, as in the OP. Just as above, you have to take the argmax over all possible y->x *mappings*, instead of simply taking the argmax over all outputs.

Putting it in that perspective, essentially in this case we are adding all possible mixed strategies to the space of possible outputs. Hmmm... That's a somewhat better way of putting it than everything else I said.

In any case, two TDT agents will both note that the program which only cooperates 100% iff the opponent cooperates 100% dominates all other mixed strategies against such an opponent.

So to answer the original question: *Yes*, it will defect against blind mixed strategies. *No*, it will not necessarily defect against simple (prediction => mixed strategy) mappers. *N/A* against another TDT agent: neither will ever play a mixed strategy, so to ask whether it would cooperate with a mixed-strategy TDT agent is counterfactual.

EDIT: Thinking some more, I realize that TDT agents will consider the sort of 99% rigging against each other — and will find that it is better than the cooperate IFF strategy. However, this is where the "sanity check" become important. The TDT agent will realize that although such a pure agent would do better against a TDT opponent, the opponent knows that you are a TDT agent as well, and thus will not fall for the trap.

Out of this I've reached two conclusions:

- The sanity check outlined above is *not* broad enough, as it only sanity-checks the *best* agents, whereas even if the best possible agent fails the sanity check, there could still be an *improvement* over the Nash equilibrium which passes.
- Eliezer's previous claim that a TDT agent will never regret being a TDT agent given full information is *wrong* (hey, I thought it was right too). Either it gives in to a pure 99% rigger or it does not. If it does, then it regrets not being able to 99%-rig another TDT agent. If it does not, then it regrets not being a simple hard-coded cooperator against a 99% rigger. This probably could be formalized a bit more, but I'm wondering if Eliezer et al. have considered this?

EDIT2: I realize I was a bit confused before. Feeling a bit stupid. Eliezer never claimed that a TDT agent won't regret being a TDT agent (which is obviously possible, just consider a clique-bot opponent), but that a TDT agent will never regret being given information.

**jeremysalwen** on Decision Theories: A Semi-Formal Analysis, Part III · 2012-04-15T19:10:59.640Z · score: 1 (1 votes) · LW · GW

Well, it certainly will defect against any mixed strategy that is hard coded into the opponent’s source code. On the other hand, if the mixed strategy the opponent plays is dependent on what it predicts the TDT agent will play, then the TDT agent will figure out which outcome has a higher expected utility:

(I defect, Opponent runs "defection predicted" mixed strategy)

(I cooperate, Opponent runs "cooperation detected" mixed strategy)

Of course, this is still simplifying things a bit, since it assumes that the opponent can perfectly predict one's strategy, and it also rules out the possibility of the TDT agent using a mixed strategy himself.

Thus the actual computation is more like

`ArgMax(Sum(ExpectedUtility(S,T)*P(T|S)))`

where the argmax is over S: all possible mixed strategies for the TDT agent

the sum is over T: all possible mixed strategies for the opponent

and P(T|S) is the probability that opponent will play T, given that we choose to play S. (so this is essentially an estimate of the opponent's predictive power.)
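As a toy illustration of this argmax (all numbers hypothetical: one-shot Prisoner's Dilemma payoffs, and an opponent that perfectly predicts my mixed strategy and mirrors it, so P(T|S) puts all its mass on T = S):

```python
# ArgMax over my mixed strategies S of Sum over T of ExpectedUtility(S,T)*P(T|S),
# with P(T|S) degenerate: the opponent perfectly predicts S and plays T = S.
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}  # my payoffs

def expected_utility(p_mine, p_theirs):
    """My expected payoff when I cooperate w.p. p_mine and they w.p. p_theirs."""
    return sum(PAYOFF[(a, b)]
               * (p_mine if a == "C" else 1 - p_mine)
               * (p_theirs if b == "C" else 1 - p_theirs)
               for (a, b) in PAYOFF)

def respond(p_mine):
    """The opponent's (assumed) prediction rule: mirror my cooperation probability."""
    return p_mine

# Discretize the space of mixed strategies and take the argmax.
grid = [i / 100 for i in range(101)]
best = max(grid, key=lambda p: expected_utility(p, respond(p)))
print(best)  # -> 1.0: pure cooperation beats every mixed strategy against a mirror
```

Against this mirroring opponent, every mixed strategy loses to pure cooperation, which matches the observation above that cooperate-iff-the-opponent-cooperates dominates mixed play between mutual predictors.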

**jeremysalwen** on The So-Called Heisenberg Uncertainty Principle · 2012-04-14T19:23:14.006Z · score: 1 (1 votes) · LW · GW

Okay, I completely understand that the Heisenberg Uncertainty principle is simply the manifestation of the fact that observations are fundamentally interactions.

However, I never thought of the *uncertainty principle* as the part of quantum mechanics that causes some interpretations to treat observers as special. I was always under the impression that it was quantum entanglement... I'm trying to imagine how a purely wave-function based interpretation of quantum entanglement would behave... what is the "interaction" that localizes the spin wavefunction, and why does it seem to act across distances faster than light? Please, someone help me out here.

**jeremysalwen** on SotW: Check Consequentialism · 2012-04-05T20:49:11.590Z · score: 2 (2 votes) · LW · GW

Er, this is assuming that the information revealed is not intentionally misleading, correct? Because certainly you could give a TDT agent an extra option which would be rational to take on the basis of the information available to the agent, but which would still be rigged to be worse than all other options.

Or in other words, the TDT agent can never be aware of such a situation.

**jeremysalwen** on Rationality Quotes April 2012 · 2012-04-05T06:18:04.350Z · score: 2 (2 votes) · LW · GW

**jeremysalwen** on I've had it with those dark rumours about our culture rigorously suppressing opinions · 2012-04-04T07:24:05.228Z · score: 7 (9 votes) · LW · GW

Isn't this an invalid comparison? If The Nation were writing for an audience of readers who *only* read The Nation, wouldn't it change what it prints? The point is that these publications are fundamentally part of a discussion.

Imagine if I thought there were fewer insects on earth than you did, and we had a discussion. If you compare the naive person who reads only my lines vs. the naive person who reads only your lines, your person ends up better off, because on the whole, there are indeed a very large number of insects on earth. This will be the case regardless of who actually has the more accurate estimate. The point is that my lines will all present evidence that insects are less numerous, in an attempt to get you to adjust your estimate downward, and your lines will be the exact opposite. However, that says nothing about who has a better model of the situation.

**jeremysalwen** on The Singularity Institute's Arrogance Problem · 2012-04-02T03:20:52.569Z · score: 6 (6 votes) · LW · GW

Here: http://lesswrong.com/lw/ua/the_level_above_mine/

I was going to go through quote by quote, but I realized I would be quoting the entire thing.

Basically:

A) You imply that you have enough brainpower to consider yourself to be approaching Jaynes's level (approaching alluded to in several instances).
B) You were surprised to discover you were not the smartest person Marcello knew (or if you consider "surprised" too strong a word, compare your reaction to that of the merely very smart people I know, who would certainly not respond with "Darn").
C) Upon hearing someone was smarter than you, the first thing you thought of was how to demonstrate that you were smarter than them.
D) You say that not being a genius like Jaynes and Conway is a "possibility" you must "confess" to.
E) You frame in equally probable terms the possibility that the only thing separating you from genius is that you didn't study quite enough math as a kid.

So basically, yes, you don't explicitly say "I am a mathematical genius", but you certainly position yourself as hanging out on the fringes of this "genius" concept. Maybe I'll say "Schrödinger's Genius".

Please ignore that this is my first post and it seems hostile. I am a moderate-time lurker and this is the first time that I felt I had relevant information that was not already mentioned.