Comment by petermccluskey on Why didn't Agoric Computing become popular? · 2019-02-17T17:40:26.430Z · score: 1 (1 votes) · LW · GW

Those concerns would have slowed adoption of agoric computing, but they seem to apply to markets in general, so they don't seem useful in explaining why agoric computing is less popular than markets in other goods/services.

Comment by petermccluskey on Why didn't Agoric Computing become popular? · 2019-02-17T17:32:30.099Z · score: 1 (1 votes) · LW · GW

The central planner may know exactly what resources exist on the system they own, but they don't know all the algorithms and data that are available somewhere on the internet. Agoric computing would open up more options for getting programmers and database creators to work for you.

Comment by petermccluskey on Why didn't Agoric Computing become popular? · 2019-02-16T17:50:50.268Z · score: 9 (5 votes) · LW · GW

One obstacle has been security. To develop any software that exchanges services for money, you need to put substantially more thought into the security risks of that software, and you probably can't trust a large fraction of the existing base of standard software. Coauthor Mark S. Miller has devoted lots of effort to replacing existing operating systems and programming languages with secure alternatives, with very limited success.

One other explanation that I've wondered about involves conflicts of interest. Market interactions are valuable mainly when they generate cooperation among agents who have divergent goals. Most software development happens in environments where there's enough cooperation that adding market forces wouldn't provide much value via improved cooperation. I think that's true even within large companies. I'll guess that the benefits of the agoric approach only become interesting when a large number of companies switch to using it, and there's little reward to being the first such company.

Comment by petermccluskey on Individual profit-sharing? · 2019-02-14T03:49:59.760Z · score: 7 (3 votes) · LW · GW

Universities have tried something like this for tuition. See these mentions from Alex Tabarrok. They have some trouble with people defaulting.

Comment by petermccluskey on Drexler on AI Risk · 2019-02-01T23:09:20.560Z · score: 5 (3 votes) · LW · GW

Thanks, I've fixed those.

Drexler on AI Risk

2019-02-01T05:11:01.008Z · score: 31 (15 votes)
Comment by petermccluskey on Reframing Superintelligence: Comprehensive AI Services as General Intelligence · 2019-01-28T22:32:35.165Z · score: 3 (2 votes) · LW · GW

After further rereading, I now think that what Drexler imagines is a bit more complex: (section 27.7) "senior human decision makers" would have access to a service with some strategic planning ability (which would have enough power to generate plans with dangerously broad goals), and they would likely restrict access to those high-level services.

I suspect Drexler is deliberately vague about the extent to which the strategic planning services will contain safeguards.

This, of course, depends on the controversial assumption that relatively responsible organizations will develop CAIS well before other entities are able to develop any form of equally powerful AI. I consider that plausible, but it seems to be one of the weakest parts of his analysis.

And presumably the publicly available AI services won't be sufficiently general and powerful to enable random people to assemble them into an agent AGI? Combining a robocar + Google translate + an aircraft designer + a theorem prover doesn't sound dangerous. But I'd prefer to have something more convincing than just "I spent a few minutes looking for risks, and didn't find any".

Comment by petermccluskey on Reframing Superintelligence: Comprehensive AI Services as General Intelligence · 2019-01-25T16:49:49.591Z · score: 3 (2 votes) · LW · GW

I was assuming that long term strategic planners (as described in section 27) are available as an AIS, and would be one of the components of the hypothetical AGI.

That's not consistent with my understanding of section 27. My understanding is that Drexler would describe that as too dangerous.

suppose you asked the plan maker to create a plan to cure cancer.

I suspect that a problem here is that "plan maker" is ambiguous as to whether it falls within Drexler's notion of something with a bounded goal.

CAIS isn't just a way to structure software. It also requires some not-yet-common sense about what goals to give the software.

"Cure cancer" seems too broad to qualify as a goal that Drexler would consider safe to give to software. Sections 27 and 28 suggest that Drexler wants humans to break that down into narrower subtasks. E.g. he says:

By contrast, it is difficult to envision a development path in which AI developers would treat all aspects of biomedical research (or even cancer research) as a single task to be learned and implemented by a generic system.

Comment by petermccluskey on Does freeze-dried mussel powder have good stuff that vegan diets don't? · 2019-01-21T01:11:37.213Z · score: 17 (4 votes) · LW · GW

My guess, based on crude extrapolations from reported nutrients of other dried foods, is that you'll get half the nutrients of fresh mussels.

That ought to be a clear improvement on a vegan diet.

I suspect your main remaining reason for concern might be creatine.

My guesses about why vitamin pills tend to be ineffective (none of which apply to dried mussels):
* pills lack some important nutrients - ones which have not yet been recognized as important
* pills provide unnatural ratios of nutrients
* pills often provide some vitamins in a synthetic form, which not everyone converts to the biologically active form

Bundle your Experiments

2019-01-18T23:22:08.660Z · score: 18 (7 votes)

Time Biases

2019-01-12T21:35:54.276Z · score: 31 (8 votes)
Comment by petermccluskey on Reframing Superintelligence: Comprehensive AI Services as General Intelligence · 2019-01-08T22:12:50.603Z · score: 3 (2 votes) · LW · GW

I'm not talking about the range. Domain seems possibly right, but not as informative as I'd like. I'm talking about what parts of spacetime it cares about, and saying that it only cares about specific outputs of a specific process. Drexler refers to this as "bounded scope and duration". Note that this will normally be an implicit utility function that we infer from our understanding of the system.

"bounded utility function" is definitely not an ideal way of referring to this.

Comment by petermccluskey on Reframing Superintelligence: Comprehensive AI Services as General Intelligence · 2019-01-08T20:26:46.727Z · score: 12 (5 votes) · LW · GW

I want to draw separate attention to chapter 40 of Drexler's paper, which uses what looks like a novel approach to argue that current supercomputers likely have more raw processing power than a human brain. I find that scary.
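
For rough scale, a standard back-of-envelope comparison (explicitly not Drexler's calculation; his chapter takes a different route, which is what makes it novel) runs:

$$\underbrace{\sim 10^{14}}_{\text{synapses}} \times \underbrace{\sim 10^{2}\,\text{Hz}}_{\text{peak firing rate}} \approx 10^{16}\ \text{synaptic events per second}$$

A 2018-era machine such as Summit peaks near $2 \times 10^{17}$ FLOP/s, which would put top supercomputers at or above brain scale even on this crude count.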

Comment by petermccluskey on Reframing Superintelligence: Comprehensive AI Services as General Intelligence · 2019-01-08T19:58:27.064Z · score: 1 (1 votes) · LW · GW

I consider it important to further clarify the notion of a bounded utility function.

A deployed neural network has a utility function that can be described as outputting a description of the patterns it sees in its most recent input, according to whatever algorithm it's been trained to apply. It's pretty clear to any expert that the neural network doesn't care about anything beyond a specific set of numbers that it outputs.

A neural network that is in the process of being trained is slightly harder to analyze, but essentially the same. It cares about generating an algorithm that will be used in a deployed neural network. At any one training step, it is focused solely on applying fixed algorithms to produce improvements to the deployable algorithm. It has no concept that would lead it to look beyond its immediate task of incremental improvements to that deployable algorithm.
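
To make the boundedness claim concrete, here's a minimal sketch (plain numpy, with hypothetical function names) of both regimes. The point is that inference is a pure function of (weights, input), and a training step is a pure function of (weights, batch); neither computation has any channel referring to the world beyond its arguments:

```python
import numpy as np

def deployed_forward(weights, x):
    """Inference: a pure function of (weights, input).

    It reads nothing but its arguments and emits nothing but its return
    value, so there is no channel through which it could 'care about'
    anything beyond the numbers it outputs.
    """
    w1, w2 = weights
    hidden = np.maximum(0, x @ w1)  # ReLU layer
    return hidden @ w2              # output scores

def training_step(weights, x_batch, y_batch, lr=1e-2):
    """One SGD step: a pure function from (weights, batch) to new weights.

    Its implicit goal is exhausted by reducing squared error on this
    batch; the update rule represents nothing beyond the batch and the
    deployable weights it is improving.
    """
    w1, w2 = weights
    hidden = np.maximum(0, x_batch @ w1)
    err = hidden @ w2 - y_batch                # dLoss/dpred for squared error
    grad_w2 = hidden.T @ err
    grad_hidden = (err @ w2.T) * (hidden > 0)  # backprop through the ReLU
    grad_w1 = x_batch.T @ grad_hidden
    return (w1 - lr * grad_w1, w2 - lr * grad_w2)

rng = np.random.default_rng(0)
weights = (rng.normal(size=(4, 8)), rng.normal(size=(8, 2)))
x, y = rng.normal(size=(10, 4)), rng.normal(size=(10, 2))
weights = training_step(weights, x, y)
print(deployed_forward(weights, x[:1]))
```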

And in some important sense, those steps are the main ways in which AI gets used to produce cars that have superhuman driving ability, and the designers can prove (at least to themselves) that the cars won't go out and buy more processing power, or forage for more energy.

Many forms of AI will be more complex than neural networks (e.g. they might be a mix of RL and neural networks), and I don't have the expertise to extend this analysis to those systems. I'm confident that it's possible in principle to get general-purpose superhuman AIs using only this kind of bounded utility function, but I'm uncertain how practical that is compared to a more unified agent with a broader utility function.

Comment by petermccluskey on Why I expect successful (narrow) alignment · 2019-01-02T01:57:43.577Z · score: 11 (6 votes) · LW · GW

MIRI has a lot invested in the idea that AI safety is a hard problem which must have a difficult solution. So there’s a sense in which the salaries of their employees depend on them not understanding how a simple solution to FAI might work.

That doesn't sound correct. My understanding is that they're looking for simple solutions, in the sense that quantum mechanics and general relativity are simple. What they've invested a lot in is the idea that it's hard to even ask the right questions about how AI alignment might work. They're biased against easy solutions, but they might also be biased in favor of simple solutions.

Comment by petermccluskey on In Defense of Finance · 2018-12-18T19:03:38.623Z · score: 22 (6 votes) · LW · GW

Any discussion of bailouts ought to note that some countries have far fewer banking crises than the US.

Comment by petermccluskey on Player vs. Character: A Two-Level Model of Ethics · 2018-12-16T02:58:52.768Z · score: 16 (6 votes) · LW · GW

people generally identify as their characters, not their players.

I prefer to identify with my whole brain. I suspect that reduces my internal conflicts.

Comment by petermccluskey on Should ethicists be inside or outside a profession? · 2018-12-13T23:21:30.867Z · score: 6 (4 votes) · LW · GW

This seems a bit too strong. It seems to imply that I should ignore Bostrom's writings about AI ethics, and only look to people such as Demis Hassabis.

Or if I thought that nobody was close to having the expertise to build a superintelligent AI, maybe I'd treat it as implying that it's premature to have opinions about AI ethics.

Instead, I treat professional expertise as merely one piece of evidence about a person's qualifications to inform us about ethics.

Book review: Artificial Intelligence Safety and Security

2018-12-08T03:47:17.098Z · score: 30 (9 votes)
Comment by petermccluskey on Peanut Butter · 2018-12-04T21:22:15.024Z · score: 4 (3 votes) · LW · GW

Based on focusing I have realized some feelings like tiredness are not really ‘real’. They’re just a felt sense of not wanting to keep programming.

Feelings such as tiredness involve more high-level processing than I used to think.

That doesn't cause me to classify them as less real. Instead, I conclude that most, or maybe all, feelings of tiredness include some rather high-level predictions about the costs and benefits of whatever I'm doing now.

Comment by petermccluskey on Book Review - Other Minds: The Octopus, the Sea, and the Deep Origins of Consciousness · 2018-12-04T20:39:26.065Z · score: 1 (1 votes) · LW · GW

Perhaps the ability to recognize individuals isn’t as tied to being a social animal as I had thought

I expect multiple sources of evolutionary pressure for recognizing individuals. E.g. when a human chases an animal to exhaustion, the human needs to track that specific animal even if it disappears into a herd, so as not to make the mistake of chasing an animal that isn't tired.

Comment by petermccluskey on Fat People Are Heroes · 2018-11-14T18:55:30.037Z · score: 2 (2 votes) · LW · GW

Being always hungry is a lousy way to lose weight. It means my body is always trying to conserve energy, as if this was a famine.

Part of the vicious cycle is addiction to food that doesn't make us feel full (see the Satiety Index for ideas about which foods those are). Remember that obesity is virtually unknown among hunter-gatherers, even when they have plenty of food available. It takes modern foods to make obesity common (see Stephan Guyenet).

Intermittent hunger can work somewhat well for weight loss, but mainly I need to eat food that's less addictive and that makes me feel full.

Comment by petermccluskey on Real-time hiring with prediction markets · 2018-11-13T00:39:01.390Z · score: 1 (1 votes) · LW · GW

It is often difficult to get people to bet in markets. This looks like a case where employees will do approximately no betting unless there are unusual incentives. It's hard to say whether these markets would produce enough benefit to pay for those incentives.
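
One standard way to make "paying for those incentives" precise is Hanson's logarithmic market scoring rule, where a sponsor posts a subsidy and the worst-case cost is exactly b·ln(n) for n outcomes. A minimal sketch (standard LMSR formulas; the class wrapper is my own):

```python
import math

class LMSRMarket:
    """Hanson's logarithmic market scoring rule over n outcomes.

    The sponsor's worst-case loss is b * ln(n), which is the explicit
    price of guaranteeing that employees always have a counterparty.
    """
    def __init__(self, n_outcomes, b=100.0):
        self.b = b
        self.q = [0.0] * n_outcomes  # shares outstanding per outcome

    def _cost(self):
        return self.b * math.log(sum(math.exp(qi / self.b) for qi in self.q))

    def price(self, i):
        """Current implied probability of outcome i."""
        z = sum(math.exp(qi / self.b) for qi in self.q)
        return math.exp(self.q[i] / self.b) / z

    def buy(self, i, shares):
        """Charge a trader the cost-function difference for buying shares of i."""
        before = self._cost()
        self.q[i] += shares
        return self._cost() - before

market = LMSRMarket(n_outcomes=2)  # e.g. "candidate works out" vs "doesn't"
print(market.price(0))             # 0.5 before any trades
print(market.buy(0, 50))           # a trader pays ~28.1 to push the price up
print(market.price(0))             # now ~0.62
print(100.0 * math.log(2))         # sponsor's maximum subsidy, ~69.3
```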

My intuition is that there's likely some cheaper way of improving the situation.

Comment by petermccluskey on Thoughts on short timelines · 2018-10-24T18:08:15.110Z · score: 20 (5 votes) · LW · GW

I disagree with your analysis of "are we that ignorant?".

For things like nuclear war or financial meltdown, we've got lots of relevant data, and not too much reason to expect new risks. For advanced nanotechnology, I think we are ignorant enough that a 10% chance sounds right (I'm guessing it will take something like $1 billion in focused funding).

With AGI, subtle changes in how a survey question is worded can shift ML researchers' forecasts by 75 years. That suggests unusual uncertainty.

We can see from Moore's law and from ML progress that we're on track for something at least as unusual as the industrial revolution.

The stock and bond markets do provide some evidence of predictability, but I'm unsure how good they are at evaluating events that happen much less than once per century.

Comment by petermccluskey on Where is my Flying Car? · 2018-10-23T01:15:56.585Z · score: 1 (1 votes) · LW · GW

By some strange coincidence, less than a week after posting this, I got another chance to observe the effects of being above the clouds. Technically, it was fog rather than clouds, but hiking up through and above fog has the same effect. In addition to creating the muted, less detailed far-mode colors that Josh mentions, it enhances the impression of unusual height by letting my subconscious imagine that the ground (or in this case, ocean) beneath the clouds might be a half-mile below what I can see.

I wasn't able to observe that I was more in far mode than on a typical hike. I'm pretty sure the unusual feelings were more along the lines of feeling high status, due to some correlation between being higher than my surroundings and being high status, or maybe due to the strategic advantage of having higher ground than any hostile forces. Or maybe just a feeling of accomplishment for having climbed so high.

At any rate, it feels quite good, and it seems similar to the effects from flying small aircraft.

Comment by petermccluskey on Where is my Flying Car? · 2018-10-19T19:37:44.879Z · score: 1 (1 votes) · LW · GW

The book is maybe 25% nostalgia, 30% blame. It seems pretty relevant to how we make long-term forecasts.

He's talking about SSTs (supersonic transports). I'm unsure what propulsion methods he prefers for those. He claims sonic booms could have been cut in half compared to the Concorde's. A good government would have set a fee for sonic booms that bore some resemblance to how much people disvalued loud noise. But the US government seems unwilling to say what conditions an SST would need to meet.

I'm unclear why I'd want undersea cities when there's room for seasteads.

Lunar colonies would hedge against some catastrophic risks. They might be created for reasons similar to why Europeans settled the New World.

Comment by petermccluskey on Dating book recommendations · 2018-10-18T16:57:38.986Z · score: 5 (3 votes) · LW · GW

I got some good insights from the Charisma Tips website, which now seems to be available only via the Wayback Machine.

For writing a good profile, I liked: I Can't Believe I'm Buying This Book: A Commonsense Guide to Successful Internet Dating, by Evan Marc Katz - mainly the examples on pages 36 through 39.

Authentic relating / circling workshops helped me, but they're more oriented toward what to do when you've got a date.

Comment by petermccluskey on Where is my Flying Car? · 2018-10-17T17:08:27.965Z · score: 1 (1 votes) · LW · GW

Comsats seem to be working as predicted, based on an energy-intensive technology that stagnated around 1970 and on low-energy electronics that have continued to progress well.

Asimov said in 1964:

Robots will neither be common nor very good in 2014, but they will be in existence.

Josh classifies current industrial robotics as 100% of what was predicted for today, home robots as 90%, and AI as 80%.

Arthur C. Clarke said in 1964:

It will be possible in that age, perhaps only 50 years from now, for a man to conduct his business from Tahiti or Bali just as well as he could from London ... I am perfectly serious when I suggest that one day we may have brain surgeons in Edinburgh operating on patients in New Zealand.

In 1958, Arthur C. Clarke said we should have fusion power and a global library by now, should be getting weather control right about now, and should get gravity control in 2050.

Josh classifies our global library as 150% of what was predicted.

Other predictions Josh classifies as successful: home-based videophones, robocars, translating machines.

Other predictions that didn't come close: lunar base, "Transportation 1000 mph and one cent per mile", undersea cities.

Comment by petermccluskey on Where is my Flying Car? · 2018-10-15T23:11:08.829Z · score: 7 (4 votes) · LW · GW

There are lots of hypotheses about overall changes in innovation. But Josh and I aren't talking about a general decline in innovation. I see patterns where there are big differences between industries, with innovation slowing in some and remaining good in others. I want to explain those differences.

Where is my Flying Car?

2018-10-15T18:39:38.010Z · score: 51 (15 votes)
Comment by petermccluskey on Effective Altruism Book Review: Radical Abundance (Nanotechnology) · 2018-10-15T16:46:52.293Z · score: 4 (3 votes) · LW · GW

nitrogen deficiency is one of the reasons for agricultural difficulties in Africa.

There's no shortage of nitrogen in the air. Nitrogen-fixing bacteria exist. Getting them to do the right job does not seem particularly hard compared to the rest of what Drexler discusses.

These problems are almost all geographical constraints

With sufficiently advanced technology, that claim seems obviously wrong.

Comment by petermccluskey on Thoughts on tackling blindspots · 2018-09-28T20:44:06.052Z · score: 4 (3 votes) · LW · GW

CFAR doesn't have anything resembling a textbook that would help advertise a lecture or seminar.

Some better analogies for what they have are notes that would supplement an improv class, or a yoga class, or a meditation retreat. Unlike textbooks / lectures, this category of teaching involves a fair amount of influencing system 1, in ways that are poorly captured by materials that are directed mainly at system 2. Another analogy for what they provide is group psychotherapy - in that example, something textbook-like seems somewhat valuable, but I think there are still good reasons not to expect a close connection between a specific textbook and a specific instance of group psychotherapy.

And calling CFAR's strategy a business model is a bit misleading - a good deal of their strategy involves focusing on free or very low-cost workshops for people who show AI-related promise. Word of mouth seems to bring in enough ordinary rationalists paying $4000 that CFAR can afford to give low priority to attracting more participants who will pay full price.

Comment by petermccluskey on Direct Primary Care · 2018-09-26T01:48:54.190Z · score: 2 (2 votes) · LW · GW

The ACA puts restrictions on buying minimalist insurance. I don't understand the details well enough to say how much it affects these goals.

Comment by petermccluskey on No standard metric for CFAR workshops? · 2018-09-09T16:22:59.793Z · score: 16 (6 votes) · LW · GW

IQ has a relatively large standard deviation compared to its mean

No, the mean here is an arbitrary convention, so 15 and 100 don't tell us anything relevant. The appropriate comparison is to what other interventions have accomplished.
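
A worked example of why the ratio is uninformative: IQ is an interval scale, so adding a constant to every score preserves all of the information while changing SD/mean arbitrarily:

$$\frac{\sigma}{\mu} = \frac{15}{100} = 0.15, \qquad x \mapsto x + 900:\ \frac{15}{1000} = 0.015, \qquad x \mapsto x - 100:\ \frac{15}{0}\ \text{(undefined)}$$

Effect sizes measured in SD units (e.g. Cohen's d) survive such rescalings, which is why comparison with other interventions is the meaningful one.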

Comment by petermccluskey on Book review: Pearl's Book of Why · 2018-07-09T00:19:38.840Z · score: 4 (2 votes) · LW · GW

That quote is from a section on history, with the context implying that "as frequently practiced" is likely to refer to an average over the 20th century, not a description of 2018.

Comment by petermccluskey on Book review: Pearl's Book of Why · 2018-07-08T17:13:42.322Z · score: 4 (3 votes) · LW · GW

You're right. I was trying to summarize ideas from the book The Cult of Statistical Significance, but that book now looks slightly misleading, and my summary was more misleading.

There are some important ways in which physics rejected significant parts of Fisher's ideas, but I guess I should describe them more as rejecting dogma than as rejecting p-values.

Comment by petermccluskey on Book review: Pearl's Book of Why · 2018-07-08T16:47:34.762Z · score: 6 (4 votes) · LW · GW

I meant as sophisticated as crows in terms of basic pattern recognition, and the number, diversity, and generality of the concepts they can learn. Maybe that just means throwing more CPU power at existing ML approaches. Maybe that requires better ways of integrating a more diverse set of approaches into a single system.

Maybe I don't have a clear enough meaning of "sophisticated" to be of much value here.

Book review: Pearl's Book of Why

2018-07-07T17:30:30.994Z · score: 69 (25 votes)
Comment by petermccluskey on Book Review: Why Honor Matters · 2018-06-26T02:25:16.507Z · score: 6 (3 votes) · LW · GW

See also The Institutional Revolution: Measurement and the Economic Emergence of the Modern World for ideas about why honor became less valuable in the west.

Comment by petermccluskey on Loss aversion is not what you think it is · 2018-06-21T01:27:45.291Z · score: 13 (4 votes) · LW · GW

Phrasing this in terms of utility functions is misleading. I suggest thinking in terms of a Schelling point strategy, as David Friedman describes in his account of why property rights exist. Most utility functions will generate such strategies under many conditions.

Comment by petermccluskey on The Case Against Education: Why Do Employers Tolerate It? · 2018-06-12T00:38:58.486Z · score: 6 (3 votes) · LW · GW

Your post is mostly good, but I question your claim that majoring in English discourages conformity. My impression is that English departments reward conformity to a set of norms that somewhat conflicts with mainstream norms. I can imagine this being a valuable signal for unpopular professions.

Comment by petermccluskey on Why kids stop asking why · 2018-06-03T01:11:27.762Z · score: 13 (4 votes) · LW · GW

Could it be due to diminishing returns, as we pick the low-hanging/highest value fruit earlier?

Comment by petermccluskey on Monopoly: A Manifesto and Fact Post · 2018-06-01T14:52:35.048Z · score: 9 (3 votes) · LW · GW

The Elephant in the Brain has some relevant points - for some important industries such as medicine and schooling, consumers don't prefer low prices.

Comment by PeterMcCluskey on [deleted post] 2018-05-03T00:26:48.477Z

I'm unsure how much of this post I understand.

For a clear but long explanation of why compression provides a good measure of understanding, see Dan Burfoot's book draft.
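
The core intuition can be seen with off-the-shelf zlib (a crude stand-in for Burfoot's program-length measure): a source whose regularities your model captures compresses to far fewer bits than one where it captures nothing.

```python
import os
import zlib

structured = b"the cat sat on the mat. " * 400  # 9600 bytes of regular text
random_src = os.urandom(len(structured))        # 9600 bytes with no structure

print(len(zlib.compress(structured, 9)))  # tiny: the regularity is 'understood'
print(len(zlib.compress(random_src, 9)))  # ~9600: nothing to exploit
```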

Comment by petermccluskey on Does Thinking Hard Hurt Your Brain? · 2018-05-02T18:01:01.799Z · score: 3 (1 votes) · LW · GW

It sure feels like I use more energy when I'm thinking hard (but it usually doesn't give me headaches).

I'm having trouble reconciling this with the evidence that the brain uses a constant amount of energy.

The "wasted motion" explanation sounds close, but I sometimes feel drained after a long period of writing code that seemed to involve unusually little wasted motion.

I expect that part of the explanation is my S1 telling me I'm reaching diminishing returns to spending more time on this task.

Thinking hard correlates with feelings of stress, but I suspect they're both caused by having difficult problems, rather than thinking hard causing stress.

Comment by petermccluskey on Survey: Help Us Research Coordination Problems In The Rationalist/EA Community · 2018-04-10T14:47:24.767Z · score: 19 (4 votes) · LW · GW

It was clear to me as a donor in 2013 that CFAR was primarily motivated by AI risk, but I got that impression mainly from talking to the people involved.

The 2013 date marker was only on one of the two references to CFAR when I took the survey. That was confusing.

Comment by petermccluskey on Survey: Help Us Research Coordination Problems In The Rationalist/EA Community · 2018-04-08T02:14:55.276Z · score: 15 (4 votes) · LW · GW

The survey lists CFAR under "Raising The Sanity Waterline". I donate to CFAR because it's an AI risk charity. I don't donate to charities that aim at "Raising The Sanity Waterline".

Comment by petermccluskey on "Just Suffer Until It Passes" · 2018-02-12T22:33:56.375Z · score: 10 (3 votes) · LW · GW

With colds, I expect that waiting is roughly the best strategy. See https://www.goodreads.com/book/show/81788.Why_We_Get_Sick. Our bodies are already evolved to have good defenses against pathogens, and most things we do to fight colds are ineffective, but sometimes look good because they address symptoms (those symptoms are often part of our bodies' defenses) or because of regression to the mean.

Comment by petermccluskey on Monthly Meta: Referring is Underrated · 2018-02-10T00:12:50.312Z · score: 3 (1 votes) · LW · GW

https://www.goodreads.com/user/show/72997602-peter-mccluskey

Comment by petermccluskey on Biological humans and the rising tide of AI · 2018-01-29T19:01:10.136Z · score: 18 (5 votes) · LW · GW

Has Robin ever claimed property rights will never get trampled? My impression is that he's only saying it can be avoided in the time period he's trying to analyze.

Comment by petermccluskey on Security Mindset and the Logistic Success Curve · 2017-11-27T17:59:38.018Z · score: 23 (14 votes) · LW · GW

Coral isn't trying very hard to be helpful. Why doesn't she suggest that the company offer $10,000,000 for each security hole that people can demonstrate? Oh, right, she wants to use this as an analogy for AGIs that go foom.

Comment by petermccluskey on Gears Level & Policy Level · 2017-11-24T18:00:13.134Z · score: 13 (4 votes) · LW · GW

We're not systematically bad forecasters. We're subject to widespread rewards for overconfidence.

Comment by petermccluskey on Project proposal: Rationality Cookbook · 2017-11-22T00:22:20.289Z · score: 9 (3 votes) · LW · GW

See curetogether.com - it seems like that would be what you want if they changed their categories a bit.

Comment by petermccluskey on Blind Empiricism · 2017-11-13T19:47:28.785Z · score: 11 (4 votes) · LW · GW

I think Eliezer may have been too modest(!) in describing the treatment as unfair. I think I recognize Startup Founder 1, and that looks very much like a conversation I'd expect the two of them to have.

I expect that Eliezer had more evidence than he conveys for the hypothesis that Startup Founder 1 was engaging in blind empiricism. But I have doubts about whether Eliezer was wise to reject hypotheses about why Startup Founder 1's advice might be right. Here are some mistakes that can be avoided by the "release early, release often" attitude:

  • creating overly elaborate hypotheses, rather than looking for hypotheses that can be tested cheaply.
  • being overconfident in one's ability to model users.
  • identifying with one's announced plans in a way that leads to them becoming a substitute for implementing the plans; I suspect this sometimes creates an aversion to exposing the plans to possible falsification.

I imagine that Startup Founder 1 suspected that Eliezer was making at least one of these mistakes, but couldn't articulate strong evidence for those suspicions.

Comment by petermccluskey on Inadequacy and Modesty · 2017-10-29T03:05:51.461Z · score: 15 (5 votes) · LW · GW

Scott Sumner suggests (http://econlog.econlib.org/archives/2017/04/what_were_the_c.html) that central banks worry about small risks that they'll need to be bailed out if their balance sheets get too large.

Comment by petermccluskey on AlphaGo Zero and the Foom Debate · 2017-10-23T23:39:54.098Z · score: 13 (4 votes) · LW · GW

I've become substantially more confident in the past two years about Robin's position on the differences between humans and chimps.

Henrich's book The Secret Of Our Success presents a model in which social learning is the key advantage that caused humans to diverge from other apes. This model is more effective than any other model I've seen at explaining when human intelligence evolved.

To summarize chapter 2: Three-year-old humans score slightly worse than three-year-old chimpanzees on most subsets of IQ-like tests. "Social learning" is the only category where the humans outperform chimpanzees. Adult humans outperform chimpanzees on many but not all categories of IQ-like tests.

If humans have an important improvement in general intelligence over chimpanzees, why are its effects hard to observe at age 3?