Comments

Comment by jacopo on "Diamondoid bacteria" nanobots: deadly threat or dead-end? A nanotech investigation · 2024-03-06T17:34:30.691Z · LW · GW

You could also try to fit an ML potential to some expensive method, but it's very easy to produce very wrong results if you don't know what you're doing (I wouldn't be able to, for one).

Comment by jacopo on "Diamondoid bacteria" nanobots: deadly threat or dead-end? A nanotech investigation · 2024-03-06T17:30:21.931Z · LW · GW

Ahh, for MD I mostly used DFT with VASP or CP2K, but then I was not working on the same problems. For thorny issues (biggish systems where plain DFT fails, but no MD) I had good results using hybrid functionals and tuning the parameters to match some result of higher-level methods. Did you try meta-GGAs like SCAN? Sometimes they are surprisingly decent where PBE fails catastrophically...

Comment by jacopo on "Diamondoid bacteria" nanobots: deadly threat or dead-end? A nanotech investigation · 2024-03-06T08:48:17.305Z · LW · GW

My job was doing quantum chemistry simulations for a few years, so I think I can comprehend the scale, actually. I had access to one of the top-50 supercomputers, and codes simply do not scale to that number of processors for a single simulation, independently of system size (even if they had let me launch a job that big, which was not possible).

Comment by jacopo on Research Post: Tasks That Language Models Don’t Learn · 2024-02-22T20:43:03.445Z · LW · GW

Isn't this a trivial consequence of LLMs operating on tokens as opposed to letters?

Comment by jacopo on If a little is good, is more better? · 2023-11-06T21:15:52.402Z · LW · GW

True, but this doesn't apply to the original reasoning in the post - he assumes constant probability while you need increasing probability (as with the balls) to make the math work.

Or decreasing benefits, which probably is the case in the real world.

Edit: misread the previous comment, see below

Comment by jacopo on "Diamondoid bacteria" nanobots: deadly threat or dead-end? A nanotech investigation · 2023-10-01T13:19:36.141Z · LW · GW

It seems very weird and unlikely to me that the system would go to the higher energy state 100% of the time

I think vibrational energy is neglected in the first paper; it would implicitly be accounted for in AIMD. Also, the higher-energy state could be the lower free-energy state - if the difference is big enough, it could go there nearly 100% of the time.
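For readers: the "nearly 100% of the time" claim is just two-state Boltzmann statistics. A toy sketch (the 0.2 eV free-energy difference is an invented illustrative number, not from either paper):

```python
import math

def population_ratio(delta_g_ev: float, temperature_k: float = 300.0) -> float:
    """Boltzmann ratio p_B / p_A when state B lies delta_g_ev (eV) above A in free energy."""
    k_b = 8.617333e-5  # Boltzmann constant in eV/K
    return math.exp(-delta_g_ev / (k_b * temperature_k))

# Suppose the "higher energy" state is 0.2 eV *lower* in free energy (delta_g_ev = -0.2):
ratio = population_ratio(-0.2)
occupancy = ratio / (1.0 + ratio)
print(occupancy)  # ~0.9996, i.e. "nearly 100% of the time"
```

At room temperature kT is about 0.026 eV, so even a modest free-energy gap of a few kT already makes one state dominate.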

Comment by jacopo on "Diamondoid bacteria" nanobots: deadly threat or dead-end? A nanotech investigation · 2023-10-01T13:12:08.655Z · LW · GW

Although they never take the whole supercomputer, so if you have the whole supercomputer to yourself and the calculations do not depend on each other, you can run many in parallel.

Comment by jacopo on "Diamondoid bacteria" nanobots: deadly threat or dead-end? A nanotech investigation · 2023-09-30T19:52:08.343Z · LW · GW

That's one simulation though. If you have to screen hundreds of candidate structures, and simulate every step of the process because you cannot run experiments, it becomes years of supercomputer time.

Comment by jacopo on The commenting restrictions on LessWrong seem bad · 2023-09-18T16:35:39.397Z · LW · GW

There are plenty of people on LessWrong who are overconfident in all their opinions (or maybe write as if they are, as a misguided rhetorical choice?). It is probably a selection effect of people who appreciate the Sequences - whatever you think of his accuracy record, EY definitely writes as if he's always very confident in his conclusions.

Whatever the reason, (rhetorical) overconfidence is most often seen here as a venial sin, as long as you bring decently-reasoned arguments and are willing to change your mind in response to others'. Maybe it's not your case, but I'm sure many would have been lighter with their downvotes had the topic been another one - just a few people strong-downvoting instead of simply downvoting can change the karma balance quite a bit.

Comment by jacopo on My current LK99 questions · 2023-08-02T13:44:14.954Z · LW · GW

(Phd in condensed matter simulation) I agree with everything you wrote where I know enough (for readers, I don't know anything about lead contacts and several other experimental tricky points, so my agreement should not be counted too much).

I'll just add, on the simulation side (Q3): this is what you would expect to see in a room-T superconductor unless it relies on a completely new mechanism. But it is also something you see in a lot of materials that superconduct at 20 K or so, and even in some where the superconducting phase is completely suppressed by magnetism or structural distortions or some other phase transition. In addition, DFT+U is a quick-and-dirty approach for this kind of problem, as fits the speed at which the preprint was put out. So from the simulation side: Bayesian evidence in favor, but very weak.

Comment by jacopo on GPT-2's positional embedding matrix is a helix · 2023-07-22T11:18:24.499Z · LW · GW

Is there something that would regularise the vectors towards constant norm? A helix would make a lot of sense in this case, especially one with varying radius, like in some (but not all) of the images.

Comment by jacopo on Does descaling a kettle help? Theory and practice · 2023-05-03T20:23:44.815Z · LW · GW

I don't think it would change your conclusion, but your kettle was not very scaly. Mine gets much worse than that, with the resistance entirely covered by a thick layer, despite descaling 3-4 times per year. It depends on the limescale content of your tap water. I still don't think it affects energy use (maybe?), but the difference in taste can be noticeable, and I feel tea is actually harder to digest if I put off the descaling.

Also, you can use citric acid instead of vinegar. Better for the environment, less damaging to the kettle and it doesn't smell :)

Comment by jacopo on Invocations: The Other Capabilities Overhang? · 2023-04-04T20:06:47.040Z · LW · GW

Well stated. I would go even further: the only short-timeline scenario I can imagine involves some unholy combination of recursive LLM calls, hardcoded functions or non-LLM ML stuff, and API calls. There would probably be space to align such a thing. (Sort of. If we start thinking about it in advance.)

Comment by jacopo on Why no major LLMs with memory? · 2023-03-28T19:42:21.760Z · LW · GW

Isn't that the point of the original transformer paper? I have not actually read it, just going by summaries read here and there.

If I don't misremember, RNNs should be especially difficult to train in parallel.

Comment by jacopo on ChatGPT (and now GPT4) is very easily distracted from its rules · 2023-03-16T08:06:47.727Z · LW · GW

That seems reasonable, but it will probably change a number of correct answers (to tricky questions) as well if asked whether it's certain. One should verify that the number of incorrect answers fixed is significantly larger than the number of errors introduced.

But it might be difficult to devise a set of equally difficult questions for which the first result is different. Maybe choose questions where different instances give different answers, and see if asking for a double check changes the wrong answers but not the correct ones?
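The bookkeeping for this check is simple enough to sketch (toy data, invented for illustration): for each question, record whether the model was correct before and after the "are you sure?" prompt, then compare how many errors were fixed against how many correct answers were broken.

```python
# Toy accounting of the double-check experiment: True = correct answer.
def double_check_effect(before: list, after: list) -> dict:
    """Count wrong answers fixed vs correct answers broken by the follow-up prompt."""
    fixed = sum(1 for b, a in zip(before, after) if not b and a)
    broken = sum(1 for b, a in zip(before, after) if b and not a)
    return {"fixed": fixed, "broken": broken, "net_gain": fixed - broken}

before = [False, False, True, True, True]
after  = [True,  False, True, False, True]
print(double_check_effect(before, after))  # {'fixed': 1, 'broken': 1, 'net_gain': 0}
```

With real data one would also want a significance test on the fixed-vs-broken counts (McNemar's test is the standard choice for paired before/after correctness).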

Comment by jacopo on The epistemic virtue of scope matching · 2023-03-15T21:03:20.750Z · LW · GW

Good post, thank you for it. Linking this will save me a lot of time when commenting...

However, I think that the banking case is not a good application. When one bank fails, it makes it much more likely that other banks will fail immediately after. So it is perfectly plausible that two banks are weak for unrelated reasons, and that when one fails, this pushes the other under as well.

The second one does not even have to be that weak. The twentieth could be perfectly healthy and still fail in the panic (it's a full-blown financial crisis at this point!)

Comment by jacopo on What I mean by "alignment is in large part about making cognition aimable at all" · 2023-01-30T17:24:07.970Z · LW · GW

It's not clear here, but if you read the linked post it's spelled out (the two are complementary, really). The thesis is that it's easy to make a narrow AI that knows only about chess, but very hard to make an AGI that knows the world and can operate in a variety of situations, yet only cares about chess in a consistent way.

I think this is correct at least with current AI paradigms, and it has both some reassuring and some depressing implications.

Comment by jacopo on How to slow down scientific progress, according to Leo Szilard · 2023-01-06T17:47:41.822Z · LW · GW

I always thought Hall's point about nanotech was trivially false. Nanotech research as he envisioned it died out in the whole world, but he explains it by US-specific factors. Why didn't research continue elsewhere? Plus, other fields that got large funding in Europe or Japan are alive and thriving. How come?

That doesn't mean that a government program which sets up bad incentives cannot be worse than useless. It can be quite damaging, but it cannot kill a technologically promising research field worldwide for twenty years.

Comment by jacopo on How to slow down scientific progress, according to Leo Szilard · 2023-01-06T17:26:18.878Z · LW · GW

The point about encouraging safe over innovative research is spot on, though. Although the main culprits are not granting agencies but the tying of researcher careers to the number of peer-reviewed papers, imo. The main problem with the granting system is the amount of time wasted writing grant applications.

Comment by jacopo on How to slow down scientific progress, according to Leo Szilard · 2023-01-06T17:19:09.428Z · LW · GW

That was quite different though (spoiler alert)

A benevolent conspiracy to hide a dangerous scientific discovery by lying about the state of the art and denying resources to anyone whose research might uncover the lie. Ultimately failing because apparently unrelated advances made rediscovering the true result too easy.

I always saw it as a reply to the idea that physicists could have hidden the possibility of an atomic bomb for more than a few years.

Comment by jacopo on Notice when you stop reading right before you understand · 2022-12-22T08:26:09.539Z · LW · GW

The example in the beginning is a perfect retelling of my interaction with transformers too :D

However, a word of caution: sometimes the efficient thing is actually to skim and move on. If you spend the effort to actually understand a topic which is difficult but limited in scope, but then don't interact with it for a year or two, what you remember is just the high-level verbal summary (the same as if you had stopped at the first step). For example, I have understood and forgotten MOSFET transistors at least three times in my life, and each time it took more or less the same effort. If I had to explain them now, I would retreat to a single shallow-level sentence.

Comment by jacopo on Adversarial Priors: Not Paying People to Lie to You · 2022-11-12T11:51:09.083Z · LW · GW

They commented without reading the post I guess...

Comment by jacopo on What will the scaled up GATO look like? (Updated with questions) · 2022-10-26T18:45:52.034Z · LW · GW

I think having an opinion on this requires much more technical knowledge than GPT4 or DALLE 3. I for one don't know what to expect. But I upvoted the post, because it's an interesting question.

Comment by jacopo on Naive Hypotheses on AI Alignment · 2022-07-04T11:23:53.526Z · LW · GW

I agree with you actually. My point is that in fact you are implicitly discounting EY's pessimism - for example, he didn't release a timeline, but he often said "my timeline is way shorter than that" with respect to 30-year ones, and I think 20-year ones as well. The way I read him, he thinks we personally are going to die from AGI, and our grandkids will never be born, with 90+% probability, and that the only chances to avoid it are either that someone already had a plan three years ago which has been implemented in secret and will come to fruition next year, or that some large out-of-context event happens (say, nuclear or biological war brings us back to the stone age).

My no-more-informed-than-yours opinion is that he's wrong on several points, but correct on others. From this I deduce that the risk of very bad outcomes is real and not negligible, but the situation is not as desperate, and there are probably actions that will improve our outlook significantly. Note that in the framework "either EY is right or he's wrong and there's nothing to worry about" there's no useful action, only hope that he's wrong, because if he's right we're screwed anyway.

Implicitly, this is your world model as well, from what you say. Discussing this may then look like nitpicking, but whether Yudkowsky or Ngo or Christiano is correct about possible scenarios changes a lot about which actions are plausibly helpful. Should we look for something that has a good chance of helping in an "easier" scenario, rather than concentrate efforts on finding solutions that work in the hardest scenario, given that the chance of finding one is negligible? Or would that be like looking for the keys under the streetlight?

Comment by jacopo on Naive Hypotheses on AI Alignment · 2022-07-03T15:43:00.656Z · LW · GW

I like the idea! Just a minor issue with the premise:

"Either I’d find out he’s wrong, and there is no problem. Or he’s right, and I need to reevaluate my life priorities."

There is a wide range of opinions, and EY's is one of the most pessimistic. It may be the case that he's wrong on several points, and we are way less doomed than he thinks, but that the problem is still there, and a big one as well.

(In fact, if EY is correct we might as well ignore the problem, as we are doomed anyway. I know this is not what he thinks, but it's the conclusion I would draw from his predictions.)

Comment by jacopo on An AI defense-offense symmetry thesis · 2022-06-21T09:11:00.356Z · LW · GW

I think that you need to distinguish two different goals:

  • the very ambitious goal of eliminating any risk of a misaligned AI doing any significant damage. If that is even possible, it would require an aligned AI with much stronger capabilities than the misaligned one (or many aligned AIs whose combined capabilities are not easily matched)
  • the more limited goal of reducing extinction risk from AGI to a low enough level (say, comparable to asteroid risk or natural pathogen risk). This might be manageable with the help of lesser AIs, depending on the time to prepare

Comment by jacopo on Parliaments without the Parties · 2022-06-21T08:07:33.243Z · LW · GW

Addendum: if you want to bring legislation more in line with voters' preferences issue by issue, avoiding the distortion from coalition building, Swiss-style referenda seem to work to an acceptable degree http://www.lesswrong.com/posts/x6hpkYyzMG6Bf8T3W/swiss-political-system-more-than-you-ever-wanted-to-know-i

Comment by jacopo on Parliaments without the Parties · 2022-06-21T08:02:38.994Z · LW · GW

The biggest obstacle to your idea is, I think, the executive. In parliamentary systems the government answers to the parliament and needs MPs' support to continue - indeed, the Israeli maneuvering that you cite is related to making the government collapse, not to political parties. So as a first thing, you need a presidential system. But even then, MPs would probably organize as for or against the president - I imagine the president's role in drafting and proposing legislation would be even larger than in the present-day US, as the coordination of MPs via the parties would be missing.

The second factor is the actual parties, i.e. the organizations of people who want to be politically active (in the current system these people select, support, control and in a few cases eventually become MPs, but that is only part of what they do). A lot of this activity will always be there, and it is important - what makes the party is its members and some political ideology; the MPs are not in principle needed. You want to separate these orgs from what happens in parliament, but it's not clear that's possible - many candidates are always internal, people who have participated in party activity for years before running and who continue to discuss their views with other members on a regular basis.

Personally, I think if we eliminated the parties we would probably be worse off, because they would be replaced by worse (less transparent) coalition building. But I would be curious to see you flesh out your ideas!

Comment by jacopo on Contra EY: Can AGI destroy us without trial & error? · 2022-06-14T17:16:39.181Z · LW · GW

You are correct (QM-based simulation of materials is what I do). The caveat is that exact simulations are so slow as to be impossible; that would not be the case with quantum computing, I think. Fortunately, we have different levels of approximation for different purposes that work quite well. And you can use QM results to fit faster atomistic potentials.

Comment by jacopo on Who models the models that model models? An exploration of GPT-3's in-context model fitting ability · 2022-06-09T17:07:17.396Z · LW · GW

Note that there could still be some prior on some functions being more probable, or some more complex cases being plainly impossible to fit because there's no way to get there from the meta-model that is the trained NN.

Comment by jacopo on Who models the models that model models? An exploration of GPT-3's in-context model fitting ability · 2022-06-09T16:55:33.563Z · LW · GW

I am left wondering whether, when GPT-3 does few-shot arithmetic, it is actually fitting a linear model on the examples to predict the next token. I.e. the GPT-3 weights do not "know" arithmetic, but they know how to fit, and that's why they need a few examples before they can tell you the answer to 25+17: they need to know what function of 25 and 17 to return.

It is not that crazy given my understanding of what a transformer does, which is in some sense returning a function of the most recent input that depends on earlier inputs. Or am I confusing them with a different NN design?
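To make the hypothesis concrete (this is a toy model of the idea, not a claim about GPT-3's internals): a plain least-squares fit over the few-shot pairs recovers "a + b" without ever encoding arithmetic, then applies it to the new query.

```python
import numpy as np

# Few-shot examples: (a, b) -> answer. The fitter never "knows" addition;
# it just finds the linear function of (a, b) consistent with the examples.
examples = np.array([[3, 4], [10, 2], [7, 8]], dtype=float)
answers = np.array([7, 12, 15], dtype=float)

coeffs, *_ = np.linalg.lstsq(examples, answers, rcond=None)  # -> ~[1, 1]
prediction = np.array([25, 17]) @ coeffs
print(round(prediction))  # 42
```

Of course this is only an analogy; whether in-context learning in transformers actually amounts to something like implicit regression is an empirical question.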

Comment by jacopo on A descriptive, not prescriptive, overview of current AI Alignment Research · 2022-06-09T16:10:32.907Z · LW · GW

Ahh sorry! Going back to read it, it was pretty clear from the text. I was tricked by the figure where the embedding is presented first. Again, good job! :)

Comment by jacopo on A descriptive, not prescriptive, overview of current AI Alignment Research · 2022-06-09T10:07:15.977Z · LW · GW

Cool work!

Can I ask a couple of questions about the DR+clustering approach? 

If I understand correctly, you do the clustering in a 2D space obtained with UMAP (ignore this if I am wrong). Are you sure you are not losing important information with such a low dimension? I say this because you show that one dimension is strongly correlated with style (academic vs forum/blog) and the second may be somewhat correlated with time. I remember an argument for using n-1 dimensions when looking for n clusters, although that was probably for linear DR techniques and might not apply to UMAP. But it would be interesting to check whether using a higher n_components (3 to 5) results in the same clustering or generates some new insight.

Another thing you could check is using GMM instead of k-means. My (limited) experience is that if the embedding dimension is low you get better results this way. But, again, I was clustering downstream of linear DR.
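The check suggested above is cheap to run: reduce to several dimensionalities, cluster each with both k-means and a GMM, and compare the labelings. A minimal sketch on synthetic data (PCA stands in for UMAP here so it stays dependency-free; swap in `umap.UMAP` for the real thing):

```python
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn.mixture import GaussianMixture
from sklearn.metrics import adjusted_rand_score

# Synthetic stand-in for the document embeddings: 4 clusters in 20 dimensions.
X, y = make_blobs(n_samples=500, centers=4, n_features=20, random_state=0)

for n_dim in (2, 3, 5):
    Z = PCA(n_components=n_dim, random_state=0).fit_transform(X)
    km = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(Z)
    gmm = GaussianMixture(n_components=4, random_state=0).fit_predict(Z)
    # Adjusted Rand index vs ground truth: 1.0 = identical partition.
    print(n_dim, adjusted_rand_score(y, km), adjusted_rand_score(y, gmm))
```

On real embeddings there is no ground truth, so one would compare the partitions against each other (e.g. ARI between the 2D and 5D clusterings) to see whether the extra dimensions change the result.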

Comment by jacopo on Ukraine Post #10: Next Phase · 2022-04-12T16:50:19.747Z · LW · GW

I agree. In fact, you could say that Mélenchon and Le Pen are closer to each other on economic and possibly foreign policy, and very far from Macron, so it is not unreasonable that some votes would transfer from one to the other. Huge differences on everything else, of course (immigration, but also law and order, education, culture, ...).

I disagree on Hollande and the center-left generally. Hollande had to juggle a very broad coalition, as you say. He ended up hated by everyone because his way of handling it was not finding a middle ground, but campaigning as Mélenchon lite and then attempting to govern as Macron lite. He then tried to dump the responsibility for the turnaround on financial markets and the EU. After this, any possibility of a center-left coalition with an actual center-left agenda was dead and buried...

Comment by jacopo on Ways to invest your wealth if you believe in a high-variance future? · 2022-03-23T17:22:41.629Z · LW · GW

I think if you look up antifragile investment you will find a lot of discussion of exactly this problem. As far as I understand, the idea is that many investments have a limited downside (at most, you lose what you put in) but a potentially limitless upside in low-probability scenarios. You can then make many small investments of this kind, so that when one pays off, it more than covers the losses on the rest. Taking your example of the nuclear bunker: if you could build one with 1% of your wealth or less, in this frame of mind you probably should. Or less dramatically, invest a bit into any technology that could reach world-changing level even if it looks unlikely, plus buy a house as well as a cabin far away from any city. Learn a little bit of skills that could be useless or incredibly useful, depending on the future.

The answers you are suggesting are more related to safe/robust investment (I'm uncertain of the correct term), i.e. investment which should be useful whatever happens but has a capped upside. Both maintaining good health and buying a house count in this category. I'm no more qualified to give specific advice here than anyone else, but if you ask yourself "what would a standard prudent person do?" you're basically there.

Robust investment is what most people do in practice, and that's probably a good thing, because I think antifragile investment is too easy to screw up. But under some assumptions about the nature of the high variance, which I think match the kind of future scenarios you listed, well-thought-out antifragile strategies could be much better. At the very least, you can copy the idea of not putting all your eggs in one basket, even if you don't go all the way to making many small gambles in the expectation that at least one will pay off.
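The arithmetic behind the many-small-convex-bets idea can be made explicit with a toy Monte Carlo (all numbers invented for illustration): 50 bets of 1 unit each, each one lost with 95% probability but paying 100x with 5% probability.

```python
import random

def portfolio_return(n_bets=50, p_win=0.05, payoff=100, rng=None):
    """Net return of n_bets unit bets: each pays `payoff` with prob p_win, else 0."""
    rng = rng or random.Random(0)
    wins = sum(payoff for _ in range(n_bets) if rng.random() < p_win)
    return wins - n_bets  # subtract the stake on every bet

# Average over many simulated portfolios; expected value is 50*(0.05*100) - 50 = 200.
results = [portfolio_return(rng=random.Random(seed)) for seed in range(1000)]
avg = sum(results) / len(results)
print(avg)
```

The point the toy makes: each individual bet loses most of the time, yet the portfolio has a solidly positive expectation, which is only worth anything if the payoff estimates are honest; this is exactly the "easy to screw up" part.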

Comment by jacopo on Why Rome? · 2022-03-15T09:27:35.361Z · LW · GW

Interesting post! I like the picture you draw. But you should consider the possibility that it was not a Rome-unique factor, but the intersection of multiple things, each of which was true for several ancient states, but all of which were true only for Rome. In particular, I have the impression that the subjects of the Persian empire were pretty happy with it and flourished under its rule. To be clear, it was nothing like citizenship, because Persia was a kingdom and not a republican city-state. But between the investment model and the pillaging model that you mention, it was closer to the first. At some point, the Jews thought Cyrus was the Messiah! And in some ways it's no surprise that the second-greatest empire of western antiquity had some things in common with the first.

Comment by jacopo on Whence the determinant? · 2022-03-15T08:42:57.101Z · LW · GW

I like to think of it this way: the determinant is the product of the eigenvalues of a matrix, which you can conveniently compute without reducing the matrix to diagonal form. All the interesting properties of the determinant are very easy (and often trivial!) to show for the product of the eigenvalues.

More in the spirit of your post, I don't remember how hard it is to show that the determinant is invariant under unitary transformations, but not too hard I think. It's not the only invariant, of course (the trace is one as well; I don't remember if there are others). But you could definitely start from the product-of-eigenvalues idea and make it invariant to get the formula for det.
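Both claims are easy to check numerically. A quick sketch verifying det(A) = product of eigenvalues, and that det and trace survive a similarity transform P A P^-1 (which covers the unitary case as a special instance):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
P = rng.standard_normal((4, 4))  # generic random matrix, invertible almost surely

det_a = np.linalg.det(A)
prod_eig = np.prod(np.linalg.eigvals(A))  # complex dtype; imaginary part ~0 for real A
B = P @ A @ np.linalg.inv(P)              # similar matrix: same spectrum as A

print(det_a, prod_eig.real, np.linalg.det(B))   # all three agree
print(np.trace(A), np.trace(B))                 # trace is invariant too
```

Since similar matrices share eigenvalues, any symmetric function of the eigenvalues (det, trace, the other characteristic-polynomial coefficients) is automatically basis-invariant.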

Comment by jacopo on Conspiracy-proof archeology · 2022-03-01T17:25:02.750Z · LW · GW

Interesting read, but I don't think the initial example and what follows are very much connected. The shift of opinion about WW2 has presumably happened without fabricated evidence or misinformation about factual events. The USSR and the USA played very different roles in the defeat of Germany, so asking "which contributed the most" is sensitive to shifting narratives and the highlighting of different events.

Similar questions from the more distant past: who was to blame for WW1? Was Napoleon spreading modernity and equality in Europe, or ruthlessly subjugating his neighbors? Were the Middle Ages a dark age? Was the western Roman empire brought down by barbarians, or mainly by other factors? For all of these you can have different answers without fabricated evidence, just by pushing some facts forward and neglecting others.

That's not to say that having tamper-proof historical sources is not important, just that it's not sufficient. And personally, I think most manipulation happens at the broader narrative level (in the past and in the present).

Comment by jacopo on Transferring credence without transferring evidence? · 2022-02-04T20:18:49.198Z · LW · GW

Or more generally, X sends a costly signal of his belief in P. If X is the state (as in example 2) a bet is probably impractical, but doing anything that would be costly if P is false should work. For this, though, it makes a big difference in what sense Y does not trust X. If Y thinks X may deceive, costly signals are good. If Y thinks X is stupid or irrational or similar, showing belief in P is useless.

Comment by jacopo on Counting Lightning · 2021-12-08T17:46:28.109Z · LW · GW

I mostly agree with the other commenters that the story does not show the qualitative changes we may expect to see from autonomous weapons. But I found it a very good short story nevertheless, and believable as well. I think it could serve well if broadly diffused, by getting someone to think about the topic for the first time before going into scenarios farther away from what they are used to.

Comment by jacopo on Why do you need the story? · 2021-11-25T10:35:30.559Z · LW · GW

I notice that while a lot of the answer is formal and well-grounded, "stories have the minimum level of internal complexity to explain the complex phenomena we experience" is itself a story :) Personally, I would say that any gear-level model will have gaps in the understanding, and trying to fill these gaps will require extra modeling which also has gaps, and so on forever. My guess is that part of our brain will constantly try to find the answers and fill the holes, like a small child asking "why x? ...and why y?". So if a more practical part of us wants to stop investigating, it plugs the holes with fuzzy stories which sound like understanding. Obviously, this is also a story, so discount it accordingly...

Comment by jacopo on An Unexpected Victory: Container Stacking at the Port of Long Beach · 2021-11-01T17:42:40.311Z · LW · GW

I agree it would be very good, and possibly an economic no-brainer. My point is just that what is discussed in the post works only for a political no-brainer, by which I mean something that no one would bother to oppose. To get what you want you need a real political campaign, or a large-scale economic education campaign. Even then it's difficult, imo, unless your proposal fits one of the cases I mention above.

That said, if you are thinking of the US, there is an easy proposal to be made for medicine: making medical school equivalent to a college degree and eliminating the requirement of having already completed college to enter (see https://slatestarcodex.com/2015/06/06/against-tulip-subsidies/, which notes it's done that way in Europe; I'll add it's the same for law school etc.). It's not an earth-shaking reform, but it could work exactly for that reason.

Comment by jacopo on An Unexpected Victory: Container Stacking at the Port of Long Beach · 2021-10-29T16:29:09.011Z · LW · GW

The problem is, licensed people have made an investment and expect to repay it by reaping profits from the protected market. Some have borrowed money to get in and may have to file for personal bankruptcy. So they will oppose the reform by any means at their disposal, for which I don't blame them (even if it is obviously against the general interest).

Such a reform would be doable in the following cases: (1) it compensates the losers in some way; (2) it's so gradual that currently licensed people will mostly retire before it's fully implemented; (3) it is decided by a political faction that has no interest in the votes of the licensed and no sympathy for their concerns, while the licensed have no "hard power" to block the reform (and this third case will never be fulfilled for a blanket effort on all licenses: in practice you get a party punching down on the least powerful people in the opponent's coalition).

As you see, it's a whole other order of complication with respect to the case presented in the post...

Comment by jacopo on Book Review: Rise and Fall of the Great Powers · 2021-10-14T09:11:09.604Z · LW · GW

On Prussia:

  • they managed to have almost the same GNP as France while keeping larger military spending; it's not surprising that they won the war
  • of course, it may be surprising that they managed to get there. Given the model, you would expect that they sacrificed internal stability, but in fact it was France that was the most unstable country in that period! (revolution, Napoleon, restoration, Second Republic, Second Empire)
  • you could say the political instability may have really hindered France, forcing higher consumption spending, but how come this was true only post-Napoleon?
  • back to Prussia: it is the case that they never needed to maintain superiority over Austria and France over a long period. Nearly everyone in Germany wanted to unify; the question was how/under whom. The Prussians in 1870 needed a few quick victories to convince everyone that they were the only choice. For this reason they could focus on the short term. After that, they absorbed the rest of Germany, which had focused on its economy. Compare the parallel unification of Italy under Piedmont/Sardinia, a much weaker power that played a similar strategy.

It's not a coincidence that Hegel came up with the Zeitgeist idea exactly in 1800s Germany...

My overall take is that this is a useful starting point, and that structural factors are often underestimated, but the model is too simplified to actually make predictions with any confidence.

Comment by jacopo on Covid 9/30: People Respond to Incentives · 2021-10-01T16:11:48.428Z · LW · GW

On effectiveness and public health studies: the thread quoted says multiple times "in the US". I would be curious to know if this kind of thing is done more elsewhere, or if it's an implicit assumption that it could only be done in the US anyway (which could very well be true for all I know; drug profits are way higher in the US, after all).

Does anybody know?

Comment by jacopo on Schools probably do do something · 2021-09-27T08:08:06.363Z · LW · GW

My feeling is that many of the people who did not benefit tend to "generalise from one example" and assume that's true for most kids. Actually, I (despite being generally pro-schooling) would say something stronger than you: there is a minority of people who are actually harmed by school compared to a reasonable counterfactual (e.g. home-schooling, for some). Plus, many kids can easily see where the system is failing them, less easily where it's working.

Comment by jacopo on Book Review: Who We Are and How We Got Here · 2021-09-24T12:43:31.000Z · LW · GW

Thanks for the review!

Regarding the "countering racism" doubts, I can see how the results should disprove at least some racist worldviews. 

I think that one interpretation of human history among racists is the following: the population splits into clusters, these clusters diverge into different "races", eventually one emerges as "the best" and out-competes or replaces all the others, before splitting again. Historically, this view was used to justify aggressive expansionism, opposition to intermarriage, and opposition to any policy that could slow this process by helping races which were seen as lesser.

I think what he wants to say is that this picture is not supported by the genetic data, which instead show population clusters that split and merge and split again along different lines, on a fairly fast timescale and without one population replacing another (except arguably the Neanderthals, and even then not completely). In other words, there is no Darwinian selection at the racial level, and there almost never has been.

Comment by jacopo on What fraction of breakthrough COVID cases are attributable to low antibody count? · 2021-08-23T14:13:57.549Z · LW · GW

According to my understanding (which comes from popularized sources; I am not a doctor or a biologist), antibody counts are not the main driver of long-term immunity. Lasting immunity is given by memory T and B cells, which can quickly escalate the immune response in case of a new infection, including producing new antibodies. So while a high antibody count means you are well protected, a low count some months after the vaccine could mean that protection has decreased, but in almost all cases you will be protected for much longer. Note that a low antibody count immediately after the vaccine would be different, but I don't know whether this even happens in people with a healthy immune system. Unfortunately there is no easy way to test how many memory T/B cells you have against a specific virus, without even going into how responsive they are.

So I think testing for antibodies before giving third doses would still result in giving the booster to many more people than need it. Depending on how many doses you save, and on the cost of testing versus vaccinating, it may still be worth it. But it's probably more practical at this point to give the booster to the people we expect to have developed fewer memory cells, in other words the immunocompromised and perhaps the elderly. For the others, I would simply wait for more data and ship the extra doses to poor countries.

Comment by jacopo on Rage Against The MOOChine · 2021-08-07T21:39:02.609Z · LW · GW

For info, you can find most of the exercises in Python (done by someone other than Ng) here. They are still not that useful: I watched the course videos a couple of years ago and stopped doing the exercises very quickly.

I agree with you on both the praise and the complaints about the course. Besides it being very dated, I think the main problem was that Ng was neither clear nor consistent about the goal. The videos are mostly an informal introduction to a range of machine learning techniques, plus some in-depth discussion of broadly useful concepts and of common pitfalls for self-taught ML users. I found it delivered very well on that. But the exercises are mostly very simple implementations, which would better fit a more formal course. Using an already-implemented package to understand overfitting, regularization, etc. hands-on would have been much more fitting to the course (no pun intended). At the same time, Ng kept repeating things like "at the end of the course you will know more than most ML engineers", which was a very transparent lie but gave the impression that the course aimed to impart a working knowledge of ML, which was definitely not the case.
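To illustrate the kind of hands-on exercise I mean, here is a minimal sketch of overfitting versus regularization using only NumPy (the dataset, polynomial degree, and regularization strength are all made up for illustration):

```python
import numpy as np

# Toy setup (made up for illustration): noisy samples of a sine,
# fitted with a high-degree polynomial that can easily overfit.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 15)
y = np.sin(2 * np.pi * x) + rng.normal(0.0, 0.2, x.size)

X = np.vander(x, 10)  # degree-9 polynomial features

def ridge_fit(X, y, lam):
    """Closed-form ridge regression: w = (X^T X + lam*I)^(-1) X^T y."""
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

w_plain = np.linalg.lstsq(X, y, rcond=None)[0]  # no regularization
w_ridge = ridge_fit(X, y, 1e-3)                 # regularized

def train_error(w):
    return np.mean((X @ w - y) ** 2)

# Regularization shrinks the weights (less wild wiggling between the
# training points) at the cost of a slightly higher training error.
print("weight norms:", np.linalg.norm(w_plain), np.linalg.norm(w_ridge))
print("train errors:", train_error(w_plain), train_error(w_ridge))
```

Playing with the degree and the regularization strength, and plotting the fitted curves, teaches the overfitting/regularization trade-off far more directly than re-implementing gradient descent from scratch.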

I don't know how common this problem is with MOOCs. It seems easily fixable, but the incentives might be against it happening (being unclear about the course's goal, like aiming for students with minimal background, can help attract more people). Like johnswentworth, I had more luck with open courseware, with the caveat that sometimes very good courses build on other ones which are not available or have insufficient online material.

Comment by jacopo on The Myth of the Myth of the Lone Genius · 2021-08-04T20:54:57.649Z · LW · GW

On this I agree with you. But the Darwin issue is a bit of a special case: the topic was politically and religiously charged, so it was important that a very respected figure spearheaded the idea. Wallace himself understood this, I think; he sent his research to Darwin instead of publishing it directly. But this is mostly independent of Darwin's scientific genius (only mostly, because he gained that status with his previous work on less controversial topics).

On the whole, I agree with jbash and Gerald below: "geniuses" in the sense of very smart scientists surely exist, and all else equal they speed up scientific advancement. But they are not that far above ordinarily smart people. Lack of geniuses is rarely the main bottleneck, so a hypothetical science with fewer geniuses but more productive average-smart researchers would probably advance faster, if less glamorously.

You could draw a parallel between geniuses in science and heroes in war: heroic soldiers are good to have, but in the end wars are won by the side with more resources and better strategies. This does not stop warring nations from making a big deal of heroic exploits, but that is done mostly to improve morale.