The other day, during an after-symposium discussion on detecting BS AI/ML papers, one of my colleagues suggested doing a text search for “random split” as a good test.
A lot of what you write is to the point and very valid. However, I think you are missing part of the story. Let’s start with
“Unlike drug development, where you’re trying to precisely hit some key molecular mechanism, assessing toxicity almost feels…brutish in nature”
I assume you don’t really believe this. Toxicity is often exactly about precisely hitting some key molecular mechanism. A mechanism that you may have no idea your chemistry is going to hit beforehand. A mechanism, moreover, that you cannot use a straightforward ML model to find, because your chemistry is not in any training set that an ML model could access. It is very easy to underestimate the vastness of drug-like chemical space, and it is generally the case that any given biological target molecule (desired or undesired) can be inhibited or otherwise interfered with by a wide range of different chemical moieties (thus keeping medicinal chemists very well employed, and patent lawyers busy). There is unlikely to be toxicological data on any of them unless the target is quite old and there is publicly available data on some clinical candidates.
We look to AlphaFold as the great success for ML in the biological chemistry field, and so we should, but we need to remember that AlphaFold is working on an extremely small portion of chemical space, not much more than that covered by the 20 natural amino acids. So AlphaFold’s predictions can be comfortably within distribution of what is already established by structural biology. ML models for toxicology, on the other hand, are very frequently predicting out of distribution.
In point of fact the most promising routes to avoiding toxicity reside in models that are wholly or partially physics-based. If we are targeting a particular kinase (say) we can create models (using AlphaFold if necessary) of all the most important kinases we don’t want to hit and, using physics-based modelling, predict whether we could get unwanted activity against any of these targets. We still have the problem of hitting unrelated protein targets but even here we could, in principle, screen for similarities in binding cavities over a wide range of off-targets and use physics-based modelling to assess cases where there is a close enough match.
Needless to say, that requires an awful lot of compute, and no-one is really doing this at scale yet. It is a very difficult problem.
Yes, I agree, I think it is pretty unlikely. But not completely impossible. As I said, it should be pretty easy to find them, if they are in the lysate, via HPLC. Brain-penetrant cyclic peptides should on the whole be significantly less polar than acyclic polypeptides of similar mass.
An excellent analysis and I’m almost sure your mistrust in the pharmaceutical efficacy of Cerebrolysin is well founded. However, having some experience working in the field of brain-penetrant drugs, I can comment that your restrictions on molecular weight and properties are too conservative. Small molecules of >550 Da are capable of crossing the blood-brain barrier if very well tailored. Also, small cyclic peptides can hide their polar backbones within buried intramolecular hydrogen-bond networks and become membrane permeable. The bicyclic peptide SFTI-1, a 14-mer, has been shown to be brain penetrant in rats in what looks to me like a reasonable study. So, playing devil’s advocate, there is a hypothesis that the lysis procedure generates certain cyclic peptides of 500-1000 Da that could penetrate the BBB and have a biological effect.
I don’t believe this hypothesis, but it does need to be discounted. Such cyclic peptides should be straightforward to detect by HPLC/MS, I’d have thought, through their significantly less polar nature. Has anyone published work looking for these in Cerebrolysin?
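For what it’s worth, the kind of quick property triage I have in mind is trivial to run with RDKit. A minimal sketch (my own illustration; the thresholds in the comments are rough rules of thumb, not figures from the SFTI-1 study, and caffeine is just a stand-in for a real candidate):

```python
# Crude property profile of the sort one might compute before worrying about
# BBB penetration. Thresholds in comments are rules of thumb, not hard limits.
from rdkit import Chem
from rdkit.Chem import Descriptors, rdMolDescriptors

def crude_cns_profile(smiles: str) -> dict:
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        raise ValueError("could not parse SMILES")
    return {
        "MW": Descriptors.MolWt(mol),              # much above ~500-550 Da gets difficult
        "TPSA": Descriptors.TPSA(mol),             # high polar surface area hurts permeability
        "HBD": rdMolDescriptors.CalcNumHBD(mol),   # exposed H-bond donors hurt permeability
        "cLogP": Descriptors.MolLogP(mol),         # crude lipophilicity estimate
    }

# Stand-in example (caffeine); the interesting case would be candidate cyclic peptides.
print(crude_cns_profile("Cn1cnc2c1c(=O)n(C)c(=O)n2C"))
```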
There is an additional important point that needs to be made. AlphaFold3 is using predominantly “positive” data. By this I mean the training data encapsulates considerable knowledge of favourable atom-atom or group-group interactions, and relative propensities can be deduced. But “negative” data, in other words repulsive electrostatic or van der Waals interactions, are only encoded by absence, because these are naturally not often found in stable biochemical systems. There are no relative propensities available for these interactions. So AF3 can be expected not to perform as well when applied to real-world drug design problems, where such interactions have to be taken into account and balanced against each other and against favourable interactions. Again, this issue can be mitigated by creating hybrid physics-compliant models.
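To make the “negative data” point concrete, consider the textbook Lennard-Jones form of the van der Waals interaction (added here purely as an illustration):

$$V_{\mathrm{LJ}}(r) = 4\varepsilon\left[\left(\frac{\sigma}{r}\right)^{12} - \left(\frac{\sigma}{r}\right)^{6}\right]$$

The $r^{-12}$ term is the steep short-range repulsion and the $r^{-6}$ term the attractive dispersion. Crystallographic training data is dominated by geometries sitting near the energy minimum, so the shape of the repulsive wall is essentially invisible to a purely data-driven model; a physics term supplies it for free.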
It is also worth noting that ligand docking is not generally considered a high-accuracy technique and, these days, is often used as a first-pass screen of large molecular databases. The hits from docking are then further assessed using an accurate physics-based method such as Free Energy Perturbation.
I have similar concerns regarding the ligand sets used to test AlphaFold3. I’ve had a cursory look at them and it seemed to me there were a lot of phosphate-containing molecules, a fair few sugars, and also some biochemical co-factors. I haven’t done a detailed analysis, so some caveats. But if true, there are two points here. Firstly, there will be a lot of excellent crystallographic training material available on these essentially biochemical entities, so AlphaFold3 is more likely to get these particular ones right. Secondly, these are not drug-like molecules, and docking programs are generally parameterized to dock drug-like molecules correctly, so they are likely to have a lower success rate on these structures than on drug-like molecules.
I think a more in-depth analysis of the performance of AF3 on the validation data is required, as the OP suggests. The problem here is that biochemical chemical space, which is very well represented by experimental 3D structure, is much smaller than potential drug-like chemical space, which is, comparatively speaking, poorly represented by experimental 3D structure. So inevitably AF3 will often be operating beyond its zone of applicability for any new drug series. There are ways of getting round this data restriction, including creating physics-compliant hybrid models (and thereby avoiding clashing atoms). I’d be very surprised if such approaches are not currently being pursued.
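As a first pass, one could quantify how drug-like the test ligands actually are with RDKit, e.g. via QED scores and crude substructure flags. A minimal sketch (my illustration; the SMILES below are placeholders, not the actual AF3 test ligands):

```python
# Rough drug-likeness profiling of a ligand set: QED score plus a crude
# phosphate flag and a very rough proxy for sugar-like ring content.
from rdkit import Chem
from rdkit.Chem import QED, rdMolDescriptors

phosphate = Chem.MolFromSmarts("P(=O)(O)O")   # crude flag for phosphate-containing ligands

def profile_ligands(smiles_list):
    rows = []
    for smi in smiles_list:
        mol = Chem.MolFromSmiles(smi)
        if mol is None:
            continue
        rows.append({
            "smiles": smi,
            "qed": QED.qed(mol),                             # ~0 (not drug-like) to ~1 (drug-like)
            "has_phosphate": mol.HasSubstructMatch(phosphate),
            "saturated_rings": rdMolDescriptors.CalcNumSaturatedRings(mol),  # very rough sugar proxy
        })
    return rows

# Placeholder input; in practice this would be the SMILES extracted from the AF3 ligand set.
print(profile_ligands(["OC(=O)CP(=O)(O)O", "CC(=O)Oc1ccccc1C(=O)O"]))
```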
So after tearing my hair out trying to generate increasingly complex statistical analyses of scientific data in Excel, my world changed completely when I started using KNIME to process and transform data tables. It is perfect for a non-programmer such as myself, allowing the creation of complex yet easily broken-down workflows that use spreadsheet input and output. Specialist domain tools are easily accessible (e.g. chemical structure handling and access to the RDKit toolkit for my own speciality) and there is a thriving community generating free-to-use functionality. Best of all, it is free to the single desktop user.
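For anyone who does code, the node chains I use most (spreadsheet in, RDKit descriptor column, spreadsheet out) boil down to something like the following in Python; the file and column names here are made up purely for illustration:

```python
# Rough Python equivalent of a simple KNIME node chain:
# CSV reader -> RDKit descriptor calculation -> CSV writer.
import pandas as pd
from rdkit import Chem
from rdkit.Chem import Descriptors

df = pd.read_csv("compounds.csv")   # hypothetical file with a "smiles" column

def mol_weight(smiles):
    mol = Chem.MolFromSmiles(smiles)
    return Descriptors.MolWt(mol) if mol is not None else None

df["mol_weight"] = df["smiles"].apply(mol_weight)
df.to_csv("compounds_with_mw.csv", index=False)
```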
Useful post. I can expand on one point and make a minor correction. Single-particle cryo-EM is indeed a new(ish) powerful method of protein structure elucidation that is starting to make an impact in drug design. It is especially useful when a protein cannot easily be crystallised to allow more straightforward X-ray structure determination. This is usually the case with transmembrane proteins, for example. However, it is actually best if the protein molecules are completely unaligned in any preferred direction, as the simplest application of the refinement software assumes a perfectly random 3D orientation of the many thousands of protein copies imaged on the grid. In practice this is not so easy to achieve, and corrections for unwanted preferred orientation need to be made.
I’m not going to say I don’t share deep disquiet about where AI is taking us, setting aside existential risk. One thing that gives me hope, however, is seeing what has happened in chess. The doom-mongers might have predicted, with the advent of Stockfish and AlphaZero, that human interest in chess would greatly diminish, because, after all, what is the point when the machines are so much better than the world champion (world champion Elo ~2800, Stockfish Elo ~4000)? But this hasn’t happened: chess is thriving, and the games of the best human players are widely streamed and analysed and their brilliancies greatly admired. The machines have actually been of great benefit. They have, for instance, demonstrated that the classical swashbuckling 19th-century style of chess, replete with strategic sacrifices that lead to beautiful attacks, is a valid way to play, because they often play that way themselves. This style of play was for a long period overshadowed by a preference for the more stifling positional chess, the gaining of small advantages. The machines also provide instant feedback in analysis on what is ground truth, whether a particular move is good, bad or neutral. This too has augmented chess players’ enjoyment of the game rather than reduced it.
Maybe we can hope that the same situation will apply in other fields of human endeavour.
I think this is an interesting point of view. The OP is interested in how this concept of checked democracy might work within a corporation. From a position of ignorance, can I ask whether anyone familiar with German corporate governance recognises this mode of democracy within German organisations? I choose Germany because large German companies have historically incorporated significant worker representation within their governance structures and have tended to perform well.
My understanding is that off-label often means that the potential patient is not within the bounds of the cohort of patients included in the approved clinical trials. We don’t usually perform clinical trials on children or pregnant women, for instance. Alternatively, strong scientific evidence is found that a drug works on a disease related to its approved indication. It may well make sense to use drugs off-label where the clinician can be comfortable that the benefits outweigh the possible harms. In other cases, of course, it would be extremely poor medicine. In any case, having statistically significant and validated evidence that a drug actually does something useful is non-negotiable IMO.
It is true that most pharma companies concentrate on indications that supply returns to offset the cost of development. The FDA does have a mechanism for Orphan Drug approval, for rare diseases, where the registration requirements are significantly lowered. According to this site, 41 orphan drug approvals were made in 2023. Whether this mechanism is good enough to allow the promotion of rare-disease drug development in the larger pharmaceutical industry is a good question. I wonder how many of these drugs, or their precursors, originated in academic labs and were then spun out to a start-up or sold on?
Two things that happen in the pharmaceutical industry today despite the FDA:
- Many drug candidates (compounds with IND status sanctioned by the FDA) that more established and careful pharma companies would stay away from are pushed into clinical investigation prematurely by venture-capital-funded biotechs. These have a high rate of failure in the clinic. This is not fraud, by the way; it is usually a combination of hubris, inexperience, and a response to the necessity of rapid returns.
- Marketing wins over clinical efficacy, unless the difference is large. Tagamet was the first drug for stomach ulcers, released in the late ’70s. It was rapidly overtaken by Zantac in the ’80s through aggressive marketing, despite minimal clinical benefit. Today there is a large industry of medical writers, sponsored by the pharmaceutical industry, whose job it is to present and summarise the clinical findings on a new drug in the most favourable way possible without straying into actual falsehood.
The scientists working at the sharp end of drug discovery, who fervently believe that what they do benefits mankind (this is, I believe, a gratifyingly large proportion of them), generally respect the job the FDA does, despite the hoops it forces us to go through. Without the FDA keeping us honest, the medicines market would be swimming with heavily marketed but inadequately tested products of dubious medicinal value. Investors would be less choosy about following respected, well-thought-out science when placing their money. True innovation would actually be stifled, because true innovation in drug discovery only shows its value once you’ve done the hard (and expensive) yards to prove medical benefit over existing treatments. Honest and well-enforced regulation forces us to do the hard yards and take no short cuts.
In 2023, 55 new drugs were approved by the FDA, hardly a sign that innovation is slackening. Without regulation the figure might be ten times higher, but clinicians would be left swimming in a morass of claims and counter-claims without good guidance (currently generally provided by the FDA) on what treatments should be applied in which situations.
Poorly regulated health-orientated companies selling products that have little or no value? Seems unlikely... Oh wait, what about Theranos?
A thought-provoking post. Regarding peer-reviewed science, I can offer the perspective that anonymous peer review is quite often not nice at all. But, having said that, unless a paper is extremely poor, adversarial reviews are rarely needed. A good, critical, constructive review can point out severe problems without raising the hackles of the author(s) unnecessarily, and is more likely to get them dealt with properly than an overly adversarial review. This works so long as the process is private, the reviewer is truly anonymous, and the reviewer has the power to prevent bad work being published, even if it comes from a respected figure in the field. Of these three criteria it is the last that I’d have most doubts about, even in well-edited journals.
I’m not claiming this view to be particularly well informed, but it seems a reasonable hypothesis that the industrial revolution required the development, dispersal and application of new methods of applied mathematics. For this to happen there needed to be an easy-to-use number system with a zero and a decimal point. Use of calculus would seem to be an almost essential mathematical aid as well. Last but not least, there needed to be a sizeable collaborative, communicative and practically minded scientific community who could discuss, criticise and disseminate applied mathematical ideas and apply them in physical experiments. All three of these were extant in Britain in the late 17th century, the last being exemplified by the Royal Society. These, combined with the geologically bestowed gifts of coal and iron ore, set Britain up to be in the best position to initiate the Industrial Revolution.
Now, can a proper historian of science critique this and show how this view is incorrect?
Anecdotal, but in the UK, in 1986, as a just-graduated PhD I bought a 3-bedroom house for less than 4 times my salary. At present, a similar house in a similar location will cost roughly 10 times a starting PhD salary. House ownership for most young people in the UK is becoming a distant and ever-delayed dream.
“Design is much more powerful than evolution since individually useless parts can be developed to create a much more effective whole. Evolution can't flip the retina or reroute the recurrent laryngeal nerve even though those would be easy changes a human engineer could make.”
But directed evolution of a polymeric macromolecule (e.g. repurposing an existing enzyme to process a new substrate) is so much easier, practically speaking, than designing and making a bespoke macromolecule to do the same job. Synthesis and testing of many evolutionary candidates is quick and easy, so many design/make/test cycles can be run quickly. This is what is happening at the forefront of the artificial enzyme field.
So my personal viewpoint (and I could be proved wrong) is that Bing hasn’t the capability to suffer in any meaningful way, but is capable (though not necessarily sentiently capable) of manipulating us into thinking it is suffering.
Whilst it may be that Bing cannot suffer in the human sense, it doesn’t seem obvious to me that more advanced AIs that are still no more than neural nets cannot suffer in a way analogous to humans. Whatever the physiological cause of human suffering, it surely has to translate into a pattern of nerve impulses around an architecture of neurons that has most likely been purposed to give rise to the unpleasant sensation of suffering. That architecture of neurons presumably arose for good evolutionary reasons. The point is that there is no reason an analogous architecture could not be created within an AI, and could then cause suffering similar to human suffering when presented with an appropriate stimulus. The open question is whether such an architecture could possibly arise incidentally, or whether it has to be hardwired in by design. We don’t know enough to answer that, but my money is on the latter.
I think this is a very good point. Evolution has given humans the brain plasticity to create brain connectivity so that a predisposition for morality can be turned into a fully fledged sense of morality. There is, for sure, likely some basic structure in the brain that predisposes us to develop morality, but I’d be of the view that the crucial basic genes that control this structure are, firstly, present in primates and at least some other mammals, and, secondly, that the mutations in these genes required to generate the morally inclined human brain are far fewer than need be represented by 7.5 MB of information.
One thing both the genome and evolution have taught us is that huge complexity of function and purpose can be generated by a relatively small amount of seed information.
A personal anecdote. Many, many moons ago I started my research career at a large multinational organisation in a profitable, steady business. I enjoyed the job, the perks were nice, I did the work and did well in the system. Some years later my group were asked to take a training course run by an external organisation. We were set a scenario: “Imagine your company only has enough money for 6 months. What are you going to do about it?” We, cosseted in our big-company mindset, thought the question hilarious and ludicrous.
Fast forward a number of years: the company closed our site down and I went off and joined a start-up. Very soon we all found ourselves in exactly the scenario depicted in the training exercise. We managed to survive. I’ve worked in small/smallish organisations ever since. There have been ups and downs but on the whole I wouldn’t have changed anything.
This is perhaps slightly tangential, though likely consequential to the Middle Manager Hell the OP describes. The big-company environment made it easy for us to be complacent and comfortable, and hard for us to follow up the high-risk/high-profit ideas that might have made a big difference to the bottom line.
This was a long while ago and since then at least some big companies have tried various initiatives to change this kind of mindset. So perhaps things have changed in some large multinationals. Can anyone else comment?
I’m afraid you’ll have to do more to convince me of the argument that Lavoisierian theory held up the development of chemistry for decades by denying the role of energy. Can you provide some evidence? Until the discovery of the atomic model, chemistry by necessity had to be an empirical science where practitioners discovered phenomena, linked them together, drew parallels, and progressed in that manner. Great progress was made without a deep underlying theory of how chemistry worked. It was well known that some reactions gave out heat and some required heat to proceed, and not much more was needed as regards the role of “energy”. Alloys and dyes and such were all first discovered without much deep understanding of chemical reaction theory.
Once quantum theory came along we understood how chemistry works, and a lot of observations and linkages made sense. But for a long time quantum theory didn’t help as much as you might expect in pushing chemistry in new directions, because the equations were too hard to get any real numbers out of. So, much of chemical research carried on quite happily following well tried and tested paths of empirical research (and still does to quite a large extent). It was only really with the advent of computers that we started to make heavy use of calculation to help drive research.
You make the very good point that the Phlogistonists didn’t deserve to be pilloried, because they had a theory that was self-consistent enough to model the real world as we know it now. But until electrons were actually discovered, it is hard to see how any Phlogistonist could seriously compete with the Lavoisierian point of view. It could scarcely be otherwise.
Interesting example. I think the movie theatre in practice always has value and counts towards wealth, because even if you don’t have the time or inclination to use it, you could in principle sell the house to an appropriate movie buff for more than you could if you didn’t have the theatre, and use the extra money to do more of what you want to do. So the “potential” argument still works. This argument could also be applied to a heck of a lot of other things we might own but have little use for. On that basis, eBay is a great wealth generator!
I see “wealth” not as a collection of desirable things but as a potential or a power. An individual who has some wealth has the potential or power to undertake certain things they would like to do, over and above basic survival. An individual with greater wealth has greater choice of the things they can choose to do. Such things might include eating Michelin 3 star food, or driving a Ferrari along the coast. They also might include a simple afternoon walk in the woods. In the latter case the “wealth“ required to undertake this activity comprises having the leisure time available for the activity, the personal good health that allows for enjoyable walking, clothing of suitable quality for the activity to be pleasurable, and a means of fairly effortlessly getting to the woods in the first place.
It follows that, whilst “wealth” might have a roughly linear relationship to “money”, the amount of surplus money one needs to attain a certain “wealth” will be different for everybody, principally because we all have different ideas of how we might use our wealth, some of which will cost more than others. Additionally, some wealth doesn’t necessarily cost any money to create or to acquire. Consider a coder who makes a compelling game and puts it out as open source. The coder has created “wealth” because they have created the potential for others to undertake something they would like to do, namely, play the game. The coder has used their own time and little else. If the creation of the game was an enjoyable activity for the coder then the wealth has been created at zero cost.
Yes, the lab protocol it actually suggests would likely lead to an explosion and injury to the operator. Mixing sodium metal and a reagent and adding heat does not usually end well unless (and sometimes even if) done under an inert atmosphere (nitrogen or argon). Also, there is no mention of a “work-up” step, which here would usually involve careful quenching with ethanol to remove residual reactive sodium, and then shaking with an aqueous base.
It is rarely wrong to follow what you are passionate about. Go for it. But do think hard before discarding your placement in industry. Obtaining a diverse set of career-relevant experiences early on is valuable. Industrial placements look good on a résumé as well.
I did wonder whether one reason it might be hard to commercialise orexins was because, being peptides, delivery would be difficult.
But, apparently not: nasal spray works just fine…
So the domain I’m most familiar with is early-stage drug discovery in industry. This requires multidisciplinary teams of chemists, computational chemists, biochemists, biologists, crystallographers etc. Chemists tend to be associated with one project at a time and I don’t perceive part-time working to be beneficial there. However, the other disciplines are often associated with multiple projects, so there’s a natural way to halve (say) the workload without reducing efficiency. The part-time scientist should be highly experienced, committed to what they are doing, and have few management responsibilities. If that holds, then my experience is they are at least as productive as a full-time worker, hour for hour.
Very interesting points. But some of them are surely specific to the size, workforce make-up and activities of your organisation. I’d like to put an alternative view on point 14, at least as it applies to an organisation with longer timelines and a more autonomous working regime (so less opportunity for blocking). My experience is that part-time workers can be more productive hour for hour than full-time workers, in the right work domain. A fully committed part-time worker has a ready-made excuse to avoid those meetings that don’t make them productive. They will use their slack time to think about their work, coming up with ideas at leisure and creating an effective plan for their next work period. They can be flexible in their work hours so as to attend the important meetings and one-to-ones and to avoid blocking anyone (especially if they also WFH some of the time, so they can dip into work for an hour on a day they don’t normally work). They can use (e.g. computational) resources more effectively so that they are rarely waiting for lengthy production runs (or calculations, say) to finish. Lastly, they are often less stressed through not being overworked (and hence more effective).
Clearly this will not be true for all work domains. Nevertheless, it has recently been reported in the UK press that an international experiment to test a 4-day (32-hour) work week at 100% salary has resulted in no loss of productivity for many of the companies involved, and many of them are continuing with the scheme.
Adding to my first comment, another basic problem, at least for organic chemical assemblies, is that easily constructed useful engineering shapes such as straight lines (acetylenes, polyenes), planes (graphene) or spherical/ellipsoidal curves (buckminsterfullerene-like structures) are always replete with free electrons. This makes them somewhat reactive in oxidative atmospheres. Everybody looked at the spherical buckminsterfullerene molecule and said “wow, a super-lubricant!” Nope, it is too darn reactive to have a useful lifetime. This is actually rather reassuring in the context of grey goo scenarios.
Excessive reactivity in oxidative atmospheres may perhaps be overcome if we use metal-organic frameworks to create useful engineering shapes (I am no expert on these so don’t know for sure). But much basic research is still required.
It’s my opinion that Drexler simply underestimated the basic scientific problems that yet needed to be solved. The discrete nature of atoms and the limited range of geometries that can be utilised for structures at the nanoscale alone make complex molecular machine design extraordinarily challenging. Drug design is challenging enough and all we usually need to do there is create the right shaped static block to fit the working part of the target protein and stop it functioning (OK, I over-simplify, but a drug is a very long way from a molecular machine). Additionally the simulation tools needed to design molecular machines are only now becoming accurate enough, largely because it is only now that we have cheap enough and fast enough compute power to run them in reasonable real time.
It will happen, in time, but there is still a large amount of basic science to be done first IMO. My best guess is that self-assembling biomimetic molecular machines, based on polypeptides, will be the first off the blocks. New tools such as AlphaFold will play an important role in their design.
I think you make a good point, but I also think fear of being attacked is not a good excuse for failing to be altruistic, at least if the altruism is through financial means. After all, it is easy (and very common) to give anonymously.
That’s not to say anonymous altruistic acts are entirely sacrificial. Usually there is some significant payback in terms of well-being (assuagement of guilt for the good fortune of one’s own relative affluence, for instance).
In Advanced Driving courses a key component was (and may still be; it’s been a while) commentary driving. You sit next to an instructor and listen to them give a commentary on everything they are tracking, for instance other road users, pedestrians, road signs, bends, obstacles, occluders of vision etc., and how these observations affect their decision-making as they drive. Then you practise doing the same, out loud, and, ideally, develop the discipline to continue practising this after the course. I found this was a very effective way of learning from an expert, and I’m sure my driving became safer because of it.
There is the saying “Genius will out”, and it was true for the four individuals you mention. But there are, equally, cases where an enlightened teacher in an unpromising school has recognised genius, perhaps emerging from a lowly background, and helped it flourish when it might otherwise have withered. Gauss comes to mind as one example. In decent schools today I would be pretty hopeful that genius, even if coupled to unconventionality, would be identified and nurtured. Of course not all schools are decent.
I also disagree strongly with that paragraph, at least as it applies to higher mammals subject to consistent, objective and lengthy study. If I read it to include that context (and perhaps I’m mistaken to do so), it appears to be dismissive (trolling, even) of the conclusions of, at the very least, respected animal behaviour researchers such as Lorenz, Goodall and Fossey.
Instead of appealing to “empathy with an animal” as a good guide, I would rather discuss body language. “Body language” is called such for good reason. Before Homo sapiens (or possibly precursor species) developed verbal communication, body language had evolved as a sophisticated communication mechanism. Even today, between humans, it remains a very important, if under-recognised, mode of communication (I recall attending a training course on giving presentations; it was claimed body language accounted for about 50% of the impact of the presentation, the facts presented on the slides only 15%). Body language is clearly identifiable in higher mammals. Even if it is not identical to ours in all, or even many, respects, our close evolutionary connection with higher mammals allows us, in my view, to confidently translate their body language into a consistent picture of their mental state, actually pretty easily, without too much training. We have very similar ‘hardware’ to other higher mammals (including, and this is important in regard to regulating the strength and nature of mammalian emotional states, an endocrine system), and this is key, at least in regard to correctly identifying equivalent mental states. Reading body language seems to me just as valid an informational exchange as a verbal Turing Test carried out over a terminal, and our shared genetic heritage does allow a certain amount of anthropomorphic comparison that is not woo, if done with objectivity, IMO.
Equivalence of mental/emotional states with ours doesn’t necessarily lead to a strong inference that higher mammals are sentient, though it is probably good supporting evidence.
I would choose dogs rather than cats as, unlike, apparently, Vanessa Kosoy (see elsewhere in these threads), I’m a dog person. Domestic dogs are a bit of a special case because they have co-evolved with humans for 30,000-40,000 years. Dogs that were most able to make their needs plain to humans likely prospered. This would, I think, naturally lead to an even greater convergence in the way the same human and dog mental state is displayed, at least for those important states that need to be communicated to humans for the dog’s benefit, because that would naturally give rise to the most error-free cross-species communication.
The mental states I would have no hesitancy in saying are experienced by myself and a domestic dog in a recognisably similar way (to >90% certainty) are fear, joy, pain, fight or flight response, jealousy/insecurity, impatience and contentment.
I’d be less certain, but certainly not dismissive, of anger, love, companionship (at least as we understand it), and empathy. I also don’t have very strong confidence that they have a sense of self, though that is not necessary for my preferred model of sentience.
I have never seen my dog display anything I interpret as disgust, superiority, amusement or guilt.
But similarity of emotions and interpretation of body language are not the only signs I interpret as possibly indicating sentience. I also observe that a dog (mostly n=1) is capable of, e.g.:
- Self-initiated behaviour to improve its own state
- Clear and quite nuanced communication of needs (despite limited ‘speech’)
- Attention engagement to request a need be met (a paw on the ankle, a bark of a particular tone and duration)
- Deduction, at a distance, of the likely behaviour of other individuals (mostly other dogs) and choosing a corresponding response
- Avoidance of aggressive dogs (via cues not always obvious to me)
- Meeting and smelling with dogs of similar status
- Recognition and high tolerance of puppies (less so with adolescents)
- Domineering behaviour towards socially weak dogs
On the basis of an accumulation of such observations (the significance of each of which may be well short of 90%), the model I have of a typical dog is that it has (to >99% likelihood) some level of sentience, at least according to my model of sentience.
I have actually had a close encounter with a giant cuttlefish “where I looked into its eyes and thought I detected sentience”, but here I’m more aligned with Rob (to 90% confidence) that this was a case of over-anthropomorphism; the genetic gap is probably too large (and it was a single short observation).
I would, incidentally, put a much lower probability than 10% on any statement by LaMDA that claims ownership of a human emotion, and claims it manifests just like that human emotion, meaning anything significant at all.
I think this hypothesis for some kinds of chronic pain makes sense and is helpful to me. Thanks for posting. The only thing I would comment on is the physiological mechanism at work. For me, the vicious-cycle enabler of my own chronic pain (neck; ascribed to incipient arthritis whenever I ask a professional) is, I’m pretty sure, not blood-flow restriction but muscle spasming. I wonder if others might say the same? I do find it is frequently self-fulfilling: if I think I’m going to get a seriously stiff neck in the night, then I will get a seriously stiff neck by morning, plus an accompanying serious headache. I too have no medical training, so disclaimers as to what is really going on.
We may not run out of ideas but we may run out of exploitable physics. For instance, what is most needed at the moment is a clean, cheap and large-scale energy source that can replace gas, oil, and coal, without which much of the technological and economic development of the last hundred and fifty years or so would have been impossible. Perhaps fusion can be that thing. Perhaps we can paper over the Sahara with photovoltaics. Perhaps we can design fail-safe fission reactors more acceptable to the general population. Let’s assume we will solve the various technical and geopolitical problems necessary to get at least one of these technologies to where we need it to be. My point is this: what if physics either didn’t allow, or made it technologically too difficult for, any of these three possibilities to come to fruition? We’d be at roughly the same place in terms of development, but without the potential safety net these technologies could give us. What likelihood of continued progress then? And so on. A greater population (of humans at least) in the future will certainly provide a greater fund of technological ideas, but keeping the world in a healthy enough state to support that population may require physics that is either unavailable to us, or just too difficult to exploit.
In regard to the amazing possibilities available to us by manipulating macromolecules, I am completely with you. We have only just scratched the surface of what is achievable using the physics we have readily to hand, IMO.
I agree with you on both counts. So, I concede, saving millions in research costs may be small beer. But I don’t see that this invalidates the argument in my previous comment, which is about getting good drugs discovered as fast as is feasible. Achieving this will still have significant economic and humanitarian benefit even if they are no cheaper to develop. There are worthwhile drugs we have today that we wouldn’t have without Structure-Based Design.
The solving of the protein folding problem will also help us to design artificial enzymes and molecular machines. That won’t be small potatoes either IMO.
Not a bottleneck so much as a numbers game. Difficult diseases require many shots on goal to maximise the chance of a successful program. That means going after as many biological targets as there are rationales for, and a variety of different approaches (or chemical series) for each target. Success may even require knocking out two targets in a drug-combination approach. You don’t absolutely need protein structures of a target to have a successful drug-design program, but using them as a template for molecular design (Structure-Based Drug Design) is a successful and well-established approach and can give rise to alternative chemical series to non-structure-based methods. X-ray crystal-derived protein structures are the usual way in, but if you are unable to generate X-ray structures, which is still true for many targets, AlphaFold structures can in principle provide the starting point for a program. They can also help generate experimental structures in cases where the X-ray data is difficult to interpret.
OK, the question asked for demonstration of economic value now, and I grant you AlphaFold, which is solely a research enabler, has not demonstrated that to date. Whether AlphaFold will have a significant role in breaking Eroom’s law is a good question but cannot be answered for at least 10 years. I would still argue that the future economic benefits of what has already been done with AlphaFold and made open access are likely to be substantial. Consider Alzheimer’s. The current global economic burden is reckoned to be $300B p.a., rising in future to $1T. If, say, an Alzheimer’s drug that halved the economic cost were discovered 5 years earlier on account of AlphaFold, the benefit would run to at least $0.75T in total (halving a $300B annual burden saves ~$150B a year, and five years of that is ~$0.75T). This kind of possibility is not unreasonable (for Alzheimer’s, substitute your favourite druggable, high-economic-value medical condition).
I agree with your assessment of the BridgeBase ’bots. They can appear to play very well a lot of the time but often make plays that look foolish, or sometimes bizarre. Nook played against the best (by competition results) bridge ’bots available. However, against Nook, as I understand it, even these ’bots sometimes made poor plays that quite average human players would know not to make.
DeepMind have delivered AlphaFold, thereby solving a really important outstanding scientific problem. They have used it to generate 3D models of almost every human protein (and then some), which have been released to the community. This is, actually, a huge deal. It will save many, many millions in research costs and speed up the generation of new therapeutics.
I think this idea absolutely hits the spot. A well-worn saying is that good workers are generally promoted beyond their own capability. This is true, but often because they get bogged down with meetings, fire-fighting, admin, pleasing the (Wo)Man etc.: lots of reactive stuff. My experience exactly. I changed jobs, gave up management, went down to three days a week (gaining two days of slack), declined as many meetings as I could get away with, and became more productive than at any time previously, largely because of having time to let ideas slowly gestate, play with stuff, notice the subtle things, and thereby make progress in useful directions. I swear my employer effectively gets 5 days out of me. I should be paid more!
I think there is a danger that the current abilities of ML models in drug design are being overstated. The authors appear to have taken a known toxicity mode (probably acetylcholinesterase inhibition, the biological target of VX, Novichok and many old pesticides) and trained their model to produce other structures with activity against this enzyme. Their model claims to have produced significantly more active structures, but none were synthesised. Current ML models in drug design are good at finding similar examples of known drugs, but are much less good (to my own disappointment; this is what I spend many a working day on) at finding better examples, at least in a single-property optimisation. This is largely because, in predicting stronger compounds, the models are generally moving beyond their zone of applicability. In point of fact, the field of acetylcholinesterase inhibition has been so well studied (much of it in secret) that it is quite likely IMO that the list of predicted highly toxic designs is, at best, only very sparsely populated with significantly stronger nerve agents than the best available. Identifying which structures those are, out of potentially thousands of good designs, still remains a very difficult task.
This is not to take away from the authors’ main point, that ML models could be helpful in designing better chemical weapons. A practical application might be to attempt to introduce a new property (e.g. brain penetration) into a known neurotoxin class that lacked that property. An ML model optimised on both brain penetration and neurotoxicity would certainly be helpful in the search for such agents.
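To illustrate what I mean above by “zone of applicability”: a common sanity check is to ask how similar a proposed design is to anything the model was trained on, e.g. via nearest-neighbour Tanimoto similarity on Morgan fingerprints. A minimal RDKit sketch (my own illustration, with made-up training SMILES and a rule-of-thumb 0.4 cut-off, nothing taken from the paper):

```python
# Applicability-domain sanity check: nearest-neighbour Tanimoto similarity of a
# candidate design to the (here hypothetical) training set.
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem

def morgan_fp(smiles, radius=2, n_bits=2048):
    mol = Chem.MolFromSmiles(smiles)
    return AllChem.GetMorganFingerprintAsBitVect(mol, radius, nBits=n_bits)

training_smiles = ["CCOC(=O)c1ccccc1", "CC(=O)Nc1ccc(O)cc1", "c1ccncc1"]   # placeholders
candidate = "CC(C)Cc1ccc(cc1)C(C)C(=O)O"   # ibuprofen, standing in for a new design

train_fps = [morgan_fp(s) for s in training_smiles]
nearest = max(DataStructs.BulkTanimotoSimilarity(morgan_fp(candidate), train_fps))

print(f"nearest-neighbour Tanimoto similarity: {nearest:.2f}")
if nearest < 0.4:   # rough rule-of-thumb threshold, not a hard cut-off
    print("candidate is probably outside the model's applicability domain")
```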