[Book Review]: The Bonobo and the Atheist by Frans De Waal 2022-01-05T22:29:32.699Z
DnD.Sci GURPS Evaluation and Ruleset 2021-12-22T19:05:46.205Z
SGD Understood through Probability Current 2021-12-19T23:26:23.455Z
Housing Markets, Satisficers, and One-Track Goodhart 2021-12-16T21:38:46.368Z
D&D.Sci GURPS Dec 2021: Hunters of Monsters 2021-12-11T12:13:02.574Z
Hypotheses about Finding Knowledge and One-Shot Causal Entanglements 2021-12-01T17:01:44.273Z
Relying on Future Creativity 2021-11-30T20:12:43.468Z
Nightclubs in Heaven? 2021-11-05T23:28:19.461Z
I Really Don't Understand Eliezer Yudkowsky's Position on Consciousness 2021-10-29T11:09:20.559Z
Nanosystems are Poorly Abstracted 2021-10-24T10:44:27.934Z
No Really, There Are No Rules! 2021-10-07T22:08:13.834Z
Modelling and Understanding SGD 2021-10-05T13:41:22.562Z
[Book Review] "I Contain Multitudes" by Ed Yong 2021-10-04T19:29:55.205Z
Reachability Debates (Are Often Invisible) 2021-09-27T22:05:06.277Z
A Confused Chemist's Review of AlphaFold 2 2021-09-27T11:10:16.656Z
How to Find a Problem 2021-09-08T20:05:45.835Z
A Taxonomy of Research 2021-09-08T19:30:52.194Z
Addendum to "Amyloid Plaques: Medical Goodhart, Chemical Streetlight" 2021-09-02T17:42:02.910Z
Good software to draw and manipulate causal networks? 2021-09-02T14:05:18.389Z
Amyloid Plaques: Chemical Streetlight, Medical Goodhart 2021-08-26T21:25:04.804Z
Generator Systems: Coincident Constraints 2021-08-23T20:37:38.235Z
Fudging Work and Rationalization 2021-08-13T19:51:44.531Z
The Reductionist Trap 2021-08-09T17:00:56.699Z
Uncertainty can Defuse Logical Explosions 2021-07-30T12:36:29.875Z
Hobbies and the curse of Spontaneity 2021-07-22T13:25:43.973Z
A Models-centric Approach to Corrigible Alignment 2021-07-17T17:27:32.536Z
Generalising Logic Gates 2021-07-17T17:25:08.428Z
Equivalent of Information Theory but for Computation? 2021-07-17T09:38:48.227Z
Positive Expectations; how to build Hopefulness 2021-07-03T13:41:16.188Z
Jemist's Shortform 2021-05-31T22:39:28.638Z
Are there any methods for NNs or other ML systems to get information from knockout-like or assay-like experiments? 2021-05-18T21:33:38.474Z
Optimizers: To Define or not to Define 2021-05-16T19:55:35.735Z
Alzheimer's, Huntington's and Mitochondria Part 3: Predictions and Retrospective 2021-05-03T14:47:23.365Z
Alzheimer's, Huntington's and Mitochondria Part 2: Glucose Metabolism 2021-05-03T14:47:10.125Z
Alzheimer's, Huntington's and Mitochondria Part 1: Turnover Rates 2021-05-03T14:46:41.591Z
Hard vs Soft in fields as attitudes towards model collision 2021-04-20T18:57:51.401Z
Most Analogies Are Wrong 2021-04-16T19:53:41.940Z
My Thoughts on the Apperception Engine 2021-02-25T19:43:55.929Z


Comment by Jemist on How an alien theory of mind might be unlearnable · 2022-01-06T17:08:28.993Z · LW · GW

I think I understand now. My best guess is that if your proof was applied to my example the conclusion would be that my example only pushes the problem back. To specify human values via a method like I was suggesting, you would still need to specify the part of the algorithm that "feels like" it has values, which is a similar type of problem.

I think I hadn't grokked that your proof says something about the space of all abstract value/knowledge systems whereas my thinking was solely about humans. As I understand it, an algorithm that picks out human values from a simulation of the human brain will correspondingly do worse on other types of mind.

Comment by Jemist on How an alien theory of mind might be unlearnable · 2022-01-05T22:39:11.622Z · LW · GW

I don't understand this. As far as I can tell, I know what my preferences are, and so that information should in some way be encoded in a perfect simulation of my brain. Saying there is no way at all to infer my preferences from all the information in my brain seems to contradict the fact that I can do it right now, even if me telling them to you isn't sufficient for you to infer them.

Once an algorithm is specified, there is no more extra information to specify how it feels from the inside. I don't see how there can be any more information necessary on top of a perfect model of me to specify my feeling of having certain preferences.

Comment by Jemist on Regularization Causes Modularity Causes Generalization · 2022-01-03T18:19:18.753Z · LW · GW

This is a great analysis of different causes of modularity. One thought I have is that L1/L2 and pruning seem similar to one another on the surface, but very different to dropout, and all of those seem very different to goal-varying.

If penalizing the total strength of connections during training is sufficient to enforce modularity, could it be the case that dropout is actually just penalizing connections? (e.g. as the effect of a non-firing neuron is propagated to fewer downstream neurons)

I can't immediately see a reason why a goal-varying scheme could penalize connections but I wonder if this is in fact just another way of enforcing the same process.
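The "penalizing the total strength of connections" mechanism can be sketched in a few lines. Everything here (the weights, the penalty strength `lam`) is a made-up toy, not the setup from the post under discussion:

```python
# Toy illustration of an L1 "connection strength" penalty of the kind
# discussed above, applied to a flat list of weights. Pruning-like
# behaviour emerges because the penalty is minimised by driving small
# weights to exactly zero.

def l1_penalty(weights, lam=0.01):
    """Total penalty: lam * sum of absolute connection strengths."""
    return lam * sum(abs(w) for w in weights)

def soft_threshold(weights, lam=0.01, lr=1.0):
    """One proximal-gradient step on the L1 term alone: every weight
    shrinks toward zero, and is cut to exactly zero once small enough."""
    out = []
    for w in weights:
        shrunk = abs(w) - lr * lam
        out.append(0.0 if shrunk <= 0 else shrunk * (1 if w > 0 else -1))
    return out

weights = [0.5, -0.3, 0.005, -0.008]
print(l1_penalty(weights))      # ~0.00813
print(soft_threshold(weights))  # small weights zeroed, large ones shrunk
```

Under this framing, the open question in the comment is whether dropout's averaging over thinned networks ends up having an effect on the weights comparable to this explicit shrinkage.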

Comment by Jemist on Covid 12/30: Infinity War · 2021-12-30T18:38:48.736Z · LW · GW

I think the tweet about the NHS app(s) is slightly misleading. I'm pretty confident those screenshots relate to two separate apps: one is a general health services app which can also be used to generate a certificate of vaccination (as the app has access to health records). The second screenshot relates to a covid-specific app which enables "check-ins" at venues for contact-tracing purposes, and the statement there seems to be declaring that the local information listing venues visited could - in theory - be used to get demographic information. One is called the "NHS App" and the other is called the "NHS Covid 19 App" so it's an understandable confusion.

Comment by Jemist on D&D.Sci GURPS Dec 2021: Hunters of Monsters · 2021-12-21T13:39:04.475Z · LW · GW

I'm afraid I didn't intend for people to be able to add conditions to their plans. While something like that is completely reasonable I can't find a place to draw the line between that and what would be too complex. The only system that might work is having everyone send me their own python code but that's not fair on people who can't code, and more work than I'm willing to do. Other answers haven't included conditions and I think it wouldn't be fair on them. I think my decision is that:

If you don't get the time to respond with a time to move on from the Thunderwood Peaks, then I'll put it at a week somewhere between 0 and 10 (which I have chosen but won't say here for obvious reasons) which I would guess best represents your intentions.

I'm really sorry about the confusion, I should've made that all clearer from the start!

Comment by Jemist on Open & Welcome Thread December 2021 · 2021-12-20T16:29:38.302Z · LW · GW

I think your comment excellently illustrates the problems with the experiment!

Next to the upvote/downvote buttons there's a separate box for agreement/disagreement. I think the aim is to separate "this post contributes to the discussion in a positive/negative way" from "I think the claims expressed here are accurate". It's active in the comments of the post I linked in my comment and there's a pinned comment from Ruby explaining it.

Comment by Jemist on Open & Welcome Thread December 2021 · 2021-12-20T13:10:23.865Z · LW · GW

I'm very interested to try the new two-axis voting system, but it seems to only be active on one post, which also happens to be very tied up with some current Bay Area-specific issues, limiting who can actually engage with it. I also think it would be good for the community to get to "practice" with such voting on some topics which are easier to discuss, so norms can be established before moving on to the more explosive ones. I'd like to see more posts with this enabled; perhaps a few more people with posts currently on the frontpage with >20 comments could be asked about it, or the mods could make a pinned post explaining it and letting people ask for it.

I do think that group-politics-related posts might have the greatest potential to benefit from this type of voting (especially relative to the current system).

Comment by Jemist on D&D.Sci GURPS Dec 2021: Hunters of Monsters · 2021-12-17T23:35:58.022Z · LW · GW

Sure! I was planning to anyway, but that plus my own busyness means it will more likely be early next week, or even later if people would prefer.

Comment by Jemist on Housing Markets, Satisficers, and One-Track Goodhart · 2021-12-16T22:50:47.616Z · LW · GW

As the unmet demand for housing at all levels currently outstrips supply, the optimal local move is to replace cheaper-per-space housing with expensive-per-space housing targeted towards rich people, whenever permission from local government can be obtained. If the unmet demand for housing at all levels were much smaller, this move wouldn't be profitable by default, and developers would have to choose more carefully where to build new marginal rich-people-targeted houses. For some human-desirable variable like "strength of community", rents and sale prices will be higher the more of it is present. The obvious choice is then to build your new development where the "strength of community" of the removed building is lowest, relative to the "strength of community" of the new building. The existence of this sort of choice would mean that existing communities that people like would be less likely to be removed.
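The site-selection rule being described can be made concrete with a toy example. All the site names and strength numbers below are invented for illustration:

```python
# Toy illustration of the site-selection rule above: when the developer
# has slack to choose, they replace the building where the community
# strength lost, relative to the strength of the new development, is
# smallest (here the new development's strength is held fixed).
sites = {
    "old mill":       {"current_strength": 0.2, "new_strength": 0.6},
    "beloved market": {"current_strength": 0.9, "new_strength": 0.6},
    "empty lot":      {"current_strength": 0.0, "new_strength": 0.6},
}

def strength_lost(site):
    s = sites[site]
    return s["current_strength"] - s["new_strength"]

# With slack in the market, the least-loved site gets redeveloped:
best_site = min(sites, key=strength_lost)
print(best_site)  # → empty lot
```

When demand vastly outstrips supply, this comparison never happens, because replacing *any* of the three sites is profitable.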

Comment by Jemist on A fate worse than death? · 2021-12-14T13:22:34.822Z · LW · GW

 But I don't know why it's downvoted so far - it's an important topic, and I'm glad to have some more discussion of it here (even if I disagree with the conclusions and worry about the unstated assumptions).

I agree with this. The author has made a number of points I disagree with but hasn't done anything worthy of heavy downvotes (like having particularly bad epistemics, being very factually wrong, personally attacking people, or making a generally low-effort or low-quality post). This post alone has changed my views towards favouring a modification of the upvote/downvote system.

Comment by Jemist on D&D.Sci GURPS Dec 2021: Hunters of Monsters · 2021-12-13T20:19:23.642Z · LW · GW

Option 2

Comment by Jemist on A fate worse than death? · 2021-12-13T14:30:43.799Z · LW · GW

In the described scenario, the end result is omnicide. Thus, it is not much different from the AI immediately killing all humans. 

I strongly disagree with this. I would much, much rather be killed immediately than suffer for a trillion years and then die. This is for the same reason that I would rather enjoy a trillion years of life and then die, than die immediately.

In this case, the philosophy's adherents have no preference between dying and doing something else with zero utility (e.g. touching their nose). As humans encounter countless actions of a zero utility, the adherents are either all dead or being inconsistent. 

I think you're confusing the utility of a scenario with the expected utility of an action. Assigning zero utility to being dead is not the same as assigning zero expected utility to dying over not dying. If we let the expected utility of an action be defined relative to the expected utility of not doing that action, then "touching my nose", which doesn't affect my future utility, does have an expected utility of zero. But if I assign positive utility to my future existence, then killing myself has negative expected utility relative to not doing so.
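The distinction can be made concrete with toy numbers (the utilities here are entirely hypothetical placeholders):

```python
# Utility 0 for the state of being dead; some positive utility for one's
# future existence. The expected utility of an action is defined
# *relative to* the counterfactual of not taking it.

U_DEAD = 0.0
U_FUTURE_EXISTENCE = 100.0  # hypothetical value placed on one's future

def relative_eu(action_outcome, baseline_outcome):
    """EU of an action relative to not acting."""
    return action_outcome - baseline_outcome

# Touching your nose leaves your future utility unchanged:
print(relative_eu(U_FUTURE_EXISTENCE, U_FUTURE_EXISTENCE))  # 0.0

# Dying forfeits the future utility, so its relative EU is negative even
# though the utility assigned to the dead state itself is zero:
print(relative_eu(U_DEAD, U_FUTURE_EXISTENCE))  # -100.0
```

So an agent with zero utility on death is indifferent between nose-touching and not, but not between dying and not, with no inconsistency.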

Comment by Jemist on A fate worse than death? · 2021-12-13T12:17:47.624Z · LW · GW

Your argument rests on the fact that people who have suffered a million years of suffering could - in theory - be rescued and made happy, with it only requiring "tech and time". In an S-risk scenario, that doesn't happen.

In what I'd consider the archetypical S-risk scenario, an AI takes over, starts simulating humans who suffer greatly, and there is no more human agency ever again. The (simulated) humans experience great suffering until the AI runs out of power (some time trillions of years in the future when the universe can no longer power any more computation) at which point they die anyway.

As for your points on consistency, I'm pretty sure a utilitarian philosophy that simply assigns utility zero to the brain state of being dead is consistent. Whether this is actually consistent with people's revealed preferences and moral intuitions I'm not sure.

Comment by Jemist on D&D.Sci GURPS Dec 2021: Hunters of Monsters · 2021-12-11T20:38:31.390Z · LW · GW

I imagine their response would be along the lines of: "Why the hell should I let someone who doesn't even know how big a Dull Viper is tell me how to hunt it!?"

Comment by Jemist on Second-order selection against the immortal · 2021-12-06T17:35:46.630Z · LW · GW

I think it won't be easy to modify the genome of individuals to achieve predictable outcomes even if you get the machinery you describe to work. 

Is this because of factors like the almost-infinite number of interactions between different genes, such that even with a hypothetical magic technology to arbitrarily and perfectly change the DNA in every cell in the body, it wouldn't be possible to predict the outcome of such a change? Or is it because you don't think that any machinery will ever be precise enough to make this work well enough? Or some other issue entirely?

Comment by Jemist on Second-order selection against the immortal · 2021-12-05T20:15:09.042Z · LW · GW

What I meant is changing the genetic code in ~all of the cells in a human body. Or some sort of genetic engineering which has the same effect as that.

Here's one model I have as to how you could genetically engineer a living human:

Many viruses are able to reverse-transcribe RNA to DNA and insert that DNA into cells. This causes a lot of problems for cells, but there are (probably) large regions of the genome where insertions of new DNA wouldn't cause problems. I don't think it would be difficult to target insertion of DNA to those regions, as DNA binding proteins could be attached to DNA insertion proteins.

This sort of technology requires only the insertion of RNA into a cell. There are a number of ways to put RNA into cells at the moment, such as "edited" viruses, lipid droplets, and more might be developed.

I also believe targeting somatic stem cells for modification via cell-specific surface proteins is possible. If not we could also cause the modified cells to revert to stem cells (by causing them to express Yamanaka Factors etc.).

The stem cells will differentiate and eventually replace (almost all) unmodified cells.

The resulting technology would allow arbitrary insertion of genetic code into most somatic cells (neurons might not be direct targets but perhaps engineering of glia or whatever could do them). Using CRISPR-like technologies rather than reverse transcription we could also do arbitrary mutation, gene knockout, etc.

I guess this is still somewhat handwavey. Speculating on future technology is always handwavey. 

Comment by Jemist on Second-order selection against the immortal · 2021-12-04T12:06:54.871Z · LW · GW

I think cultural evolution will be the greater factor by a large margin. I think the technology for immortality is possible but that it will either directly involve genetic engineering of living humans, or be one or two steps away from it. People who are willing to take an immortality drug are very likely to also be willing to improve themselves in other ways. If the Horde is somehow going to outcompete them due entirely to beneficial mutations, the Imperium could simply steal them.

Comment by Jemist on Hypotheses about Finding Knowledge and One-Shot Causal Entanglements · 2021-12-02T11:59:14.850Z · LW · GW

Thanks! I get your arguments about "knowledge" being restricted to predictive domains, but I think it's (mostly) just a semantic issue. I also don't think the specifics of the word "knowledge" are particularly important to my points which is what I attempted to clarify at the start, but I've clearly typical-minded and assumed that of course everyone would agree with me about a dog/fish classifier having "knowledge", when it's more of an edge-case than I thought! Perhaps a better version of this post would have either tabooed "knowledge" altogether or picked a more obviously-knowledge-having model.

Comment by Jemist on Rapid Increase of Highly Mutated B.1.1.529 Strain in South Africa · 2021-11-26T14:45:42.364Z · LW · GW

This is a pretty strong indication of immune escape to me, if it persists in other outbreaks. If this was purely from increased infectiousness in naive individuals it would imply an R-value (in non-immune populations) of like 40 or something, which seems much less plausible than immune escape. I don't know what the vaccination/infection rates are in these communities though.
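The back-of-the-envelope reasoning here can be sketched with placeholder numbers (none of these figures are the actual outbreak data):

```python
import math

# Two readings of fast variant growth, with hypothetical numbers.

def implied_r(doubling_time_days, serial_interval_days):
    # Reproduction number implied in a fully susceptible population:
    # exp(growth rate * serial interval).
    growth_rate = math.log(2) / doubling_time_days
    return math.exp(growth_rate * serial_interval_days)

def naive_equivalent_r(effective_r, susceptible_fraction):
    # If there is NO immune escape, growth observed in a mostly-immune
    # population must come from raw transmissibility, so the R the
    # variant would need in a naive population scales as 1/s.
    return effective_r / susceptible_fraction

print(round(implied_r(3, 5), 2))                 # modest R for a naive population
print(round(naive_equivalent_r(3.2, 0.08), 1))   # ~40: the implausible reading
```

The point of the comment is that the second number is so large that immune escape is the more plausible explanation for the observed growth.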

Comment by Jemist on Jemist's Shortform · 2021-11-25T17:34:26.275Z · LW · GW

The UK has just switched their available rapid Covid tests from a moderately unpleasant one to an almost unbearable one. Lots of places require them for entry. I think the cost/benefit makes sense even with the new kind, but I'm becoming concerned we'll eventually reach the "imagine a society where everyone hits themselves on the head every day with a baseball bat" situation if cases approach zero.

Comment by Jemist on Potential Alignment mental tool: Keeping track of the types · 2021-11-23T13:21:16.003Z · LW · GW

My current belief on this is that the greatest difficulty is going to be finding the "human values" in the AI's model of the world. Any AI smart enough to deceive humans will have a predictive model of humans which almost trivially must contain something that looks like "human values". The biggest problems I see are:

1: "Human values" may not form a tight abstracted cluster in a model of the world at all. This isn't so much a conceptual issue, as in theory we could just draw a more complex boundary around them, but it makes things practically more difficult.

2: It's currently impossible to see what the hell is going on inside most large ML systems. Interpretability work might be able to allow us to find the right subsection of a model.

3: Any pointer we build to the human values in a model also needs to be stable to the model updating. If that knowledge gets moved around as parameters change, the computational tool/mathematical object which points to them needs to be able to keep track of that. This could include sudden shifts, slow movement, breaking up of models into smaller separate models.

(I haven't defined knowledge, I'm not very confused about what it means to say "knowledge of X is in a particular location in the model" but I don't have space here to write it all up)

Comment by Jemist on Petrov Day Retrospective: 2021 · 2021-11-05T10:35:05.314Z · LW · GW

Very good point. Perhaps there just intrinsically is no way of doing something that this community perceives as "burning" money, without upsetting people.

Comment by Jemist on I Really Don't Understand Eliezer Yudkowsky's Position on Consciousness · 2021-10-30T21:20:05.454Z · LW · GW

Having now had a lot of different conversations on consciousness I'm coming to a slightly disturbing belief that this might be the case. I have no idea what this implies for any of my downstream-of-consciousness views.

Comment by Jemist on I Really Don't Understand Eliezer Yudkowsky's Position on Consciousness · 2021-10-30T18:40:20.393Z · LW · GW

I'm confident your model of Eliezer is more accurate than mine.

Neither the twitter thread nor his other writings originally gave me the impression that he had a model in that fine-grained detail. I was mentally comparing his writings on consciousness to his writings on free will. Reading the latter made me feel like I strongly understood free will as a concept, and since then I have never been confused; it genuinely reduced free will as a concept in my mind.

His writings on consciousness have not done anything more than raise that model to the same level of possibility as a bunch of other models I'm confused about. That was the primary motivation for this post. But now that you mention it, if he genuinely believes that he has knowledge which might bring him closer to (or might bring others closer to) programming a conscious being, I can see why he wouldn't share it in high detail.

Comment by Jemist on I Really Don't Understand Eliezer Yudkowsky's Position on Consciousness · 2021-10-29T16:41:17.472Z · LW · GW

Basically yes I care about the subjective experiences of entities. I'm curious about the use of the word "still" here. This implies you used to have a similar view to mine but changed it, if so what made you change your mind? Or have I just missed out on some massive shift in the discourse surrounding consciousness and moral weight? If the latter is the case (which it might be, I'm not plugged into a huge number of moral philosophy sources) that might explain some of my confusion.

Comment by Jemist on I Really Don't Understand Eliezer Yudkowsky's Position on Consciousness · 2021-10-29T16:25:36.388Z · LW · GW

he defines consciousness as "what an algorithm implementing complex social games feels like when reflecting on itself".

In that case I'll not use the word consciousness and abstract away to "things which I ascribe moral weight to", (which I think is a fair assumption given the later discussion of eating "BBQ GPT-3 wings" etc.)

Eliezer's claim is therefore something along the lines of: "I only care about the suffering of algorithms which implement complex social games and reflect on themselves" or  possibly "I only care about the suffering of algorithms which are capable of (and currently doing a form of) self-modelling".

I've not seen nearly enough evidence to convince me of this.

I don't expect to see a consciousness particle called a qualon. I more expect to see something like: "These particular brain activity patterns which are robustly detectable in an fMRI are extremely low in sleeping people, higher in dreaming people, higher still in awake people and really high in people on LSD and types of zen meditation."

Comment by Jemist on I Really Don't Understand Eliezer Yudkowsky's Position on Consciousness · 2021-10-29T16:15:35.892Z · LW · GW

You present an excellently-written and interesting case here. I agree with the point that self-modelling systems can think in certain ways which are unique and special and chickens can't do that.

One reason I identify consciousness with having qualia is that Eliezer specifically does that in the twitter thread. The other is that qualia is generally less ambiguous than terms like consciousness and self-awareness and sentience. The disadvantage is that the concept of qualia is something which is very difficult (and beyond my explaining capabilities) to explain to people who don't know what it means. I choose to take this tradeoff because I find that I, personally, get much more out of discussions about specifically qualia, than any of the related words. Perhaps I'm not taking seriously enough the idea that illusionism will explain why I feel like I'm conscious and not explain why I am conscious.

I also agree that most other existing mainstream views are somewhat poor, but to me this isn't particularly strong positive evidence for Eliezer's views. This is because models of consciousness on the level of detail of Eliezer's are hard to come up with, so there might be many other excellent ones that haven't been found yet. And Eliezer hasn't done (to my knowledge) anything which rules out other arguments on the level of detail of his own.

Basically I think that the reason the best argument we see is Eliezer's is less along the lines of "this is the only computational argument that could be made for consciousness" and more along the lines of "computational arguments for consciousness are really difficult and this is the first one anyone has found".

Comment by Jemist on I Really Don't Understand Eliezer Yudkowsky's Position on Consciousness · 2021-10-29T15:57:41.361Z · LW · GW

Eliezer later states that he is referring to qualia specifically, which for me are (within a rounding error) totally equivalent to moral relevance.

Comment by Jemist on Petrov Day Retrospective: 2021 · 2021-10-23T11:31:15.768Z · LW · GW

My first thought was that this could be avoided by - if the button was pressed - giving it to a "rare diseases in cute puppies" type charity, rather than destroying it. I'd suspect the intersection of "people who care strongly enough about effective altruism to be angry", "people who don't understand the point of Petrov Day", and "people who have the power to generate large amounts of negative publicity" is very small.

But I think a lot of LWers who are less onboard with Petrov Day in general would be just as (or almost as) turned off by this concept as the idea of burning the money. Perhaps something akin to the one landfish did would be better? At least in that case I would guess most LWers are OK enough with either MIRI or AMF (or maybe substitute other charities?) receiving money at the expense of one another for it to work OK.

Comment by Jemist on Jemist's Shortform · 2021-10-15T20:53:30.108Z · LW · GW

Just realized I'm probably feeling much worse than I ought to on days when I fast because I've not been taking sodium. I really should have checked this sooner. If you're planning to do long (I do a day, which definitely feels long) fasts, take sodium! 

Comment by Jemist on Questions about YIMBY · 2021-10-08T22:35:20.606Z · LW · GW

The green belt problem is not one I'd considered before. I've always assumed the biggest problems for places like London were the endless low-density suburbs rather than the limit on building houses outside of a certain radius. If you work in the centre of London and live in some new development just outside the green belt, that already seems like something of a failure.

I don't want to doubt the expert economic analysis though; perhaps removing it would allow people to move from the suburbs to new developments, freeing up suburb space. This also seems wrong, as higher population density is the goal, but perhaps the people who are better off living outside the city are retirees or similar, whose leaving would reduce the demand for low-density housing in the city and therefore allow higher-density housing to be built.

Comment by Jemist on No Really, There Are No Rules! · 2021-10-08T17:12:43.789Z · LW · GW

Actually that's a good point. I think that's the only rule which doesn't need to be written (which I completely forgot to mention). Other rules regarding text can be manipulated the same way the other rules can.

Comment by Jemist on D&D.Sci 4th Edition: League of Defenders of the Storm · 2021-10-04T11:14:45.755Z · LW · GW

Using python I conducted a few different analyses:

Proportion of character wins vs other characters:
Proportion of character wins when paired with other characters:

With these I gave each possible team a score, equal to the sum over its characters of (the sum over enemy characters of that character's proportion of wins, plus the sum over teammates of the proportion of wins when paired with that teammate). The highest scoring team was:

Rock-n-Roll Ranger, Blaze Boy, Nullifying Nightmare, Greenery Giant, Tidehollow Tyrant

This was much more pleasant than using Excel! I think I might try and learn R or some other dedicated statistical language for the next one.

PvP team (without having the time to estimate anything about my enemies' teams so highly likely to get countered) has actually come out the same. There's a good chance something is up with my analysis or my method is too biased towards synergistic teams.

Tidehollow Tyrant, Rock-n-Roll Ranger, Nullifying Nightmare, Greenery Giant, Blaze Boy
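The scoring rule described above can be sketched on tiny made-up tables (the real win proportions came from the scenario data; for simplicity each character's per-enemy win proportions are collapsed into one average here):

```python
from itertools import combinations

# Toy win-rate tables, invented for illustration.
vs_wins = {"A": 0.6, "B": 0.5, "C": 0.4, "D": 0.55}  # avg win rate vs enemies
pair_wins = {frozenset(p): w for p, w in [
    (("A", "B"), 0.7), (("A", "C"), 0.5), (("A", "D"), 0.55),
    (("B", "C"), 0.45), (("B", "D"), 0.5), (("C", "D"), 0.65),
]}

def team_score(team, enemy_pool_size=4):
    # Sum over characters of: win proportion against the enemy pool plus
    # win proportion when paired with each teammate.
    score = 0.0
    for c in team:
        score += vs_wins[c] * enemy_pool_size
        score += sum(pair_wins[frozenset((c, m))] for m in team if m != c)
    return score

best = max(combinations(vs_wins, 2), key=team_score)
print(best)  # highest-scoring pair under these toy numbers
```

One visible property of this score, as the comment suspects, is that the pair term counts each synergy from both sides, which biases the search towards strongly synergistic teams.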

Comment by Jemist on Reachability Debates (Are Often Invisible) · 2021-10-02T16:40:20.134Z · LW · GW

The first example doesn't seem like a game of chicken to me, since neither Alexi nor Beth can make a change themselves. It may be that they have "inherited" the debate from their political factions' respective allies, who are actually playing a game of chicken. But Alexi and Beth are doing the classic political topic "talking past one another" and part of this seems to be that they're treating different sets of actions as reachable, and only assigning should-ness to reachable actions.

Comment by Jemist on A Confused Chemist's Review of AlphaFold 2 · 2021-09-28T18:47:33.981Z · LW · GW

This is a "review" in the sense of reviewing the paper. I actually haven't used AlphaFold or crystallographic data as the protein I'm currently studying only takes on a defined structure when bound to certain metals (ruling out AlphaFold) and has yet to be crystallized.

Comment by Jemist on [Book Review] "The Vital Question" by Nick Lane · 2021-09-28T10:31:35.694Z · LW · GW

I was also halfway through a review of this book. Since I've only met one other person who'd read it I thought it was unlikely anyone else would! I guess LWers have more similar interests than I would have predicted.

I suppose I'll review another book instead!

Comment by Jemist on D&D.Sci Pathfinder: Return of the Gray Swan Evaluation & Ruleset · 2021-09-09T17:41:31.504Z · LW · GW

Though the task seemed really interesting, I didn't even enter an answer as I lost interest after some preliminary analysis. Almost all of these applied to me too. The data was presented in an excel-unfriendly way and as I'm currently settling into a new job I didn't have the energy to code a python script to trawl through the data. I suspect the participation was weighted towards those with more experience of statistical languages. A better presentation might have been a log of all squares ships had planned to go through with encounters listed there (with ??? for sunk ships) or something like that. I wish I'd had the time to participate properly as I do love doing D&D.sci when I can. Other than that I agree with most of GuySrinivasan's points.

Also: as GuySrinivasan is also planning to run one of these might I suggest the formation of a community rota for those interested? As my commitments are about to shrink I'd be interested in doing one at some point and it might help to avoid people "scooping" each other.

Comment by Jemist on Amyloid Plaques: Chemical Streetlight, Medical Goodhart · 2021-09-01T21:19:24.714Z · LW · GW

This is an excellent comment, and I'm very glad to see my thinking inspiring others!

My own findings on the issue are as follows:

I am confident that mitochondrial dysfunction is upstream of AD.

This one gene called PGC-1α is probably involved or something.

I do not know what (if anything) is upstream of that. It could be immune system health but the immune system is so complex that my understanding of it is generally poor.

Mitochondria which are defective are replaced in cells through a process called mitophagy. Stimulating the creation of mitochondria (mitogenesis) probably increases mitophagy too as cells can regulate themselves pretty well.

Drugs which stimulate mitophagy and mitogenesis are probably more productive avenues for AD research than lots of things, an example of each is metformin and EET-A respectively. EET-A has an effect on amyloid plaques in mice but that isn't that useful (however I think that information is worth more when a mechanistic story can be told).

I suspect that AD is an "attractive state" that brains just fall into for lots of reasons, this explains the endless feedback loops confusing researchers, and also it being much more common than lots of other brain diseases.

AD is an interesting microcosm of not just brain health, but also of ageing in general. An effective anti-ageing therapy would almost certainly eliminate AD or at least stop progression. An effective AD therapy is worth investigating as a general anti-ageing therapy (although if it's that effective on ageing we'll probably notice).

Comment by Jemist on D&D.Sci Pathfinder: Return of the Gray Swan · 2021-09-01T20:58:12.272Z · LW · GW

Further observations having graphed all encounter damage as a histogram:

Dragon: Sometimes does zero, often does a lot of damage, long tail

Harpies: Usually do zero, occasionally do one of a few values up to about 0.2

Iceberg: One of ten-ish values spaced sporadically between 0 and 0.3

Kraken: Exponential-ish distribution with tail going up to 0.9 ish

Merfolk: Usually do zero, otherwise a flat-ish distribution going up to 0.65

Sharks: Often do zero, otherwise one of a few values up to like 0.15, not a big threat

Storm: Exponential-ish with faster dropoff than kraken

WMF: Might be half a Gaussian? Also randomly hits high

I suspect, given the frequency of zeros, that some captains/ships are immune to certain threats. Will investigate further.
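For anyone reproducing this kind of breakdown, here's a minimal sketch of binning per-encounter damage into histograms. The data layout (a list of `(encounter_type, damage)` pairs, with damage as a fraction of hull) is my own assumption, not the challenge's actual format:

```python
from collections import defaultdict

def damage_histograms(records, bin_width=0.05):
    """Bucket fractional damage values per encounter type.

    `records` is an iterable of (encounter_type, damage) pairs,
    with damage expressed as a fraction of the ship's hull (0 to 1).
    Returns {encounter_type: {bucket_index: count}}.
    """
    hists = defaultdict(lambda: defaultdict(int))
    for encounter, damage in records:
        bucket = int(damage // bin_width)
        hists[encounter][bucket] += 1
    return {enc: dict(buckets) for enc, buckets in hists.items()}

# Toy data: sharks often do zero, otherwise small damage.
records = [("Sharks", 0.0), ("Sharks", 0.0), ("Sharks", 0.12), ("Kraken", 0.4)]
print(damage_histograms(records))
```

Eyeballing the zero-damage bucket's size relative to the rest is then enough to spot the "usually do zero" patterns above.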

Another observation is that lots of the most dangerous squares have no encounters listed. This is spooky and I have a couple of hypotheses:

1: There's an unobserved fatal threat. For example, those squares could be "dragon nests": if you go there, the dragons have some chance to just destroy you. There doesn't seem to be a correlation with other things though, so I'm not confident.

2: Some weird selection effect where the useless ships/captains always go via those squares (always consider selection effects)

Comment by Jemist on D&D.Sci Pathfinder: Return of the Gray Swan · 2021-09-01T20:08:24.588Z · LW · GW

Edited as my maths was wrong and I forgot row zero!

First finding: I should stop using excel for these challenges.

Second finding: cell deadliness, defined as $D_c = \frac{\sum_j p_{jc}\, w_j / l_j}{\sum_j p_{jc}}$, where $c$ indexes cells, $j$ sums over journeys, $p_{jc}$ is whether journey $j$ planned to go through cell $c$, $w_j$ is whether journey $j$ resulted in a wreck, and $l_j$ is the length of journey $j$:

0     1.01%0.71%0.40% 3.46%1.11%1.52%4.04%      
1 0.00%0.61%0.64%0.90%1.25%1.08%1.02%0.94%0.96%0.84%0.86%0.74%0.73%0.78%0.89%1.02%1.42%1.31%
2    0.76%1.12%1.31%1.12%1.19%1.40%0.97%  1.52%0.45%0.28%1.28%1.49%0.69%
3    1.35%1.01%  1.29%1.06%1.23% 0.43%0.39%0.64%0.69%0.91%0.65%0.82%
4    0.00%1.10%0.99%1.32%1.45%1.31%0.94%1.03%1.13%0.98%0.97%0.89%0.65%0.68%0.46%
5    3.46%2.22%1.90%1.59%1.24%1.54%1.16%0.84%1.03%1.27%1.10%1.12%1.05%1.83%4.23%
6     2.55%2.03%1.81%0.72%1.34%1.84%2.12%1.65%1.26%1.35%1.34%1.16% 2.01%
7     3.37%2.05%0.61%0.96%2.14%1.50%1.72%1.53%1.60%1.65%0.97%1.24% 2.88%
8    0.00%0.61%2.19%0.91%0.83% 2.02%1.63%1.32%1.72% 0.56%1.25% 1.62%
9   1.06%1.67%2.14%1.33%0.90%0.98%1.84%1.73%1.75%1.79%1.96% 0.61%1.52%2.25%1.21%
10   1.52%1.10%1.70%1.64%1.24%0.88%1.63%1.42%1.62% 1.62%1.54%0.59%0.89%1.70%1.25%
11  0.00%0.65%1.29%1.55%1.39%1.18%1.92%1.83%1.74%1.66%0.83%1.45%1.03%0.61%0.96%1.17%0.35%
12  0.00%0.00%0.19%0.96%1.47%1.29%1.86%1.79%1.31%0.76%1.56%1.06%1.17%0.96%0.58%1.36%0.46%
13 0.00%0.22%0.15%0.56%0.73%1.08%0.74%0.69%0.79%1.02%1.03%1.07%0.99% 0.84%1.03%0.83%2.02%
14  0.00% 1.48%1.35%1.28% 0.49%0.25%0.12%0.27%0.54%0.67%0.91% 0.96%1.07%1.77%
15  2.42%1.67%1.33%1.27%1.43% 0.56%0.40%0.35%0.33%0.52%0.49%0.68%0.87%1.48%1.76%1.95%
16   2.10%1.63%1.37%1.79%  0.00%0.61% 0.62%0.32%0.36%0.53%4.08%2.38%1.97%

I apologize for not highlighting the cells with some sort of colour but it makes the spoiler tags not work.
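For the record, a sketch of this deadliness statistic in Python, assuming a hypothetical data layout (the actual dataset isn't structured like this). Each wreck is smeared evenly along its planned route, since we don't know where along the route the ship sank:

```python
from collections import defaultdict

def cell_deadliness(journeys):
    """Per-cell wreck-rate estimate.

    `journeys` is a list of dicts with:
      'route'   - list of (row, col) cells the ship planned to pass through
      'wrecked' - True if the journey ended in a wreck
    Each wreck contributes 1/len(route) to every cell on its route,
    and we divide by the number of journeys through the cell.
    """
    wreck_weight = defaultdict(float)
    traffic = defaultdict(int)
    for j in journeys:
        for cell in j["route"]:
            traffic[cell] += 1
            if j["wrecked"]:
                wreck_weight[cell] += 1.0 / len(j["route"])
    return {cell: wreck_weight[cell] / traffic[cell] for cell in traffic}

# Toy example: two journeys, one wreck spread over a two-cell route.
journeys = [
    {"route": [(0, 0), (0, 1)], "wrecked": True},
    {"route": [(0, 0)], "wrecked": False},
]
print(cell_deadliness(journeys))  # (0,0): 0.25, (0,1): 0.5
```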


Wreck chance on any given cell is low enough that my models probably don't need to account for high per-square wreck rates.

Sharp changes between adjacent cells suggest that the smearing effect of attributing a wreck across a whole route isn't too bad.

Next order of business is to look for things which are common in the deadlier squares. Hopefully this will correspond to things with generally deadlier distributions too.

Comment by Jemist on D&D.Sci Pathfinder: Return of the Gray Swan · 2021-09-01T18:23:04.667Z · LW · GW

Excellent to see community D&D.Sci taking place! This looks far more complex than the original series.

I'll be posting my findings in a thread under this comment.

Comment by Jemist on Generator Systems: Coincident Constraints · 2021-08-24T20:56:44.803Z · LW · GW

Ah, I should have clarified what you meant before responding.

modernity has led to improvements to a whole bunch of different things ... It doesn't seem like it would be all that surprising to me that improvements would on average have some sort of directional effect

I agree with this assessment, but modernness has been increasing at a reasonable rate for at least the past six decades. If modernity just caused a bunch of changes with a net effect on crime, we would expect a (relatively) steady trend. The time distribution of changes in crime rates tells us something else is going on.

Unless an argument gives good reasons why, for example, some property of the 90s produced an exceptional number of improvements which reduced crime and very few which increased it (as opposed to other decades, where the improvements both increased and reduced crime and mostly cancelled out), that explanation suffers a big complexity penalty.

Even if all the arguments as to why certain technologies decreased crime rather than increasing it seem solid, we should be very suspicious of the coincidence of them all happening at once. That sort of thinking smacks of post-hoc rationalization and the conjunction fallacy.

Comment by Jemist on Generator Systems: Coincident Constraints · 2021-08-24T16:50:34.969Z · LW · GW

If one technological advance, like mobile phones, causes a multitude of small changes which all push one outcome in the same direction, then that's a single-cause model in disguise. It still pays a complexity penalty as a hypothesis, but a smaller one. On the other hand, it is worth asking why the consequences of this technology all (or almost all) push the lever of crime in one specific direction when this is not true for other technologies.

If you mean modernity in general leading to a lot of technological advances, then we're back to the same problem: the advances that decrease crime should be fairly randomly distributed in time. If we see a big change in crime rate in one period and not in others, then either one factor has a disproportionate impact on crime, or a disproportionate number of crime-decreasing technological changes have occurred at once. The latter pays a complexity penalty.

If you mean a big change over the last ~150 years, then yeah I'd say having lots of causes for certain trends makes sense.

Comment by Jemist on Framing Practicum: Dynamic Equilibrium · 2021-08-16T21:58:51.594Z · LW · GW

I'm assuming the point is that I've not seen the examples used as examples of dynamic equilibrium before, not that I've never seen the equilibria themselves? Given that that's the case:

  1. Total area of districts of each type in a city. Poor areas become gentrified; rich areas go out of fashion, their residents become economically (or biologically) inactive, and the areas become run-down. Overall the distribution changes very slowly, even though the standards of what constitutes a "rich" or "poor" area generally rise over time. This breaks down if the municipal government fails, or similar.
  2. Size of staff in a company. For most well-established companies, people enter and leave at a rate much much faster than the company grows or shrinks.
  3. In terms of dynamic equilibria of outcomes: political parties in certain democracies. Short-term predictions can be based on the current political landscape, but in the long term people get tired of politicians, so each politician's reign is limited. Discontent is always a limiting factor on staying in office.
Comment by Jemist on D&D.Sci August 2021: The Oracle and the Monk · 2021-08-16T20:54:42.305Z · LW · GW

Looks like I was slightly wrong about Solar

Seems like it's on two 28 + 9 day cycles, or similar, but sometimes spikes 10-20 points for no apparent reason. Solar will be 44-ish barring any spikes, which isn't enough for Solar + Doom to do the job reliably.

Given this, Solar + Earth probably has a good chance, as does Solar + Ocean; I think both will be better than Earth + Ocean.

UPDATE: I had my Doom plot wrong, Solar + Doom is still a good strategy, but Earth + Ocean is probably better

Further info: Flame is a biased random walk; each day it changes by a value greater than Floor(Flame/4) + 1, but I can't see a further pattern to what value is chosen. The residual looks a bit like an exponential distribution, but the fit is terrible.

There's no easy pattern to much of the rest of the randomness; I'm gonna guess it's all some bamboozling combination of random integer generation.

Comment by Jemist on D&D.Sci August 2021: The Oracle and the Monk · 2021-08-16T20:43:22.629Z · LW · GW

Here are my findings and plans, before looking at any comments:

  • Solar is on a 28-day cycle, on each day there are a few possible options based on the day number mod 28
  • Sometimes solar is randomly higher but this is rare
  • Solar has recently gone up in power by (at least?) 15
  • On day 384, solar will be very strong, I think minimum power would normally be 42, so I suspect minimum will now be 57
  • Lunar is directly related to Solar: a 100% negative correlation with the Solar value of 14 days previously; I think the linear transformation is 75 - Solar
  • On day 384, Lunar will be 16 (unless Lunar isn't affected by the increase in solar, in which case it will be more, which would be sufficient, and better than doom)
  • Earth and Ocean are strongly negatively correlated
  • Flame is correlated with itself on a day-to-day basis but seems to be a random walk, difficult to predict into the future
  • Ash is entirely determined by the previous day's flame mana
  • Doom is on an 8-day cycle, with high variance
  • Minimum Doom on day 384 will be 19, maximum will be 31
  • Spite is entirely deterministic on a 140-day cycle
  • On day 384 spite will be 0
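These cycle and lag hypotheses are the sort of thing that's easy to verify mechanically. A sketch with toy data standing in for the real mana series (the function names and data layout are my own, not the challenge's):

```python
def find_cycle_length(series, max_period=200):
    """Return the smallest period p such that series[i] == series[i - p]
    for all i >= p, or None if no exact cycle is found."""
    n = len(series)
    for p in range(1, max_period + 1):
        if p < n and all(series[i] == series[i - p] for i in range(p, n)):
            return p
    return None

def check_lagged_relation(solar, lunar, lag=14, offset=75):
    """Test the hypothesis lunar[t] == offset - solar[t - lag]."""
    return all(lunar[t] == offset - solar[t - lag] for t in range(lag, len(solar)))

# Toy data: a deterministic 5-day cycle and a 2-day-lagged complement.
solar = [10, 20, 30, 25, 15] * 6
lunar = [None] * 2 + [75 - s for s in solar[:-2]]
print(find_cycle_length(solar))                    # 5 on this toy data
print(check_lagged_relation(solar, lunar, lag=2))  # True on this toy data
```

The same checks with `lag=14` and a candidate period of 28 would confirm or refute the Solar/Lunar claims directly on the real data.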

Therefore Doom + Solar is my strategy

Comment by Jemist on Fudging Work and Rationalization · 2021-08-14T18:29:26.024Z · LW · GW

I think the general case of what I was describing isn't always a 0.5% difference; it's any mistake which it's reasonable to make, but not acceptable to leave uncorrected.

Comment by Jemist on The Reductionist Trap · 2021-08-11T21:22:22.735Z · LW · GW

Yeah I think you're right actually. My own confusion was probably due to the conflation:

Holistic = Non-reductionist = Nonapple

Where the first step of this is incorrect, rather than the second step.

I think this whole confusion is what has led me to be too critical of "holistic" approaches in the past, where these approaches are in fact well developed.

Comment by Jemist on The Reductionist Trap · 2021-08-09T22:25:11.800Z · LW · GW

I think this might be a semantic distinction: if "I use non-reductionist methods in microbiology" conveys the same meaning as "I use metagenomics etc.", then it's not nonapples.

Thanks for the comment. I now think the original title put the emphasis in the wrong place so I've changed that.

Comment by Jemist on Uncertainty can Defuse Logical Explosions · 2021-08-04T18:09:58.431Z · LW · GW

Yeah, so there are four options: the four combinations of "there's some reason to take the smaller amount" and "I miscalculated which of $5 and $10 is larger", each with its own odds ratio. By D4 we'd eliminate the first one (neither holds). The remaining odds ratios normalize to something heavily weighted towards the first surviving option. I.e. given that the agent takes $5 instead of $10, it is pretty sure that it's taken the smaller one for some reason, gives a tiny probability to having miscalculated which of $5 and $10 is larger, and a really, really small probability that both are true.

In fact, were it to reason further, it would see that the fourth option (both at once) is also impossible; we have an XOR-type situation on our hands. Then it would end up with essentially all of the probability on "taken the smaller one for some reason".


That last bit was assuming that it doesn't have uncertainty about its own reasoning capability.

Ideally it would also consider that D4 might be incorrect, and still assign some tiny sliver of probability (the point is just that it should be pretty small) to both the first and fourth options. It wouldn't really consider them for the purposes of making predictions, but to avoid logical explosions, we never assign a "true" zero.
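The "never assign a true zero" idea can be illustrated with a toy normalization (the specific numbers here are illustrative placeholders, not the odds from the comment):

```python
def normalize_odds(odds):
    """Convert an odds vector to probabilities summing to 1."""
    total = sum(odds)
    return [o / total for o in odds]

# Four options; logical reasoning says options 1 and 4 are "impossible",
# so they get a tiny epsilon rather than a true zero.
EPS = 1e-9  # illustrative floor, not a principled choice
odds = [EPS, 1000.0, 1.0, EPS]
probs = normalize_odds(odds)
print(probs[1])  # ~0.999: dominated by "took the smaller one for some reason"
```

The "impossible" options never meaningfully affect predictions, but conditioning on them later can't produce a divide-by-zero style explosion.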