I don't want to listen, because I will believe you 2020-12-28T14:58:34.952Z
What are intuitive ways for presenting certainty/confidence in continuous variable inferences (i.e. numerical predictions)? 2020-12-25T00:55:39.451Z
[Meta?] Using the LessWrong codebase for a blog 2020-12-20T03:05:55.462Z
Machine learning could be fundamentally unexplainable 2020-12-16T13:32:36.105Z
Costs and benefits of metaphysics 2020-11-09T14:31:13.718Z
What was your behavioral response to covid-19 ? 2020-10-08T19:27:07.460Z
The ethics of breeding to kill 2020-09-06T20:12:00.519Z
Longevity interventions when young 2020-07-24T11:25:35.249Z
Divergence causes isolated demands for rigor 2020-07-15T18:59:57.606Z
Science eats its young 2020-07-12T12:32:39.066Z
Causality and its harms 2020-07-04T14:42:56.418Z
Training our humans on the wrong dataset 2020-06-21T17:17:07.267Z
Your abstraction isn't wrong, it's just really bad 2020-05-26T20:14:04.534Z
What is your internet search methodology ? 2020-05-23T20:33:53.668Z
Named Distributions as Artifacts 2020-05-04T08:54:13.616Z
Prolonging life is about the optionality, not about the immortality 2020-05-01T07:41:16.559Z
Should theories have a control group 2020-04-24T14:45:33.302Z
Is ethics a memetic trap ? 2020-04-23T10:49:29.874Z
Truth value as magnitude of predictions 2020-04-05T21:57:01.128Z
When to assume neural networks can solve a problem 2020-03-27T17:52:45.208Z
SARS-CoV-2, 19 times less likely to infect people under 15 2020-03-24T18:10:58.113Z
The questions one needs not address 2020-03-21T19:51:01.764Z
Does donating to EA make sense in light of the mere addition paradox ? 2020-02-19T14:14:51.569Z
How to actually switch to an artificial body – Gradual remapping 2020-02-18T13:19:07.076Z
Why Science is slowing down, Universities and Maslow's hierarchy of needs 2020-02-15T20:39:36.559Z
If Van der Waals was a neural network 2020-01-28T18:38:31.561Z
Neural networks as non-leaky mathematical abstraction 2019-12-19T12:23:17.683Z
George's Shortform 2019-10-25T09:21:21.960Z
Artificial general intelligence is here, and it's useless 2019-10-23T19:01:26.584Z


Comment by george3d6 on A non-mystical explanation of "no-self" (three characteristics series) · 2021-01-01T01:48:24.768Z · LW · GW

Can you give me one example of an invention that couldn't be communicated using the language of the time?

For example, "a barrel with a fire and a tiny wheel inside that spins by exploiting the gust of wind drawn towards the flame after it consumes everything inside, and which, using an axle, can be made to spin other wheels"... is a barbaric description of a single-chamber pressure-based steam engine (and I could add more paragraphs' worth of detail), but it's enough to explain it to people 2000 years before the steam engine was invented.

Comment by george3d6 on A non-mystical explanation of "no-self" (three characteristics series) · 2020-12-31T22:20:31.171Z · LW · GW

I can reduce "pushing the envelope" to other pre-existing concepts. It's a shorthand, not a whole new invention (which really would make little sense; new language is usually created to describe new physical phenomena or to abstract over existing language, maybe an exception or two exists, but I assume they are few).

Comment by george3d6 on A non-mystical explanation of "no-self" (three characteristics series) · 2020-12-31T22:18:29.622Z · LW · GW

Your metaphor doesn't quite work, because you are trying really hard to show me the color red, only to then argue I'm a fool for thinking there is such a thing as red.

As in, it might be that no person on Earth has such a naive concept of subjective experience; they are just not used to expressing it in language. Then, when you try to make them express subjective experience in language and/or explain it to them, they say:

  • Oh, that makes no sense, you're right

Instead of saying:

  • Oh yeah, I guess I can't define this concept, central to everything about being human, in more than one catchphrase after 10 seconds of thinking.

But again, what I'm saying above is subjective. Please go back and consider my statement regarding language; if we disagree there, then there's not much to discuss (or the discussion is much longer and moves into other areas), because at the end of the day, I literally cannot know what you're talking about. Maybe I have a vague impression from years of meditation as to what you are referring to... or maybe not; maybe whatever you had in your experience is much different, and we are discussing two completely different things. But since we are very vague when referring to them, we think we have a disagreement about what we see, when instead we're just looking in completely different places.

Comment by george3d6 on A non-mystical explanation of "no-self" (three characteristics series) · 2020-12-31T19:32:47.415Z · LW · GW

I can't say something is right or wrong or probable unless I have a system of logic to judge those under.

Language is a good proxy for a system of logic, though sometimes (e.g. math and science) it's not rigorous enough. But for most discussion it seems to do kind of fine.

If you are introducing new concepts that can't be expressed using the grammar and syntax of the English language, I'm not sure there's a point in discussing the idea.

Using new terms or even syntax to "reduce" a longer idea is fine, but you have to be able to define the next terms or syntax using the old one first.

Doesn't that seem kind of obvious?

Just to be clear here, my stance is that you can actually describe the feeling of "being self" in a way that makes sense, but that way is bound to be somewhat unique to the individual and complicated.

Trying to reduce it to a 10 word sentence results in something nonsensical because the self is a more complex concept, but one's momentary experience needn't be invalid because it can't be explained in a few quick words.

Nor am I denying that introspection is powerful, but introspection in the typical Buddhist way that you prescribe seems too simplistic to me, and empirically it just leads to people content with being couch potatoes.

If you tried solving the problem instead of calling it a paradox based on a silly formulation, if you tried rescuing the self, you might get somewhere interesting... or maybe not, but the other way seems both nonsensical (impossible to explain in a logically consistent way) and empirically leads to meh-ish results, unless your wish in life is to be a meditation or yoga teacher.

Comment by george3d6 on A non-mystical explanation of "no-self" (three characteristics series) · 2020-12-31T17:05:47.456Z · LW · GW

As for paraphrasing your argument, that's the thing: I can't. My point here is that you don't have an argument; you are abusing language without realizing it.

I'm not saying you're doing so maliciously or because you lack understanding of English. What I call "abuse" here would pass in most other essays, but in this case the abuse ends up sweeping under the rug a bunch of unsolvable phenomenological issues that would normally arise to oppose your viewpoint.

 Let me try to give a few examples:

from behind their eyes; but they are actually aware of the sensation, as opposed to being aware from it

The English language lacks the concept of "being aware from a sensation"; in fact, the English language lacks any concept around a "sensation" other than "experiencing it".

"I am experiencing the world from behind my eyes" and "I am experiencing a pain in my foot" are exactly the same in terms of a "self" that is "having" a "sensation". This is very important, since in many languages, such as those that created various contemplative religions, "body" and "soul" are different things, with the "soul" seeing, the "body" feeling, and the "self" being the "soul" (I'm not a Pali scholar, just speculating as to why the sort of expression above might have made sense to ancient Hindus/Buddhists). In the English language (and presumably in English speakers, since otherwise they'd feel the need for two terms) this idea is not present. The same "I" is seeing the world and experiencing pain.

Maybe you disagree, fine, but then you have to at least use an expression that is syntactically correct in the English language, instead of saying:

being aware from a sensation

This is the minimum amount of rigor necessary; it's not the most rigorous you can get (that would be using a system of formal logic), but it is the minimum.


Another example, more important to your overall argument, but where the mistake is less subtle:

It is a computational representation of a location, rather than being the location itself

First, and very important: what is "It", the subject of this sentence? Try to define "It" and either the problem vanishes or the sentence no longer makes sense. Another way to see this is by examining the phrase:

"being the location itself"

A {location} can't {be}, not in the sense you are using {be} as {conscious as the}.


These sorts of mistakes are present throughout this paragraph and the neighboring ones, and I think they go unnoticed because it's usually acceptable to break a few syntactic rules in order to be more poetic or faster in what you're communicating. But in this case, by breaking the rules of syntax you end up subtly saying things that make no sense and can make no sense no matter how much you'd try to make them so. Hence why I'm trying to encourage you to be more explicit.

First, just try putting the whole text into a basic syntax checker (e.g. Grammarly) and making it syntactically correct; I'm fairly sure you will be enlightened by this exercise.


I'd speculate that the generator of these mistakes is that you are subtly shifting in your thinking between a perspective that says "an external world exists in a metaphysical way, completely separated from my brain" and one that says "everything in the external world, including my body, is an appearance in consciousness". And while both of these views are valid on their own, using both viewpoints in a unified argument ignores a few "hard problems of {X}".

But maybe I'm mistaken that this is what you are doing; I can't see inside your mind. However, I am fairly certain that simply trying to be syntactically correct will show you that whatever you are trying to express makes no sense in our language. And if you go deeper, blame the language, and abstract it with a system of formal logic... then you will either run into an inconsistency or become the most famous neuroscientist (heck, scientist) in the history of mankind.

Comment by george3d6 on A non-mystical explanation of "no-self" (three characteristics series) · 2020-12-31T16:23:27.131Z · LW · GW

That makes more sense if I use the term "phenomenological frameworks"

Comment by george3d6 on A non-mystical explanation of "no-self" (three characteristics series) · 2020-12-31T12:30:33.706Z · LW · GW

I'm not sure how to make it more clear, I can suggest rereading your own words again and trying to see if you can spot any inconsistency.

Comment by george3d6 on A non-mystical explanation of "no-self" (three characteristics series) · 2020-12-31T08:08:54.461Z · LW · GW

I haven't read the article fully, but I'm familiar with the general ideas presented thus far. One of the most philosophically naive is one I've also heard, in various forms, from Loch Kelly, Sam Harris, and Douglas Harding:

looking at the world from behind their eyes; but they are actually aware of the sensation, as opposed to being aware from it. It is a computational representation of a location, rather than being the location itself. Still, once this representation is fed into other subsystems in the brain, those subsystems will treat the tagged location as the one that they are “looking at the sense data from”, as if they had been fed a physical map of their surroundings with their current location marked.

Here you're basically switching phenomenological frameworks to perform a magic trick.

Either "the world" exists, truly exists outside my brain, and then there is "something looking out at the world", and that representation is correct.

Or: "the world" is just "inside my brain", but that world then includes the physical representation of my body, which is part of it, and that physical representation is still "outside and looking out at the world".

Both these viewpoints can be correct simultaneously; they are different perspectives into which you can collapse the concept of "the outside world".

But shifting between the two without taking into account that a shift between two irreconcilable perspectives has happened is missing a VERY important point.

I think that, in having to reconcile those perspectives, a lot of truth can be found, though it may be truth that doesn't exactly confirm a Buddhist worldview.

Comment by george3d6 on I don't want to listen, because I will believe you · 2020-12-28T17:13:16.492Z · LW · GW

Correction by me to avoid biased word.

The word choice was intentional, since "alternative" is a loaded word. People don't think of an insane relative's Facebook posts when you say "alternative to conventional medical knowledge".

Everybody can agree that there were historical situations in which this was the right thing to do, and others in which this was the wrong thing to do. So the question is: how to distinguish them?

There obviously were, but in aggregate the establishment opinion is likely the correct one, unless you have very good proof that you are surrounded by geniuses who can give you a better take.

Also, please note that I'm calling for "starting with" the establishment view, not limiting yourself to it.

That we are unconsciously suggestible to information is a valuable point. Now, this begs the question: why do some people leave cultish beliefs after being raised in them?

What's the chance those people still hold a lot of misguided beliefs, and that leaving the cult took (and still takes) a great deal of time spent reasoning through the false beliefs they were indoctrinated with?

See the first heading: I'm not claiming truth doesn't win over falsehood in the end (or rather, that more probable beliefs don't win over less probable alternatives), only that significant mental energy must be spent for such a thing to happen, and we can't do it with every single belief we have.

Comment by george3d6 on [Meta?] Using the LessWrong codebase for a blog · 2020-12-20T23:10:10.896Z · LW · GW

Site search is powered by Algolia, which is kind of expensive and not especially good.

Why not use something like ?

I had to use it for a (completely unrelated) project, and integrating it into a website is a sub-hour task. It's also absurdly portable, since with Rust you can just get a statically compiled version of the package that only needs a dynamically linked libc.

Comment by george3d6 on [Meta?] Using the LessWrong codebase for a blog · 2020-12-20T23:01:19.879Z · LW · GW

Thanks a lot for the in-depth reply.

I must agree with you that there are a lot of dealbreakers for me there. But it's interesting to see what goes into deploying & maintaining the website.

I do think that if you guys refactor it, it might be quite nice to put a "generic" version of it out there, including some instructions like these for people that want to run it. There's a lack of well-executed open source community platforms at the moment.

This one may be messy and not the most efficient or prettiest in terms of the underlying code, but the user experience is one of the most polished I've ever stumbled upon.


The phone bit I find surprising though, since in my experience I've had issues with LW on my laptop, but on my Android it runs really smoothly (latest stable Firefox on both). But I guess it depends on a lot of factors, and "runs smoothly" is rather subjective after a point.

Comment by george3d6 on [Meta?] Using the LessWrong codebase for a blog · 2020-12-20T22:58:22.351Z · LW · GW

This number is on the money for me, actually. With the traffic above (a 5k visitors/day average, plus front-page HN and front-page-of-some-mid-sized-subreddit spikes, which end up in the low tens of thousands per hour)... I pay a grand total of ~$30/month (CDN included), though I use the same machine to run coordination scripts for my GPU machines and to host friends' WordPress blogs.

Then again, I optimized the website quite a bit, and the comment+auth system (remark42) is also a monster in terms of minimalism and caching.

Comment by george3d6 on [Meta?] Using the LessWrong codebase for a blog · 2020-12-20T18:39:21.720Z · LW · GW

Good to know. I suspect you're right: since many people posting on LW have blogs, and I'm unlikely to be the first with this idea, I assume only 2 deployments in existence means it's annoying to maintain.

Indeed, I took a closer look at the thing yesterday eve and it did seem a bit, ahem, convoluted (not necessarily a bad thing, but I assume it takes a bit of time to get an intuitive feel for it).

Thanks for the feedback

Comment by george3d6 on [Meta?] Using the LessWrong codebase for a blog · 2020-12-20T18:36:24.207Z · LW · GW

There is remark42 (what I use currently), which is a plug-and-play comment system with upvotes, users, and loads of auth options. It doesn't have karma, but I believe that would be trivial to implement; it has the problem of not supporting CSS, but again, that's fairly trivial to add (or, at least, simpler than modifying LW).

So probably a semi-static blog + a remark42 mod as a comment system would replicate the effect (and that's what I might go for).

Comment by george3d6 on Machine learning could be fundamentally unexplainable · 2020-12-18T22:58:27.158Z · LW · GW

Regarding the thought experiment:

For the pedantic among you assume the AUC above is determined via k-fold cross-validation with a very large number of folds and that we don’t mix samples from the same patient between folds
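The "don't mix samples from the same patient between folds" requirement is what grouped cross-validation enforces. A minimal pure-Python sketch (the function name and the greedy fold-balancing strategy are my own, purely illustrative):

```python
from collections import defaultdict

def group_k_fold(sample_groups, k):
    """Assign each sample index to a fold such that all samples sharing
    a group (e.g. a patient id) land in the same fold."""
    # Collect sample indices per group.
    by_group = defaultdict(list)
    for idx, group in enumerate(sample_groups):
        by_group[group].append(idx)
    # Greedily put the largest groups into the currently smallest fold,
    # keeping fold sizes roughly balanced.
    folds = [[] for _ in range(k)]
    for indices in sorted(by_group.values(), key=len, reverse=True):
        smallest = min(folds, key=len)
        smallest.extend(indices)
    return folds

# Usage: samples 0-5 belong to three patients; no patient's samples
# will be split across folds.
patients = ["p1", "p1", "p2", "p2", "p3", "p3"]
folds = group_k_fold(patients, k=3)
```

With many folds and this constraint, each held-out fold only ever contains patients the model never saw during training.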


As a general rule of thumb, I agree with you: explainability techniques might often help with generalization, or at least be intermixed with techniques that do. For example, for techniques that alter the input space to work, it helps to train with dropout and to have certain activations that could be seen as promoting homogeneous behavior on OOD data.
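The "techniques that alter the input space" family can be sketched without any framework: perturb one feature at a time and measure how much the model's output moves. A toy sketch with a stand-in linear model (all names and weights are illustrative, not any particular method's API):

```python
def toy_model(x):
    # Stand-in for a trained network: a fixed linear scorer.
    weights = [0.1, 2.0, -0.5]
    return sum(w * xi for w, xi in zip(weights, x))

def occlusion_importance(model, x, baseline=0.0):
    """Score each feature by how much the output changes when that
    feature is replaced with a baseline value."""
    base_out = model(x)
    scores = []
    for i in range(len(x)):
        perturbed = list(x)
        perturbed[i] = baseline
        scores.append(abs(base_out - model(perturbed)))
    return scores

scores = occlusion_importance(toy_model, [1.0, 1.0, 1.0])
# Feature 1 (weight 2.0) dominates the explanation.
```

The connection to dropout is that a network trained with randomly zeroed inputs tends to behave sensibly on these perturbed (slightly OOD) inputs, which is exactly what this kind of probing relies on.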

Comment by george3d6 on Machine learning could be fundamentally unexplainable · 2020-12-17T19:59:18.331Z · LW · GW

This seems pretty false to me. You yourself give some counterexamples later.

Hmh, I don't think so.

As in, my argument is simply that it might not be worth grokking through the data, and that "explanation" is a poorly defined concept which we don't have even for human-made understanding.

I'd never claim that it's impossible for me to know something specific about the outputs of an algorithm I have full data about; after all, I could just run it and see the specific output I care about. The edge case comes when I can't run the algorithm due to computing-power limitations but someone else can, by having much more compute than me. In that case the problem becomes one of trying to infer things about the output without running the algorithm itself (which could be construed as similar to the explanation problem, maybe, if I twist my head at a weird angle).

Anyway, I can see your point here, but I see it from a linguistic perspective: we seem to use similar terms with slightly different meanings, and this leads me to not quite understanding your reasoning (and I assume the feeling is mutual). I will try to read this again later and see if I can formulate a reply, but for now I'm unable to put my finger on what those linguistic differences are, and I find that rather frustrating on a personal level :/

Comment by george3d6 on Machine learning could be fundamentally unexplainable · 2020-12-17T13:15:22.889Z · LW · GW

(their network weights, the activations, etc.)

I still don't understand the example. If you have access to everything about a given algorithm you are guaranteed to be able to know anything you want about it.

If "cheating" means something like "deciding at T that I will do action X at T+20, even though I said 'I will do action Y at T+20'"... then that decision is stored somewhere in those parameters, and as such is known to anyone with access to them.

If neither system knows what action will happen at T+20 until T+20 arrives, then it becomes a problem of one Turing machine trying to simulate another Turing machine, so the amount of operations available from T until T+20 will decide the problem.

But I feel like the framework you are using here doesn't really make a lot of sense, as in, what you are describing is very anthropomorphized.

Comment by george3d6 on Machine learning could be fundamentally unexplainable · 2020-12-17T11:27:12.394Z · LW · GW

That example doesn't really make sense to me; could you taboo the word "lying"? I am rather confused as to what you mean by it; it could have a lot of different interpretations.

Comment by george3d6 on Machine learning could be fundamentally unexplainable · 2020-12-16T15:34:52.002Z · LW · GW

If, on the other hand, you are lied to regularly, and you are promised jobs and tax breaks and they don't materialize, then I don't find it surprising that some people don't trust vaccines.


Kind of unrelated to this article, or rather, one of many tangents that diverge from the topic, but I actually really like this idea... I don't think I ever considered this perspective when thinking about "applied epistemology": that some people's place in society might make them predisposed to low levels of trust, not because of their environment, but because of how often "society" itself lies to them (be it via politicians making promises of jobs, or sugar-oil bars making promises of losing weight... and other obvious falsehoods that people end up not being immunized against in their upbringing).

Comment by george3d6 on Jeff Hawkins on neuromorphic AGI within 20 years · 2020-12-13T11:41:31.322Z · LW · GW

Yeah, bad choice of words in hindsight, especially since I was criticizing the subject of the article, not necessarily its contents.

But now there's 2 comments which are in part reacting to the way my comment opens up, so by editing it I'd be confusing any further reader of this discussion if ever there was one.

So I think it's the lesser of two evils to leave it as is.

Comment by george3d6 on Jeff Hawkins on neuromorphic AGI within 20 years · 2020-12-13T11:38:35.355Z · LW · GW

The point of benchmarking something is not necessarily to see if it's "better", but to see how much worse it is.

For example, a properly tuned FCNN will almost always beat a gradient booster at a mid-sized problem (say, < 100,000 features once you bucketize your numbers, which a gradient booster will require, and one-hot encode your categories, and < 100,000 samples).

But gradient boosting has many other advantages: training time, stability, ease of tuning, efficient ways of fitting on both CPUs and GPUs, more flexible trade-offs between compute and memory usage, metrics for feature importance, potentially faster inference-time logic, and potentially easier online training (though the last two are arguable and kind of beside the point; they aren't the main advantages).

So really, as long as benchmarks tell me a gradient booster is usually just 2-5% worse than a finely tuned FCNN on this imaginary set of "mid-sized" tasks, I'd jump at the option to never use FCNNs here again, even if the benchmarks came up seemingly "against" them.
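The shape of such a "measure the gap, not the winner" benchmark is easy to sketch. A minimal scikit-learn version (assuming scikit-learn is available; the dataset and default hyperparameters are placeholders, not a tuned comparison):

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Toy stand-in for a "mid-sized" tabular task.
X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Same split, two model families; the interesting number is the gap,
# not which one wins.
gb = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
nn = make_pipeline(
    StandardScaler(), MLPClassifier(max_iter=2000, random_state=0)
).fit(X_tr, y_tr)

gap = gb.score(X_te, y_te) - nn.score(X_te, y_te)
```

On a real suite of tasks, the question would be whether `gap` stays within a few percent once both models are properly tuned; if it does, the gradient booster's operational advantages decide the matter.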

Comment by george3d6 on Jeff Hawkins on neuromorphic AGI within 20 years · 2020-12-13T06:37:25.021Z · LW · GW

I don't particularly disagree with anything you said here, my reaction was more related towards the subject of the article than the article itself.

Well, I deeply disagree with the idea of using reason to judge the worth of an idea, I think as a rule of thumb that's irrational, but that's not really relevant.

Anyway, HMMs are something I was unaware of. I just skimmed the George D. paper and it looks interesting; the only problem is that it's compared with what I'd call a "2 generations old" language model, in the form of a char-RNN.

I'd actually be curious to try to replicate those experiments and use the occasion to bench against some AIAYN-style models. I'll do my own digging beforehand, but since you seem familiar with this area: any clue if there's some follow-up work to this that I should be focusing on instead?

Comment by george3d6 on Covid 12/10: Vaccine Approval Day in America · 2020-12-12T20:08:53.399Z · LW · GW

If you can send back the jacket and get your money I'd seriously consider doing that.

Lookup any serious mountaineering guide and you'll see one thing regarding clothing:


Layering, layering, layering. Be it in the city or at 8000 meters high, layering seems like the most reasonable way to protect yourself.

After all, there are temperature variations of 5-10 degrees during a day, and, depending on whether you're traveling or not, they can vary by 20-110 degrees throughout the year. So what are you going to do? Have a special wardrobe for all temperatures? **** no.

Skin-tight shirt, normal shirt, Merino sweater, fluffy warm jacket, ski/boating coat.

Merino leggings, some normal pants, potentially some fluffy pants (overkill usually, ankle socks are enough), and skiing/boating pants (by which I mean **** that is made to keep you warm in 100+ km/h wet wind).

In my experience, you can get the equipment needed for a 4k in the Alps for < $300 (granted, this summer had great sales); it pains me to see people spend $1000 on a city jacket.


Also, the places where you "feel" cold are your face, hands, and feet. By the time your abdomen feels uncomfortably cold you've made a mistake and need to get the fuck to safety as soon as possible because permanent damage is setting in (unless you got your chest wet or something).

But you'll feel horrible pain in your hands, feet, and face a dozen hours or even days before your "core" body temperature starts dropping enough for you to feel it. Granted, keeping the core warm will allow it to keep normal blood circulation to the hands/feet/face (thus warming them up)... but that's a secondary effect, and you're walking a fine line between warm and sweating.

If you're uncomfortable with the cold, I'd bet at 5/1 odds that you'll feel better with:


  • Very very thick woolen socks, maybe 2 pairs, and some warm boots
  • Thick gloves
  • A mask (the kind bad guys wear in movies) over your face, and a hat, and a scarf, and some glasses (or some other more reasonable face-protection)

Even if your core is exposed (e.g. just a light jacket and some leggings under your pants).

than with:

  • A very thick jacket but more exposed peripheries.

Comment by george3d6 on How common is it for one entity to have a 3+ year technological lead on its nearest competitor? · 2020-12-12T07:25:38.849Z · LW · GW

Distinguishing between a technological lead and ineffective competition is also important. An example is database engine technology. Some proprietary databases are orders of magnitude more efficient/scalable than any open source comparable, which looks qualitative, but is widely recognized as a product of design quality rather than any technological lead. (see also: Google’s data infrastructure)

Seems untrue to me, and I've benchmarked dozens of databases for dozens of problems.

In the column-store space (optimized for aggregate analytics: distributed execution of aggregated queries, quick filtering based on ordering, and data compression), ClickHouse is the best there is in my experience... I made that point 4 years ago, but now you can find plenty of other benchmarks for it. It's used by many large-scale search engines and advertisers (except Google), and, among others, by CERN.

In the wide-column storage space, and more broadly in the "heavy filtering, large amounts of data" space, Cassandra (originally from Facebook) and now Scylla seem to lead. I've never had to put dozens of petabytes in a database, but the few people that do need this seem to agree.

In the transactional space, I haven't seen anyone bring a significant gain over Postgres and MariaDB yet.

For KV stores and in-memory caching you have Aerospike, RocksDB, and, more recently, stuff based on TiKV... all slightly different trade-offs, all open source. I'm not even aware of proprietary products here, to be honest.

Those 4 combined cover most use cases a db has.

So, I'm not saying I'm convinced I'm correct, but could you provide some examples to back up your claims? Name some names or, ideally, provide some use cases/domains where one could find benchmarks that demonstrate a proprietary database has the upper hand.
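For the aggregate-analytics case, the shape of such a benchmark is simple to sketch, with the stdlib's sqlite3 standing in for whichever engine is being compared (the schema, scale, and query here are purely illustrative; a real comparison would run the same workload against each candidate engine):

```python
import sqlite3
import time

def bench_aggregate(conn, query, runs=3):
    """Time an aggregate query, keeping the best of a few runs
    to smooth out cache and scheduling noise."""
    best = float("inf")
    for _ in range(runs):
        start = time.perf_counter()
        conn.execute(query).fetchall()
        best = min(best, time.perf_counter() - start)
    return best

# Illustrative workload: 10,000 rows across 100 users.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (user_id INTEGER, amount REAL)")
conn.executemany(
    "INSERT INTO events VALUES (?, ?)",
    [(i % 100, float(i)) for i in range(10_000)],
)

elapsed = bench_aggregate(
    conn, "SELECT user_id, SUM(amount) FROM events GROUP BY user_id"
)
```

Running the same script against each engine and comparing `elapsed` at realistic scales is exactly the kind of evidence that would settle the proprietary-vs-open-source question.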

Comment by george3d6 on Jeff Hawkins on neuromorphic AGI within 20 years · 2020-12-12T07:08:41.591Z · LW · GW

Oh God, a few quick points:

  • Require some proof of knowledge from the people you pay attention to: at least academic success, but ideally market success. This guy has been peddling his ideas for over 10 years with no results or buy-in.
  • His conception of the ML field is distorted, in that for his points to stand one has to ignore the last 10-20 years of RNN R&D.
  • Even assuming he made a more sophisticated point, there's hardly any reason to believe brain designs are the pinnacle of efficiency; indeed, they likely aren't, but building digital circuits or quantum computers via evolution might require too much slack.

Comment by george3d6 on Why are delicious biscuits obscure? · 2020-12-08T07:36:10.842Z · LW · GW

A more practical explanation based on seeing the recipe (I'm very bad at baking, so I can't cook it):

  • Too little sugar (say, ~35g/100g after baking, though I'll admit I'm unsure how much of the polysaccharides in the flour the heat will break down, or into what).
  • Crumbly: how does this whole mix bind together tightly? How will it not break into tiny pieces if transported in a truck over hundreds of km?
  • Too fat for most palates (~22g/100g, assuming "normal" 80-85% fat butter); most people are not used to those levels of saturated fats, pastries tend to be less fat, and, importantly, their fats are mainly vegetable oils.
  • Ginger is hit and miss; it's like aceto balsamico: some people add it to every dish and it makes everything better, other people hate it, with little in between. So adding ginger is a move that will probably divide your audience quite heavily (as in, it's a taste boost for some but a downgrade for others; not enough to make or break the whole experience, but enough to elevate it or to turn it from good to mediocre).

Comment by george3d6 on My Fear Heuristic · 2020-12-02T20:24:08.305Z · LW · GW

The crux of the matter is determining what "stupid" means.

I'm afraid of boredom, darkness and prolonged effort.

Yesterday I decided an 8 km trek from 2300 to 3800 m (having had no acclimatization in the last 5 months) was something I should do, in part to counteract these fears.

I had done similar feats of mountaineering before, and I had equipment for 4k+ altitudes in the Alps (horrible wind, -10 to -20 degree temperatures and the like). My climb was with moderate-to-high wind and 10 to -5 degree temperatures.

My "rational" self told me all of my fears were the kinds I should face; I was so over-equipped that my one risk was heaving due to altitude sickness and having to turn back, something safe at that low an elevation. Armed with the info I had yesterday, I'd still agree.

Yesterday, however, I lacked knowledge about:

  • Just how much glasses help regulate face temperature
  • How hellish 90% air humidity feels with 100km/h winds in freezing temperatures
  • How inaccurate wind forecasts can be
  • How much havoc the stress caused by darkness can wreak upon my body's ability to allocate energy efficiently.

90% air humidity + 80 km/h wind + -5 degrees on an exposed mountain face at 3000 m with no acclimatization is a horrible experience, even with very good gear. By horrible I mean: I'm fairly sure I'd have risked long-term injury or death had I turned back 1 hour later.

At the outset, the fear I had seemed like the kind that might stop you from flirting with a pretty girl at a restaurant.

In hindsight it was that kind of fear, but if the restaurant was in a very conservative region of Saudi Arabia and the girl was the favorite wife of a regional judge.

Overall I agree with the approach though. Just keep in mind fear is there in part to guard against incomplete information and bets that have much greater downsides.

Note: If you want a more suitable example, think of fear of the doctor/dentist stopping people from going in for minor procedures (usually small upside), and the fact that, in many cases, this might actually be the correct choice, since even minor procedures carry a risk of death or chronic pain (cosmetic tooth extractions, vasectomies, IUDs are great examples). The above just came to mind due to proximity, but maybe it doesn't illustrate my point well, since most people would blanket-ban high mountain climbing as "stupid" without room for appeal.

Comment by george3d6 on Should we postpone AGI until we reach safety? · 2020-11-20T11:42:05.734Z · LW · GW

Maybe one could just run the assembly lines in the AI factories at half speed, or even better, reduce the assembly quota of AI workers to half.

This will give workers more holidays, maybe even 3-4 day weekends, gets AI labor unions off your back and the AI factories are still cranking, so it's not like progress stops entirely.

Then again, putting a higher VAT on AI sales might be more practical.

The issue is turning this into a coherent policy proposal, but I'm sure brilliant regulatory minds like those that wrote legislation defending us from the evils of UDP sockets and OpenSSL would rise to the task of ironing out the edges.

Overall though, your idea is brilliant, you're on the right path.

Comment by george3d6 on When Money Is Abundant, Knowledge Is The Real Wealth · 2020-11-18T05:41:48.221Z · LW · GW

I will ping you with the more cohesive + in depth, syntactically and grammatically correct version when it's done. Either this Monday or the next, it's been in draft form ever since I wrote this comment...

Though the main point I'm making here is basically just Taleb's Skin in the Game idea. He doesn't talk about the above specifically, but the idea flows naturally after reading him (granted, I read the book ~3yrs ago, maybe I'm misremembering it).

Comment by george3d6 on Is Success the Enemy of Freedom? (Full) · 2020-11-16T13:47:57.811Z · LW · GW

I don't really agree with the concepts used here, nor with the conclusion.

The basic idea behind this article is that an increase in responsibility & standards in one field limits one to that field. This is simply untrue, or at least often not true.

For example, an academic might indeed get pigeonholed into studying the niche they got good at, but that is only true until tenure.

A financial advisor might indeed be stuck doing boring backhanded deals and attending high-class NY bars to get access to the best dark pools, but that's only until they have enough money to quit working.


In part, the problems you outline are ones regarding how people put less value on freedom as they progress through life. But this is only natural and happens regardless of success levels.

However, given a change in value is possible, transitioning from a field to another should be easier, not harder, with success.


So the conclusion of putting on baby wheels when learning something new is not one I'd agree with. Indeed, I'd rather push myself much harder than I would push young me, since my previous experience and knowledge is likely to give me a better starting point for many endeavours, at least for most of those I could potentially find interesting.

Even if it wouldn't, the "don't push yourself too hard" is simply a counter-biasing tool, useful only for saying "Hey, remember that X thing you are really good at, well, Y is completely unrelated to that, so expect to be mediocre at Y and take it slow".

Comment by george3d6 on When Money Is Abundant, Knowledge Is The Real Wealth · 2020-11-11T11:25:47.161Z · LW · GW

I somewhat agree with this general idea, but I think that most people who try to "build knowledge" ignore a central element of why money is good: it's a hard-to-fake signal.

I agree with something like:

So, two years ago I quit my monetarily-lucrative job as a data scientist and have mostly focused on acquiring knowledge since then. I can worry about money if and when I know what to do with it.

A mindset I recommend trying on from time to time, especially for people with $100k+ income: think of money as an abundant resource. Everything money can buy is “cheap”, because money is "cheap".

To the extent that I basically did the same (not quit my job, but got a less well-paying, less time-consuming job doing a thing that's close to what I'd be doing in my spare time had I had no job).

But this is a problem if your new aim isn't to make "even more money", i.e. say a few million dollars.


The problem with money is when it scales linearly: you make 50k this year, 55 the next, 100k ten years later. Because the difference between 50k and 100k is indeed very little.

But the difference between 100m and 100k isn't, 100m would allow me to pursue projects that are far more interesting than what I'm doing right now.


Knowledge is hard to anchor. The guy building TempleOS was acquiring something like knowledge in the process of being an unmedicated schizophrenic. Certainly, his programming skills improved as his mental state went downhill. Certainly he was more knowledgeable in specific areas than most (who the **** builds an x86 OS from scratch, kernel and all? There are maybe like 5 or 6 of them total; I'd bet there are fewer than 10,000 people alive that could do that given a few years!?)... and those areas were not "lesbian dance PhD" style knowledge, they were technical applied engineering areas, the kind people get paid in the millions for working in.

Yet for some reason, poor Terry Davis was becoming insane, not smart, as he went through life.

Similarly, people doing various "blind Inuit pottery making success gender gap PhD" style learning think they are acquiring knowledge, but many of the people here would agree they aren't. Or at least that they are acquiring knowledge of little consequence, which will not help them live more happily or effect positive change, or really any change, upon the world.

At most, you can see the knowledge you've acquired "fail" once it hits an extreme, once you're poor, sad, and alone in spite of a lifetime of knowledge acquisition.

Money, on the other hand, is very objective: everyone wants it, most need it, and everyone, to some extent, won't give up theirs or print more of it very easily. It's also instant. Given 10 minutes I can tell you, +/-1%, how much liquidity I have access to in total at this very moment. That number will then be honored by millions of businesses and thousands of banks across the world, who will give me services, goods or precious metals, stakes in businesses and government bonds in exchange for it. I can't have any such validation with knowledge.

So is it not a good "test of your knowledge" to try and acquire some of it?

Even if doing a 1-1 knowledge-money mapping is harmful, doing a, say, 0.2-1 knowledge-money mapping isn't. Instead it serves as a guideline. Are you acquiring relevant knowledge about the world? Or maybe you're just becoming a numerology quack, or a religious preacher, or a self-help guru, or a bullshitter, or whatever.

Which is not to say the knowledge-money test is flawless, it isn't; it's just the best one we have thus far. Maybe one could suggest other tests exchanging knowledge for things that one can't buy (e.g. affection), but most of those things are much harder to quantify, and trying to "game" them would feel dirty and immoral; trying to "game" money is the name of the game, everyone does it, that's part of its role.

Comment by george3d6 on Is corruption a valuable antidote to overregulation? · 2020-11-09T00:28:15.792Z · LW · GW

Meant to say public servant, no idea why I used that abbreviation, sorry.

Comment by george3d6 on Is corruption a valuable antidote to overregulation? · 2020-11-08T16:37:33.299Z · LW · GW

Corruption can help solve problems of disproportionate blame vs credit for politicians and other functionaries.

That is to say, assume a given public servant (PS) will get much more blame for breaking/removing a regulation if things go south than praise if they go well. In that case, if an ambitious project needs to get rid of said regulation, its backers must balance the PS's cost matrix (e.g. break this regulation for us, and here's some money, plus extra if things go well).

On the other hand, the kind of corruption the US (and in part the EU) has works on a reverse model, where it's the government that gives money to businesses in order to get help passing policies, rather than the businesses that pay officials to get rid of regulations. (E.g. the way the US does healthcare, college, military equipment contracts, or space-exploration).

These two types of corruption seem completely unrelated to me. So really, I think the answer is "Yes, but a narrower definition of corruption is needed to make the point stick".

Comment by george3d6 on Covid 10/22: Europe in Crisis · 2020-10-22T21:24:03.581Z · LW · GW

My personal recommendation for any Europeans that can travel to Spain is Gran Canaria (there are safe corridors to head to it, including the somewhat safe option via Valencia if you don't want to bother with 2x border crossings).

The case count is extremely low (as a % of population), laws regarding masks are strongly enforced (as far as I hear) and, best of all, the sunny summer-ish weather is potentially helpful. Not necessarily in lowering infection rates (extremely debatable) but in not succumbing to a respiratory infection so easily (and the first and most critical phase of covid-19, as far as I know, can still be modeled like a serious respiratory infection with a few *s).

Obviously there's the cost-benefit analysis of whether or not a flight is actually worth the risk; I think for many people here it might not be. I.e. if you can stand a few more months of heavy quarantine then it's probably safer to just get everything delivered and stay put. But if you want to continue in "life back to normal" mode with minimal risk of infection and serious side effects, it seems like the best option.

Comment by george3d6 on Is Stupidity Expanding? Some Hypotheses. · 2020-10-17T11:19:05.006Z · LW · GW

I guess it depends on what you classify as stupidity, I'd wager the reason is a mix of:

People use intelligence for different things in different eras. Just as language, music, art changes over time, so does thinking. I’m just not keeping up, and assuming because kids these days can’t dance the mental Charleston that they can’t dance at all.


What I’m interpreting as rising stupidity has been the collapse in power and status of that clique and the political obsolescence of the variety of “truth” and “rationality” I internalized as a child. Those pomo philosophers were right all along.

The arguments here are many and long, so let me point out a few:

  1. "Intelligence", as it was viewed "back in the day", is associated with a corrupt meritocratic system and thus people don't want to signal it. See "The Tyranny of Merit"; I believe it explains this point much better, or for a quicker listen, the PEL discussion with the author.
  2. You are not looking for intelligence, you are looking for "signals" of intelligence that have changed. Your definition of an "intelligent" person probably requires, at minimum, the ability to do reasonably complex mental calculations, the ability to write grammatically correct <their native language>, the ability to write (using a pen), and a college degree (or at least the ability to sit still and learn in a college-style education). But all those 4 skills have been made redundant, and thus potentially harmful for those who still hang on to them instead of, e.g.: using a computer which includes a spellchecker, using a programming language for complex computational problems, learning in short and efficient bursts from varied sources depending on your immediate interests. An 18th-century Puritan would think you are somewhat dumb for not knowing a bit of Greek or Latin and having not read at least one version of the Bible in both those languages.

As well as:

People ordinarily use different modes of thinking in different communications contexts. In some, finding the truth is important and so they use rational intelligence. In others, decorative display, ritual, asserting dominance or submission, displaying tribal allegiances, etc. are more important and so they use modes more appropriate to those things. It’s not that people are getting stupider, but that these non-intelligent forms of communication (a) are more amplified than they used to be, (b) more commonly practiced than they used to be, or (c) are more prominent where I happen to be training my attention.

E.g. you and I might think a famous yogi guru is stupid, but the yogi guru is healthy, well loved, makes loads of money, seems genuinely happy, works relatively little and enjoys his work. So is the yogi guru stupid or not understanding modern science? No, he's just manifesting his intelligence towards another facet of the world that requires a different metaphysical grounding and different epistemology to understand.

It is possible that a set of social incentives that promoted "kosher 20th-century western intelligence" as a core value made the market for it oversaturated, so what you are observing now is just people branching towards other areas of using their intellect.

Comment by george3d6 on The Colliding Exponentials of AI · 2020-10-16T12:36:46.758Z · LW · GW

At the end of the day, the best thing to do is to actually try and apply the advances to real-world problems.

I work on open source stuff that anyone can use, and there's plenty of companies willing to pay 6 figures a year if we can do some custom development to give them a 1-2% boost in performance. So the market is certainly there and waiting.

Even a minimal increase in accuracy can be worth millions or billions to the right people. In some industries (advertising, trading) you can even go at it alone, you don't need customers.

But there's plenty of domain-specific competitions that pay in the dozens or hundreds of thousands for relatively small improvements. Look past Kaggle at things that are domain-specific and you'll find plenty.

That way you'll probably get a better understanding of what happens when you take a technique that's good on paper and try to generalize it. And I don't mean this as a "you will fail"; you might well succeed, but it will probably make you see how minimal of an improvement "success" actually is and how hard you must work for that improvement. So I think it's a win-win.

The problem with companies like OpenAI (and even more so with "AI experts" on LW/Alignment) is that they don't have a stake by which to measure success or failure. If waxing lyrical and picking the evidence that suits your narrative is your benchmark for how well you are doing, you can make anything from horoscopes to homeopathy sound ground-breaking.

When you measure your ideas about "what works" against the real world, that's when the story changes. After all, one shouldn't forget that since OpenAI was created, it got its funding via optimizing the "impress Paul Graham and Elon Musk" strategy, rather than via the "create an algorithm that can do something better than a human, then sell it to humans that want that thing done better" strategy... which is an incentives-101 kind of problem and what makes me wary of many of their claims.

Again, not trying to disparage here, I also get my funding via the "Impress Paul Graham" route. I'm just saying that people in AI startups are not the best to listen to in terms of AI progress; none of them are going to say "Actually, it's kinda stagnating". Not because they are dishonest, but because the kind of people that work in and get funding for AI startups genuinely believe that... otherwise they'd be doing something else. However, as has been well pointed out by many here, confirmation bias is often much more insidious and credible than outright lies. Even I fall on the side of "exponential improvement" at the end of the day, but all my incentives are working towards biasing me in that direction, so thinking about it rationally, I'm likely wrong.

Comment by george3d6 on The Colliding Exponentials of AI · 2020-10-15T21:36:44.081Z · LW · GW

Could you clarify, you mean the primary cause of efficiency increase wasn’t algorithmic or architectural developments, but researchers just fine-tuning weight transferred models?


Algorithm/Architecture are fundamentally hyperparameters, so when I say "fine-tuning hyperparameters" (i.e. the ones that aren't tuned by the learning process itself), those are included.
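To illustrate the framing (a toy sketch with made-up data, not any real library's API): a search loop can score the model family alongside an ordinary regularization hyperparameter, treating "architecture" as just one more axis of the search:

```python
import numpy as np

# Hypothetical search space: the model family ("architecture") sits next to
# an ordinary regularization hyperparameter.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(100, 1))
y = 3 * X[:, 0] ** 2 + rng.normal(scale=0.05, size=100)  # genuinely quadratic data

def fit_mse(arch, l2):
    """Ridge-fit the chosen feature set and return training MSE."""
    feats = X if arch == "linear" else np.hstack([X, X ** 2])
    A = feats.T @ feats + l2 * np.eye(feats.shape[1])
    w = np.linalg.solve(A, feats.T @ y)
    return ((feats @ w - y) ** 2).mean()

configs = [(a, l2) for a in ("linear", "quadratic") for l2 in (0.0, 0.1, 1.0)]
best = min(configs, key=lambda cfg: fit_mse(*cfg))
print(best[0])  # quadratic
```

On data that is truly quadratic, the search picks the quadratic "architecture" exactly the way it picks the regularization strength, which is the sense in which architecture is a hyperparameter.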

Granted, there have been jumps like from LSTMs to attention, where you can't think of it as "hyperparameter" tuning, since it's basically a shift in mentality in many ways.

But in computer vision, at least to my knowledge, most of the improvements would boil down to tuning optimization methods. E.g. there are analyses of the subject describing the now-common methods, mainly around CV.

However, the problem is that the optimization is happening around the exact same datasets AlexNet was built around. Even if you don't transfer weights, "knowing" a very good solution helps you fine-tune much quicker around a problem à la ImageNet, or CIFAR, or MNIST, or various other datasets that fall into the category of "classifying things which are obviously distinct to humans from square images of roughly 50 to 255px width/height".

But that domain is fairly niche. If we were to look at, e.g., almost any time-series prediction dataset, not much progress has been made since the mid 20s. And maybe that's because no more progress can be made, but the problem is that until we know the limits of how "solvable" a problem is, the problem is hard. Once we know how to solve the problem one way, achieving similar results, but faster, is a question of human ingenuity we've been good at since at least the industrial revolution.

I mean, you could build an AlexNet-specific circuit (not now, but back when it was invented) and get 100x or 1000x performance, but nobody is doing that because our focus does not (or, at least, should not) fall on optimizing very specific problems. Rather, the important thing is finding techniques that can generalize.

**Note: Not a hardware engineer, not sure how easily one can come up with auto diff circuits, might be harder than I'd expect for that specific case, just trying to illustrate the general point**

Are you saying that the evidence for exponential algorithmic efficiency, not just in image processing, is entirely cherry picked? 

Ahm, yes.

There are overviews of how speed and accuracy have evolved on a broader range of problems. And even those problems are cherry-picked, in that they are very specific competition/research problems that hundreds of people are working on.

I googled that and there were no results, and I couldn’t find an "academica/internet flamewar library" either.

Some examples:

Paper with good arguments that impressive results achieved by transformer architectures are just test data contamination:

A simpler article: (which makes the same point as the above paper)

Then there's the problem of how one actually "evaluates" how good an NLP model is.

As in, think of the problem for a second. Say I ask you:

"How good is this translation from English to French, on a scale from 1 to 10?"

For anything beyond simple phrases that question is very hard, almost impossible. And even if it isn't, i.e. if we can use the aggregate perceptions of many humans to determine "truth" in that regard, you can't capture that in a simple accuracy function that evaluates the model.
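A toy illustration of why (a bare unigram-precision score, much cruder than real BLEU, with invented example sentences): a perfectly reasonable paraphrase can score far below an exact-wording match against a single reference:

```python
from collections import Counter

def unigram_precision(candidate, reference):
    """Fraction of candidate words that also appear in the reference (clipped counts)."""
    cand = candidate.split()
    ref_counts = Counter(reference.split())
    hits = sum(min(c, ref_counts[w]) for w, c in Counter(cand).items())
    return hits / len(cand)

reference = "il fait très froid ce matin"

exact_match = "il fait très froid ce matin"
paraphrase = "ce matin le froid est glacial"  # also a reasonable rendering

print(unigram_precision(exact_match, reference))  # 1.0
print(unigram_precision(paraphrase, reference))   # 0.5
```

Both candidates say roughly the same thing, but the n-gram-overlap score halves for the paraphrase, which is exactly the kind of narrow error function models end up hyperoptimizing for.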

Granted, I think my use of "flamewar" is superfluous; I mean, more so, passive-aggressive snarky questions with a genuine interest in improving behind them, posted on forums à la:

More on the idea of how NLP models are overfitting on very poor accuracy functions that won't allow them to progress much further:

And a more recent one (2020) with similar ideas that proposes solutions:

If you want to generalize this idea outside of NLP, see, for example, this:

And if you want anecdotes from another field I'm more familiar with: the whole "field" of neural architecture search (building algorithms to build algorithms) has arguably overfit on specific problems for the last 5 years, to the point that all state-of-the-art solutions are:

Basically no better than random, and often worse:

And the results are often unreliable/unreplicable:


But honestly, probably not the best reference, you know why?

Because I don't bookmark negative findings, and neither does anyone else. We laugh at them and then move on with life. The field is 99% "research" that usually spends months or years optimizing a toy problem and then has a 2-paragraph discussion section about "this should generalize to other problems"... and then nobody bothers to replicate the original study or to work on the "generalize" part. Because where's the money in an ML researcher saying "actually, guys, the field has a lot of limitations and a lot of research directions are artificial, pun not intended, and can't be applied to relevant problems outside of generating on-demand furry porn or some other pointless nonsense"?

But as is the case over and over again, when people try to replicate techniques that "work" in papers under slightly different conditions, they return to baseline. Probably the prime example of this is a paper that made it into **** Nature about how to predict earthquake aftershocks with neural networks; then somebody tried to apply a linear regression to the same data instead and we got this gem:

One neuron is more informative than a deep neural network for aftershock pattern forecasting

(In case the pun is not obvious, a one-neuron network is a linear regression.)
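To spell the pun out (a minimal sketch on synthetic data, assuming an identity activation and squared-error loss): a single neuron trained by gradient descent converges to the same weights as closed-form least-squares linear regression:

```python
import numpy as np

# A "one-neuron network" with identity activation computes y_hat = X @ w + b,
# which is exactly the linear-regression model.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = X @ np.array([2.0, -1.0, 0.5]) + 0.3 + rng.normal(scale=0.01, size=200)

# Train the single neuron with gradient descent on squared error
w, b = np.zeros(3), 0.0
for _ in range(2000):
    err = X @ w + b - y
    w -= 0.05 * (X.T @ err) / len(y)
    b -= 0.05 * err.mean()

# Closed-form least-squares linear regression for comparison
Xb = np.hstack([X, np.ones((200, 1))])
w_ls = np.linalg.lstsq(Xb, y, rcond=None)[0]

print(np.allclose(w, w_ls[:3], atol=1e-3), np.allclose(b, w_ls[3], atol=1e-3))
# True True
```

Same model, same loss, same minimum; the only difference is the training procedure.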

And while improvements certainly exist, we haven't observed exponential improvements in the real world. On the whole, we don't have much more "AI powered" technology now than in the 80s.

I'm the first to argue that this is in part because of over-regulation, I've written a lot on that subject and I do agree that it's part of the issue. But part of the issue is that there are not so many things with real-world applications. Because at the end of the day all you are seeing in numbers like the ones above is a generalization on a few niche problems.

Anyway, I should probably stop ranting about this subject on LW, it's head-against-wall banging.

Comment by george3d6 on The Colliding Exponentials of AI · 2020-10-15T08:58:46.317Z · LW · GW

It seems to me like you are misinterpreting the numbers and/or taking them out of context.

This resulted in a 44x decrease in compute required to reach Alexnet level performance after 7 years, as Figure 1 shows.  

You can achieve infinitely (literally) faster than AlexNet training time if you just take the weights of AlexNet.

You can also achieve much faster training if you rely on weight transfer and/or hyperparameter optimization based on looking at the behavior of an already-trained AlexNet. Or, mind you, some other image-classification model based on it.

Once a given task is "solved", it becomes trivial to create models that can train on said task exponentially faster, since you're already working down from a solution.
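A toy illustration of "working down from a solution" (synthetic data and a plain linear model, nothing like the actual benchmarks, just the general effect): gradient descent started from a slightly perturbed copy of a known solution reaches a target loss in far fewer steps than a from-scratch start:

```python
import numpy as np

# Toy setup: linear model y = X @ w_true, trained with plain gradient descent.
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 10))
w_true = rng.normal(size=10)
y = X @ w_true

def steps_to_target(w0, target=1e-5, lr=0.05, max_steps=10_000):
    """Gradient-descent steps needed to push mean squared error below `target`."""
    w = w0.copy()
    for step in range(max_steps):
        err = X @ w - y
        if (err ** 2).mean() < target:
            return step
        w -= lr * (X.T @ err) / len(y)
    return max_steps

from_scratch = steps_to_target(np.zeros(10))
# "Transfer": start from a lightly perturbed copy of the known solution
transferred = steps_to_target(w_true + 0.01 * rng.normal(size=10))
print(transferred < from_scratch)  # True: far fewer steps when starting near a solution
```

The speedup here says nothing about how hard the task was to solve the first time; it only exists because a solution is already known.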

On the other hand, improvements on ImageNet (the dataset AlexNet excelled on at the time) itself are logarithmic rather than exponential, and at this point seem to have reached a cap at around human-level ability or a bit less (maybe people got bored of it?)

To get back to my point, however, the problem with solved tasks is that whatever speed improvements you make on them don't generalize, since the solution is only obvious in hindsight.


Other developments that help with training time (e.g. the kind of schedulers fastAI is using) are interesting, but not applicable to "hard" problems where one has to squeeze out a lot of accuracy, and not widely used in RL (why, I don't know).

However, if you want to look for exp improvement you can always find it and if you want to look for log improvement you always will.

The OpenAI paper is disingenuous in not mentioning this, or at least disingenuous in marketing itself to a public that doesn't understand this.


In regards to training text models "x times faster", go into the "how do we actually benchmark text models" section of the academia/internet flamewar library. In that case my bet is usually on someone hyperoptimizing for a narrow error function (not that there's an alternative). But also, the above reasoning about solved >> easier than unsolved still applies.

Comment by george3d6 on What was your behavioral response to covid-19 ? · 2020-10-14T13:32:50.564Z · LW · GW

However, I'm still being really cautious because of the not-well-understood long-term effects. SARS was really nasty on that front. What evidence convinced you that's not a big deal? If you don't already have evidence for that, then rationality isn't the reason you changed your behavior.

Not sure this is directed at me or just a question for poetic reasons, but I'm going to answer it anyway:

  1. The "bradykinin hypothesis" is the only one that has a reasonable model of long-term damage, basically attributing it to ACE2 expression in tissues where it would normally be close to absent, and bradykinin overproduction being triggered in part by that and synergizing badly with it.
  2. This is "hopeful" in that it predicts side effects are non-random and instead associated with a poor immune response. That is to say, youth's protective role against death also protects against side effects.
  3. I found no quantifiable studies of side effects after the infection, the ones that exist are case studies and/or very small n and in older demographics (i.e. the kind that needs to attend the hospital in the first place and is then monitored long term after the infection passed)
  4. Absence of evidence is not evidence of absence, and a model of infection is just a useful tool, not a predictor of reality; plus my understanding of it is likely simplistic. But that same statement I could make about a lot of coronaviruses and influenza viruses I expose myself to every year.
Comment by George3d6 on [deleted post] 2020-10-12T20:47:05.278Z

Point, I'm not sure the analogy is correct here. Too many mistakes, moving this to draft, probably not worth debating in favor of.

Comment by george3d6 on The Treacherous Path to Rationality · 2020-10-11T23:14:14.389Z · LW · GW

I'd stress the idea here that finding a "solution" to the pandemic is easy and preventing it early on based on evidence also is.

Most people could implement a solution better than those currently affecting the US and Europe, if they were a global tsar with infinite power.

But solving the coordination problems involved in implementing that solution is hard; that's the part that needs solving, and nobody is closer to a solution there.

Comment by george3d6 on Against Victimhood · 2020-09-20T10:53:51.196Z · LW · GW

I agree that victim mentality is useless, but reminding oneself that you were a victim of certain things isn't.

Outside of, maybe, a pure objectivist, reminding yourself that a certain system or group is against you can serve as a good driver of rational actions, i.e. you can use it to tone down your empathy and act in a more self-interested way towards that group.

Of course, the key word here is "self-interest". The problem you rightfully point out with victim mentality is that people often act upon it in ways that aren't self-interested, where they go into depressive or aggressive spirals that are of no help to themselves and at most (though seldom) just serve to hurt their victimizer, though often at greater personal cost.

Comment by george3d6 on The ethics of breeding to kill · 2020-09-11T09:09:24.360Z · LW · GW

You bring up good points, I don't have time to answer in full, but notes on a few of them to which I can properly retort:

I don't think I agree that suicide is a sufficient proxy for whether an entity enjoys life more than it dislikes life because I can imagine too many plausible, yet currently unknown mechanisms wherein there are mitigating factors. For example:
I imagine that there are mental processes and instincts in most evolved entities that adds a significant extra prohibition against making the active choice to end their own life and thus that mental ability has a much smaller role in suicide "decisions".
In the world where there is no built-in prohibition against ending your own life, if the "enjoys life" indicator is at level 10 and the "hates life" indicator is at level 11, then suicide is on the table.
In, what I think is probably our world, when the "enjoys life" indicator is at level 10 the "hates life" indicator has to be at level 50.
What's more, it seems plausible to me that the value of this own-life-valuing indicator addon varies from species to species and individual to individual.

But, if we applied this model, what would make it unique to suicide and not to any other preference ?

And if you apply this model to any other preference and extend it to humans, things get really dystopian really fast.

This seems to me similar to the arguments made akin to "why waste money on space telescopes (or whatever) when people are going hungry right here on earth?".

This is not really analogous, in that my example is "potential to reduce suffering" vs "obviously reducing suffering". A telescope is neither of those; it's working towards what I'd argue is more of a transcendent goal.

It's more like arguing "Let's give homeless people a place to sleep now, rather than focusing on market policies that have potential for reducing housing costs later down the line" (which I still think is a good counter-example).

In summary, I think the main critique I have of the line of argument presented in this post is that it hangs on suicide being a proxy for life-worth-living and that it's equivalent to not having existed in the first place.
I don't think you've made a strong enough case that suicide is a sufficient measure of suffering-has-exceeded-the-cost-of-continuing-to-live. There are too many potential and plausible confounding factors. I think that the case needs to be really strong to outweigh the costs of being wrong.

I don't think what I was trying to do is make a definitive case for "suicide is a sufficient measure of suffering-has-exceeded-the-cost-of-continuing-to-live". I was making a case for something closer to "suicide is better than any other measure of suffering-has-exceeded-the-cost-of-continuing-to-live if we want to keep living in a society where we treat humans as free conscious agents and give them rights based on that assumption; while it is still imperfect, any other arbitrary measure will also be so, but worse" (which is still a case I don't make perfectly, but at least one I could argue I'm creeping towards).

My base assumption here is that in a society of animal-killers, the ball is in the court of the animal-antinatalists to come up with a sufficient argument to justify the (human-pleasure-reducing) change. But it seems to me like the intuitions based on which we breed&kill animals are almost never spelled out, so I tried to give words to what I hoped might be a common intuition as to why we are fine with breeding&killing animals but not humans.

Here you're seemingly willing to acknowledge that it's at least *possible* that animals dislike life more than they enjoy it. If I read you correctly and that is what you're acknowledging, then you would really need to compare the cost of that possibility being correct vs the cost of not eating meat before making any conclusion about the ethical state of eating animals.

I am also willing to acknowledge that it is at least *possible* some humans might benefit from actions they don't consent to, but I still don't engage in those actions, because I think it's preferable to treat them as agentic beings that can make their own choices about what makes them happy.

If I give that same "agentic being" treatment to animals, then the suicide argument kind of holds. If I don't give that same "agentic being" treatment to animals, then what is to say suffering as a concept even applies to them? After all, a mycelium or an ecosystem is also a very complex "reasoning" machine, yet I don't feel any moral guilt when plucking a leaf or a mushroom.

Comment by george3d6 on The ethics of breeding to kill · 2020-09-08T12:31:19.442Z · LW · GW
I’m taking it as granted that every human not in a coma can suffer, which I hope is uncontroversial.

I don't think it's that uncontroversial

Similarly, in England and Wales in 2006, 89% of terminations occurred at or under 12 weeks, 9% between 13 and 19 weeks, and 2% at or over 20 weeks.

The CNS starts developing at ~4 weeks, but the cerebral hemispheres start differentiating around week 8. Given 200,000 abortions a year in the UK alone, which the people performing them and most (all?) of us don't see as an immoral act, that's at least 12,000 human children with a functioning brain killed per year in the UK, a number that is probably 10x higher in the US and hundreds of times higher if you account for the whole world.

By 20 weeks, where abortions still happen, one could argue the brain could be more developed than that of a living human being, unless you want to assume it's not a question of synaptic activity, number of neurons and axons, but instead of divine transubstantiation (in which case the whole debate is moot).

So I would indeed say many humans agree that suffering is not a universal experience for every single being that shares our genetic code, and exceptions such as humans still in a mother's womb are made. Whether that is true or not is another question entirely.

Many of us might claim this is not the case, but as I made it clear in this article, I'm a fan of looking at our actions rather than the moral stances we echo from soapboxes.

Comment by george3d6 on The ethics of breeding to kill · 2020-09-08T12:18:30.714Z · LW · GW
I'd expect the same to apply to typically developing toddlers

Very quick search reveals suicide as young as 6:

Murder as young as 4:

Presumably this could happen earlier in kids with a better developmental environment, but suicide and murder at an age this young are going to come from outliers who lived in a hellish developmental environment.

Not sure about ages < 1 or 2 years of age, but:

1. We think that before a certain point of brain development abortion is acceptable, since the kid is not in any way "human". So why not start your argument there? And if you do, well, you reach a very tricky gray line.

2. Surgeons used to think toddlers couldn't feel "pain" the way we do, and operated on them without anesthesia. This was stopped due to concerns about/proof of PTSD, not due to anyone remembering the exact experience; after all, there's a lot of traumatic pain one goes through before the age of 1 that no one will remember. Conscious experience might be present at that age, but... this is really arguable. People don't have memories from ages below 1 or 2, and certainly no memories indicative of conscious experience. It might exist, but I think this falls into the same realm as "monkeys" rather than fully fledged humans in terms of certainty.

and it's plausible to me that you could in principle shelter normally developing humans from understanding of death and suicide into adulthood, and torture them, and they too would not attempt suicide.

This I find harder to believe, but it could be a good thought experiment to counter my intuition, if I ever have the time to mold it into a form that fits my own conception of the world and of people.

We (humans and other animals) also have instincts (especially fear) that deter us from committing suicide or harming ourselves regardless of our quality of life, and nonhuman animals rely on instinct more, so I'd expect suicide rates to underestimate the prevalence of bad lives.

I don't see how this undermines the point, unless you want to argue the "fear" of death can be so powerful that one can lead what is essentially a negative-value life because of an instinct not to die (similarly to, say, how one might feel pain from a certain muscle twitch yet be unable to stop it until it becomes unbearable).

I don't necessarily disagree with this perspective, but from this angle you reach an antinatalist utilitarian view of "kill every single form of potentially conscious life in a painless way as quickly as possible, and most humans for good measure, and either have a planet with no life, or with very few forms of conscious life that have nothing to cause them harm". No matter how valid this perspective is, almost by definition it will never make it into the zeitgeist, and it's fairly pointless to think about, since it's impossible to act upon and the moral downside of being wrong would be gigantic.

Comment by george3d6 on The ethics of breeding to kill · 2020-09-07T05:54:48.375Z · LW · GW

The problem with that research is that it's shabby. I encountered this problem when dealing with the research on animal suicide, and the research on animal emotions continues that trend.

Fundamentally, it's a problem that can't be studied unless you are able, metaphorically, to see as a bat, which you can't. So I think the closest thing we can do is treat animals much like we do other humans: assume their mental state based on their actions and act accordingly.

Comment by george3d6 on The ethics of breeding to kill · 2020-09-07T05:52:09.653Z · LW · GW
The first point seems fallacious, since most factory farmed animals don't have the physical ability to commit suicide.

Does the argument require that to be the case? In the ideal scenario yes, but in the pragmatic scenario one can just look for such behavior in conditions where it can be expressed. Much like humans vary enough that some "suffer" enough to commit suicide even under the best of conditions, presumably so would animals.

There are many humans who don't have the ability to reason about suicide but undoubtedly suffer

Wait, what? Ahm, can I ask for a source on that?

Comment by george3d6 on On Systems - Living a life of zero willpower · 2020-08-17T21:44:26.946Z · LW · GW

The main issue with these kinds of routines, in my experience, is that they are very rigid and breaking them is hard.

A lot of things (hard and difficult things that make life worth living) involve breaking routines, be it starting a company/NGO, having kids, doing ground-breaking research, or even just traveling (including e.g. difficult hikes to remote places or visiting weird cities, towns and villages half a world away).

So to some extent these kinds of routines work if you want to get to an "ok" place and have an overall stable life outside of e.g. health issues, but they seem to put you in a bad spot if you want to do anything else.

Of course, not everything here is routine-focused advice, but a lot of it seems to be, so I just wanted to give this perspective on that particular topic.

Comment by george3d6 on Longevity interventions when young · 2020-07-26T19:32:58.299Z · LW · GW

No... and searching for it I can only find things like:

Which refer to other forms of B3 being found in whey protein.

The thing with NR is that it's considered a form of B3 (which is the **** way "vitamins" work, in that for some of them the "vitamin" is actually any substance that, past some point, turns by some percentage into a specific metabolite), and some other forms of B3 are found in whey protein.

I haven't seen claims of NR specifically being found in whey protein, so I have no idea and a quick google doesn't reveal much for me other than stuff like the above.

Comment by george3d6 on Longevity interventions when young · 2020-07-26T07:54:39.097Z · LW · GW
What do you mean by the advice "test your drugs"?

A joke

Which blood biomarkers do you measure for assessing the effectiveness of the supplements?

That would be an article on its own; ask your doctor, and see my response above about vitamin D3 for an example.

You can just look at the studies done on the supplements and measure what they measure. If experts say "this supplement is good because it increases/decreases X, Y, Z as per the studies done on it", and when you take it your X/Y/Z increase/decrease accordingly, then it's also good for you.

What's your intuition on the expected life added by researching this stuff personally and in-depth?

No idea, long discussion.

With a protocol like this I'm hopeful one could get 20, maybe 30%, added years in the 20-35 "pocket" where you're "at your prime", but I'm pulling those numbers out of my arse.