Hmm, I did this by default for a while (gave up TV 10 years ago, vidyogames 5, news and YouTube ~3).
What I discovered last year is that there's certainly a thing I'm "missing out on" by doing this, in that (dumb) movies/cartoons/YouTube videos can help with neurosis in particularly bad periods. I tentatively reintroduced them for short periods every few months; in all cases I did this when I was really sick (as in, physically injured).
So I do think there's a point where renouncing all media can backfire.
Also I'm torn on whether e.g. LessWrong, Reddit, Twitter, Hacker News and certain blogs count. Currently I have a blocker that only allows them 30 min per 12 hours.
On the one hand I procrastinate and they throw a lot of harmful memes at me, on the other hand I find the conversations potentially useful and they fill the day in a way I enjoy.
For YouTube you can use Firefox with the background play add-on and unlock+ (a YouTube blocker add-on), plus a subscription list of music channels, to skip the issue of recommendations.
For Facebook you can use messenger.com and the messenger lite app to turn it into a 100% messaging app with no social media element.
Given the alarmist and uninformed nature of LW's audience, it might be wise to demand the source code (in this case presumably none, since an API was used) and the methodology used to generate any such content.
In this case it seems kind of obvious the author either wrote both sides and/or cherry-picked a lot. All fun and games, but you have an AGI death cult going here, and this kind of thing can be fodder for false beliefs that stochastic parrots are infinitely more powerful than what's experimentally proven thus far.
Tangential but: sorry to hear about the nerve ending damage thing :/ I got something like that because I made the (in hindsight potentially dumb) move of pulling out a wisdom tooth and an adjacent tooth.
Dentist was not the best and the tug fucked up a nerve.
Not as bad as your case, but occasionally it gives a very "creepy" feel and a bit of pain.
What I can say is that being mindful of the pain always seems to drive it away; this doesn't work with other types of pain. So, unlikely as it is, if you can't find anything else it might be worth a shot to see if close examination can "cure" it.
Humanity is misdiagnosing infectious diseases. Statistically speaking you have a few dozen HPVs and say 0.7 HSVs infecting you. Like, permanently; your immune system can't destroy them, they lie dormant in immuno-privileged areas.
How many other viruses do we misdiagnose as "not there" when really they just act in an undetectable way? Hundreds? Thousands? Millions? I don't think there's a good estimate out there.
Bacteria and fungi that cause superficial infections severe enough to trigger an immune reaction, leaving you with localized genetic and structural damage?
Like, yeah, the cause of death is now more vague: it's because the guy was fat and had tumors, and part of being fat and having tumors is just the way the human body degrades... but to think that none of it is caused by the trillions of organisms that enter your body every day, and our body's reaction to them? I find that unlikely.
It's just that we eliminated all "obvious" infectious disease, but even that will not stay as is. The current prevention measures are just an evolutionary pressure, combined with all of us no longer being selected based on our ability to react to infectious disease. I think it's not unreasonable to predict an uptick in this if the transhumanist future comes to pass, and if it does, it will be a long time before we're rid of deadly disease, even if the signal is now weaker and more confounded.
I'll make an analogy here to get around the AI-worship-induced gut reactions:
I think most people are fairly convinced there isn't a moral imperative beyond their own life. As in, even if behaving as if your own life is the ultimate driver of moral value is wrong and ineffective, from a logical standpoint it holds: once your conscious experience ends, everything ends.
I'm not saying this is certain; it may be that the line between conscious states is so blurry that continuity between sleep and wakefulness is basically 0, or as much as that between you and other completely different humans (who will be alive even once you die and will keep on flourishing). It may be that there is a ghost in the machine under whatever metaphysical framework you want... but, if I had to take a bet, I'd say something like... a 15, 40, 60% chance that once you close your eyes it's over, the universe is done for.
I think many people accept this viewpoint, but most of them don't spend even a moment thinking about anti-aging, and even those like myself that do aren't too concerned about death in a "mood" sense. Why would you be? It's inevitable. Like, yeah, your actions might contribute to averting death by 0.x% if you're very lucky, and so you should pursue that area because... well, nothing better to do, right? But it makes no sense to concern oneself about death in an emotional way since it's likely coming anyway.
After all, the purpose of life is living, and if you're not living because you're worrying about death you lost. Even in the case where you were able to defeat death, you still lost: you didn't live, or less metaphorically, you lived a life of suffering, or of unmet potential.
Nor does it help to be paralyzed by the fear of death every waking moment of one's life. It will likely make you less able to destroy the very evil you are opposing.
Such is the case with every potential horrible inevitability in life. Even if it is "absolute" in its badness, being afraid of it will not make it easier to avoid, and it might ultimately defeat the purpose of avoiding it, which is the happiness of you and the people you care about, since all of those will be more miserable if you are paralyzed by fear.
So even if whatever fake model you had assumed a 99.9% chance of being destroyed by HAL or whatever in 10 years from now, it would still be the most sensible course of action to not get too emotional about the whole thing.
I don't think that anyone but insane (or dumb) people are thinking about the scenario of "superintelligent AI contained in a computer, unable to interact with the outside world beyond being given inputs and outputting simple text/media".
The real risk comes when you have loads of systems built by thousands of agents controlling everything from nukes, to drones, to all the text, video and audio anyone on Earth is reading, to cars, to power plants, to judges, to police and armed forces deployment... which is kind of the current case.
Even in that case I'd argue the "takeoff" idea is stupid, and the danger is posed by humans with unaligned incentives, not the systems they built to accomplish their goals.
But the ""smartest"" systems in the world are and will be very much connected to a lot of physical potential.
Mutations of germline cells come with huge fitness penalties. Taking 0.01% of your genes introduces an extremely small amount of variance. And unilaterally replacing 50% of my genes with yours is equivalent to a 50% drop in fitness
99.9% of those genes will be identical, hence why I used the 0.01% number. If you want to induce more mutations you can (see bacteria), and if you want to introduce more mutations in a controlled way (i.e. not break anything important) you also can, and humans actually already do this, as do most multicellular eukaryotes (see B cell selection in response to antigens, for example).
The bacterial method itself also doesn't seem to work in humans (because you need to have the genes during development, and less importantly you need to spread them throughout your body). So it seems to me like sex adds very significant benefits over these alternatives.
I'm not sure if it works in humans, but humans did not invent sex; simple multicellular eukaryotes did. In those species HGT works just fine and is indeed a contributor to the evolutionary variance that got them to where they are. I had a great paper on this but can't find it; see for example: https://onlinelibrary.wiley.com/doi/pdf/10.1002/bies.201300007
It's not a big thing in practice, because sex works better, so it makes sense to protect against HGT in all but a few niche cases (transfer from bacteria-like organelles such as mitochondria).
Is your view here coming from some quantitative estimate or further reasoning you didn't include in the comment, or is it reflecting the consensus view in some field? In either case, it would be great to see a pointer. If this is just your guess based on the reasoning in the comment, that's fine and I'm happy to leave the argument here (or with your rebuttal).
It reflects a consensus view in the field insofar as nobody has tried to justify sex in terms of genetic variance since the 70s (see the very Wikipedia article cited), and even that attempt justified it as variance under very specific conditions that allow it to be less dangerous and/or to have a higher chance of varying in a "right" direction.
Variance is for sure an advantage. But it's not the advantage. Since there are 1001 ways you can introduce variance.
So both under common sense and under all accepted models I know of the intuition presented here is wrong.
And you don't need much to figure that out, again, just try to answer the question:
If variance is the main reason we have sexed reproduction, then why not do it bacteria-style, or more realistically, why not do it B-cell-selection style but for germinal cells?
That also means the variance could be better controlled by environment, you could literally say "I want exactly x% variance".
Additional questions to ponder might be:
Why always 2 sexes? (outside of weird fungi)
Why is the sex stable? (outside of a few niche animals)
Those will also point you in the right direction of actually figuring out why we have sexed reproduction.
Again, this is not to say variance is not a benefit. Much like "locomotion over flat ground" is a benefit of crab legs. But "variance" is something boring that can come about 1000 different ways. The reason crab legs are interesting and special and worth evolving is showcased when looking at how crabs move on steep terrain. Similarly, the reason sexed reproduction is interesting and special and worth evolving only becomes obvious when thinking of things that you can only accomplish using it (or at least that are much more costly to accomplish otherwise, and that no other lifeforms do in other ways).
If the above doesn't make sense please tell me and I'll try to rephrase
If you want a good pop-science book that goes over how biologists think of the role of sexed reproduction you could check out "The Red Queen: Sex and the Evolution of Human Nature". I read it like 4 years ago, when I knew nothing on the subject, and while in hindsight it has some bad points (and I don't agree with the central hypothesis) it does a good job of showcasing the different views in the field in a way that you can get with limited knowledge of the molecular level processes involved.
I'm not sure why this is pinned. This seems like a wrong explanation.
You can get variance with mutations of germinal cells, and indeed this does happen. Not to mention gene transfer (e.g. I'll keep 99.99% of mine and take 0.01% of yours), which over the last decade has been observed in hundreds of eukaryotes.
Sex in primitive species is not a variance-inducing mechanism; it's a hidden-trait-preserving mechanism.
The chromosomal setup that allows for sex means we can have hidden traits that only show up in a small % of offspring (e.g. the simple story is you need 2x SNPs which are present on only one of the chromosomes for each parent => 25% occurrence). This allows e.g. sickle cell anemia to be a thing that can be selected for when malaria incidence is high in the population, and selected against when incidence is low. So you have "fitter" hunters in no-malaria times and fitter malaria-resisters in loads-of-malaria times. But you don't get rid of either genotype entirely, because it can be carried around and only manifested in a small % of kids (who are going to be selected out early on) until it's needed, and then the small % with the adaptive mutation are heavily selected for.
This "hidden trait" explanation is what the chromosomal setup we have provides over bacteria.
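If it helps to sanity-check the 25% figure, here's a toy one-locus simulation (assumptions mine: two heterozygous carrier parents, one biallelic locus, and the trait only manifests with two copies of the variant):

```python
import random

def offspring_shows_trait(parent1, parent2):
    # Each parent passes one randomly chosen allele; the hidden
    # trait only manifests when the child inherits the variant
    # ("a") from both sides.
    return random.choice(parent1) == "a" and random.choice(parent2) == "a"

random.seed(0)
carrier = ("A", "a")  # heterozygous: carries the trait but doesn't show it
trials = 100_000
rate = sum(offspring_shows_trait(carrier, carrier) for _ in range(trials)) / trials
print(rate)  # ~0.25, matching the 25% occurrence figure
```

The allele survives in the 75% of offspring that don't show the trait, which is the whole "hidden trait" point: selection can act on the visible 25% without purging the genotype.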
Now the question remains something like "Why not carry around 4x different ones in your germinal cells all the time but only 'activate' two while allowing all 4 to mutate"... and the answer is something like 1) might be harder to evolve, since you want to keep the embryonic stage as simple as possible, that's when most mistakes in growth happen 2) that requires carrying around extra DNA 3) asymmetric selection has benefits 4) actually <this weird species of fungi> 5) group selection.
But you only need 1) to account for this, and as in most cases I assume that's probably the real explanation: instead of a just-so story, it was the path of least resistance.
Book writing - I'm publishing my first technical book this year (no name publisher, they gave me a good deal) and I want to publish a fiction book next year (this one paid out of pocket) | I'd love help from anyone that's gone through the process and published a few books, especially if your starting point was blogging style articles.
Trading - I err on the side of Taleb here in thinking that the chances of someone sharing true knowledge on the subject are low. But if anyone would like to help me with automated trading I'd be very happy to listen. My main focus is using a mix of high-compute timeseries analysis for the prices plus NLP to analyze odd sources of data, with a focus on instruments popular among amateur traders, where I assume it's easier to find inefficiencies (but maybe I'm entirely wrong, that's why I'd appreciate advice).
Remote working - if you need advice on how to find remote work, how to manage the resulting cost benefit matrix (e.g. more control over taxes, higher freedom to travel) I'd be willing to help depending on your starting point. Background/Credentials: Remote work and travel for 3 years with a low-effort and very legible 9% tax (real, including VAT, social security and such) setup
Getting into tech as a dropout - If you're in the 17-20 age range and wish to skip the college racket and instead just work towards a low-six-figure job in tech. I did this and helped a few people along in doing it. But it's certainly more cost-effective and simpler if you're younger and haven't invested in a diploma mill yet. Background/Credentials: I did this starting at age 19; currently working in ML research, but I was mainly working in specialized programming and small team management roles before.
Maybe self-hosting and refactoring, if anyone here is interested. But those are kinda niche areas.
Oh, not at all, chronic inflammation could be a thing.
I'm just saying that tracking how your exercise routine affects chronic inflammation is a very min-maxy type of thing.
Chronic inflammation could very much be a macro problem that leads to joint pain.
Did you already test the basic stuff? Like immune cell counts, homocysteine, uric acid, CRP, fibrinogen, ASLO, etc. Basically, the standard blood panel an obsessed GP would give you? If not, I'd certainly start with that.
I mean, for all you know this thing could be caused by eating too much meat, or gluten intolerance, or whatever (not saying those causes are especially likely, just examples of "dietary problems that are easily caught on tests and can be easily resolved").
I did a little quick searching for "knee stability training protocol" and.. found a few things that looked pretty obvious. Quads, hams, calves, etc. More or less what I'd expect. I don't suppose you have any secret sauce beyond that?
No, it's really very much about your individual body and where you have weaknesses. You need an in-person trainer to be able to see this, and over time, as you move more, you'll become more aware of your body and be able to say "Ah, it's x area that's too stiff, or activating too much, or that should be working but isn't", or whatever.
Ie, "train to failure"? If so, I was under the impression that training to failure is now considered less effective/useful.
No, training to failure is a bad idea in that it's both unhealthy (muscle injury, joint issues) and unideal for muscle growth. But for most people "training to failure" is actually "training to a few reps short of failure", because outside of people that are fairly advanced it's close to impossible to push yourself to the gun-to-the-head limit. If you want you can try "train until your form fails and you pinky-promise are unable to keep it no matter what", and realistically that should suffice.
I'm not an athlete, but what would the proxies be?
Standard markers of inflammation: ESR, CRP, etc. Or you could even look at the curve of cortisol response and clotting factors with a venous catheter, I guess. But again, not at all relevant outside of academic curiosity unless you are training to be an athlete (which is unhealthy and should be avoided).
If your problem is personal, i.e. you're dealing with joint issues, then unless you're suffering from a muscle-wasting disease or are over the age of 50, reading about stuff will be low-yield.
Long term joint pain is solved by:
strengthening muscles in order to not put strain on "weak" joints [evidence: solid]
Hormetic effects of joint usage [evidence: weak clinical, but look at e.g. people doing yoga; I'd say this is an issue of people not studying the correct demographics]
Zone 2 training, aka cardio, allowing you to more efficiently partition fuel to muscles and thus do more movement without suboptimal muscle usage [evidence: I'd assume moderate, but unsure]
Stability training [evidence: not good, because everyone disagrees about what exactly this involves, but basically all physiotherapists do some form of stability training, so it's obviously useful | overall you can pick a specific older technique and get solid evidence; newer stuff might actually be better, but is less tested]
Now, can you optimize past that? Sure you can.
But unless you are already doing, say, 2 hours of zone 2 training 4-5 times a week, 30 minutes of resistance training 2-3 times a week (the kind where you are in excruciating pain by the end, i.e. proper resistance training, not aerobics masquerading as resistance training), and 20-40 minutes of daily stability training (could be morning yoga, could be stretching recommended by a therapist, could be whatever),
then reading up on joint pain will be useless.
It may be that you are an athlete, in which case discount the above; if you're doing 4-6 hours of effort per day on average then a better model of movement is probably key. But even then it might make more sense to take a scientific approach and just try different things and be quick to quantify (e.g. don't look for joint pain after trying a new style of movement, look for proxies in your blood).
But again, if you're not an athlete, by reading up on this stuff you are simply running away from the real solution, which involves the hard work of building a pattern of 1-2 hours of varied exercise every day.
This paradigm doesn't matter if the physician has in mind a cost/benefit matrix for the treatment, into which it would be fairly easy to plug raw experimental data no matter how the researchers chose to analyze it.
The core problem here is "doctors don't know how to interpret basic probabilities"; the solution to this is deregulation in order to offload the work from humans onto decision trees.
Discussions like this one are akin to figuring out how to get paedophiles to wear condoms more often: in principle they could be justified if the benefit/cost ratio were proportionally immense, but they are a step in a tangent direction and move focus away from the core issue (which is, again: why are your symptoms, traits and preferences not weighted by a decision tree in order to determine medication?).
This more broadly applies to any mathematical computation that is left to the human brain instead of being offloaded to a computer. It's literally insane that a small minority of very protectionist professions are still allowed (and indeed, to some extent forced) to do this... it's like accountants being forced to make calculations with pen and paper instead of entering the numbers into a formula in Excel.
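As an illustration of the kind of computation that's trivial to offload, here's the classic base-rate calculation doctors famously get wrong (the numbers are hypothetical, not from any real test):

```python
def posterior(prevalence, sensitivity, specificity):
    # P(disease | positive test) via Bayes' rule:
    # P(D|+) = P(+|D) P(D) / P(+)
    p_positive = prevalence * sensitivity + (1 - prevalence) * (1 - specificity)
    return prevalence * sensitivity / p_positive

# Hypothetical numbers: 1% prevalence, 90% sensitivity, 95% specificity.
# The intuitive answer "90%" is off by a factor of ~6.
print(posterior(0.01, 0.90, 0.95))  # ≈ 0.154
```

A decision tree weighing symptoms and preferences is just a few more nested versions of this same arithmetic, which is exactly why leaving it in human heads is indefensible.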
On a species level though, human intelligence arose and filled an evolutionary niche, but that is not proof the same strategy, taken further, will be better.
Bears fill an evolutionary niche by being able to last long times without food, having a wide diet and being very powerful, but it doesn't follow that a bear that's 3x bigger, can eat even more things and can survive even longer without food would fare any better.
Indeed, quite the opposite, if a "better" version of a trait doesn't exist that likely means the trait is optimized to an extreme.
And in terms of inter-species "achievements", if the core thing every species wants to do is "survive" then, well, it's fairly easy to conclude cockroaches will outlive us, various grasses will outlive us or at least die with us, same goes for cats... and let's not even go into extremophiles; those things might have conquered planets far away from ours billions of years before we even existed, and will certainly outlive us.
Now, our goals obviously diverge from those animals', so we think "Oh, poor dumb cockroaches, they shan't ever advance as a species lacking x/y/z", but in the umwelt of the cockroach its species has been prospering at an astonishing rate in the directions that are relevant to it.
Similarly, we are already subpar at many tasks compared to various algorithms, but that is rather irrelevant since those algorithms aren't made to fill the niches we do; the very need for them comes from us being unable to fill those niches.
Roughly speaking, yes, I'd grant some % error, and I assume most would be co-founders, or among the first researchers or engineers.
Back then people literally made one-niche image recognition startups that worked.
I mean, even now there are so many niches for ML where a team of rather mediocre thinkers (compared to, say, the guys at DeepMind) can get millions in seed funding with basically 0 revenue and very aggressive burn, by just proving very abstractly they can solve one problem or another that nobody else is solving.
I'm not sure what the deluge of investment and contracts was like in 2008, but basically everyone publishing stuff about convolutions on GPUs is a millionaire now.
It's obviously easy to "understand that it was the right direction"... with the benefit of hindsight. Much like now everyone "understands" transformers are the future of NLP.
But in general the field of "AI" has very few real visionaries that by luck or skill bring about progress, and even being able to spot said visionaries and get on the bandwagon early enough is a way to become influential and wealthy beyond belief.
I don't claim I'm among those visionaries, nor that I found the correct bandwagon. But some people obviously do, since the same guys are implicated in an awful lot of industry-shifting orgs and research projects.
I'm not saying you should only listen to those guys, but for laying out a groundwork, forming mental models on the subject, and distilling facts from media fiction, those are the people you should listen to.
I think "very" is much too strong, and insofar as this is true in the human world, that wouldn't necessarily make it true for an out-of-distribution superintelligence, and I think it very much wouldn't be. For example, all you need is superintelligence and an internet connection to find a bunch of zero-day exploits, hack into whatever you like, use it for your own purposes (and/or make tons of money), etc. All you need is superintelligence and an internet connection to carry on millions of personalized charismatic phone conversations simultaneously with people all around the world, in order to convince them, con them, or whatever. All you need is superintelligence and an internet connection to do literally every remote-work job on earth simultaneously.
You're thinking "one superintelligence against modern spam detection"... or really, against spam detection from 20 years ago. It's no longer possible to mass-call everyone in the world because, well, everyone is doing it.
Same with 0-day exploits: they exist, but most companies have e.g. IP-based rate limiting on various endpoints that makes it prohibitively expensive to exploit things like e.g. Spectre.
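For concreteness, the kind of IP-based rate limiting mentioned here can be as simple as a sliding-window counter (a minimal sketch of my own; real deployments use shared stores like Redis, token buckets, and reputation heuristics on top):

```python
import time
from collections import defaultdict, deque

class RateLimiter:
    """Naive per-IP sliding-window limiter: allow at most
    `limit` requests per `window` seconds from each address."""

    def __init__(self, limit, window):
        self.limit = limit
        self.window = window
        self.hits = defaultdict(deque)  # ip -> timestamps of recent requests

    def allow(self, ip, now=None):
        now = time.monotonic() if now is None else now
        q = self.hits[ip]
        # Drop timestamps that fell out of the window.
        while q and now - q[0] >= self.window:
            q.popleft()
        if len(q) < self.limit:
            q.append(now)
            return True
        return False

rl = RateLimiter(limit=2, window=10)
print(rl.allow("1.2.3.4", now=0.0))   # True
print(rl.allow("1.2.3.4", now=1.0))   # True
print(rl.allow("1.2.3.4", now=2.0))   # False: limit hit
print(rl.allow("1.2.3.4", now=11.0))  # True: old hits expired
```

Even something this dumb forces an attacker to spread probes across many addresses and much more time, which is the "prohibitively expensive" part.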
And again, that's with current tech, by the time a superintelligence exists you'd have equally matched spam detection.
That's my whole point: intelligence works, but only in zero-sum games against intelligence, and those games aren't entirely fair, thus safeguarding the status quo.
<Also, I'd honestly suggest that you at least read AI alarmists with some knowledge in the field; there are plenty to find, since it generates funding. But reading someone that "understood AI" 10 years ago and doesn't own a company valued at a few hundred million is like reading someone that "gets how trading works" but works at Walmart and lives with his mom.>
I think the usual rejoinder on the "AI go foom" side is that we are likely to overestimate x by underestimating what really effective thinking can do
Well, yeah, and on the whole, it's the kind of assumption that one can't scientifically prove or disprove. It's something that can't be observed yet and that we'll see play out (hopefully) this century.
I guess the main issue I see with this stance is not that it's unfounded, but that its likely cause is something like <childhood indoctrination to hold having good grades, analytical thinking, etc. as the highest value in life>. As in, it would perfectly explain why it seems to be readily believed by anyone that stumbles upon LessWrong, whereas few/no other beliefs (that don't have a real-world observation to prove/disprove them) are so widely shared here (as well as more generally in a lot of nerdy communities).
Granted, I can't "prove" this one way or another, but I think it helps to have some frameworks of thinking that are able to persuade people that start from an "intelligence is supreme" perspective towards the centre, much like the alien story might persuade people that start from an "intelligence can't accomplish much" perspective.
I'm pretty surprised by the position that "intelligence is [not] incredibly useful for, well, anything". This seems much more extreme than the position that "intelligence won't solve literally everything", and like it requires an alternative explanation of the success of homo sapiens.
I guess it depends on how many "intelligence-driven issues" are yet to solve and how important they are, my intuition is that the answer is "not many" but I have very low trust in that intuition. It might also be just the fact that "useful" is fuzzy and my "not super useful" might be your "very useful", and quantifying useful gets into the thorny issue of quantifying intuitions about progress.
The question you should be asking is not whether IQ is correlated with success, but whether it's correlated with success after controlling for other traits. I.e. being taller than your siblings, facial symmetry and having few coloured spots on your skin are also correlated with success... but they are not direct markers; they simply point to some underlying "causes" (a "good" embryonic environment, which correlates with being born into wealth/safety/etc. | lack of cellular damage and/or ability to repair said damage | proper nutrition growing up... etc.).
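A toy simulation of that point (the model is hypothetical: a latent "good environment" factor drives both a visible marker and success, while the marker itself has zero direct effect):

```python
import random

random.seed(0)
n = 20_000
# Latent factor drives both variables; the marker does nothing on its own.
env = [random.gauss(0, 1) for _ in range(n)]
marker = [e + random.gauss(0, 1) for e in env]   # e.g. height vs siblings
success = [e + random.gauss(0, 1) for e in env]  # marker has no direct path here

def corr(x, y):
    # Pearson correlation, computed by hand to stay dependency-free.
    mx, my = sum(x) / len(x), sum(y) / len(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

print(corr(marker, success))  # ~0.5 despite zero direct effect
```

So a raw correlation of ~0.5 is fully compatible with the marker contributing nothing once you condition on the underlying cause, which is exactly the distinction the IQ question needs.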
Also, my claim is not that humans don't fetishize or value intelligence; my claim is that this fetish specifically pertains to "intelligence of people that are similar enough to me".
I think the thing missing here is "fierce about what".
Being fierce about spacecraft, osk therapy or ecological materials is basically good.
Being fierce about Unix, ML, Rust or FPGAs is morally neutral, but can be good or bad depending on the trends in society and your industry.
Being fierce about My Little Pony, debating people online, arguing for extremist political views, reading up on past wars, being a ""PUA"" and playing StarCraft is bad, bad for society, but more so for the individual, who is slowly consumed by it.
Elon Musk is annoying because he thinks he knows everything and is often too aggressive in imposing his vision, but everybody still likes Elon Musk.
But if someone acted like Elon Musk but couldn't afford a home, raise a family, buy a Tesla, build cool hardware, or go on wild vacations to Ibiza in order to hook up with models... we'd call that person delusional; we'd recommend they take some meds, do some CBT, see a therapist, get some hobbies and try to make friends.
"Progress" can be a terminal goal, and many people might be much happier if they treated it as such. I love the fact that there are fields I can work in that are both practical and unregulated, but if I had to choose between e.g. medical researcher and video-game pro, I'm close to certain I'd be happier as the latter. I know many people who basically ruined their lives by choosing the wrong answer and going into dead-end fields that superficially seem open to progress (or to non-political work).
Furthermore, fields bleed into each other. Machine learning might well not be the optimal paradigm with which to treat <gestures towards everything interesting going on in the world>, but it's the one that works for cultural reasons, and it will likely converge to some of the same good ideas that would have come about had other professions been less political.
Also, to some extent, the problem is one of "culture", not regulation. EoD, someone could always have sold a covid vaccine as a supplement, but who'd have bought it? Anyone is free to do their own research into anything, but who'll take them seriously?... etc.
I've been thinking a lot about replacing statistics with machine learning and how one could go about that. I previously tried arguing that the "roots" of a lot of classical statistical approaches are flawed, i.e. they make too many assumptions about the world and thus lead to faulty conclusions and overly complex models with no real insight.
I kind of abandoned that avenue once I realized people back in the late 60s and early 70s were making that point and proposing what are now considered machine learning techniques as a replacement.
I consider myself a skeptic empiricist, to the extent I can be, for it's a difficult view to hold.
I don't think this community or Eliezer's ideas are empiricist; they are fundamentally rationalist:
Timeless decision theory
Assumptions about experimental perfection that lead to EY's incoherent ramblings on physics
Everything that's part of the AI doomsday cult views
These are highly rationalist things, I suspect stemming from a preschool "intelligence is useful" prior that most people failed to introspect on, and that is pretty correct unless taken to an extreme. But it's reasoning from that uncommon a prior (after all, empiricists also start from something; it's just that their starting point is one that's commonly shared by all or most humans, e.g. obviously observable features), and others like it, that leads to the Sequences and to most discussion on LW.
Which is not to say that it's bad; I've personally come to believe it's as okay as any religion, but it shouldn't be confused with empiricism and empiricist methods.
Seems to indicate that GPT-2 uses a byte-level BPE (though maybe the implementation here is wrong), where I'd have expected it to use something closer to a word-by-word tokenizer with exceptions for rare words (i.e. a sub-word tokenizer that's basically acting as a word tokenizer 90% of the time). And maybe GPT-3 uses the same?
Also, it seems that sub-word tokenizers split much more aggressively than I'd have assumed.
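To see the splitting behavior concretely, here's a toy greedy BPE encoder (the merge table is hand-picked for illustration; GPT-2's real tokenizer works on bytes with ~50k merges learned from data):

```python
def bpe_encode(word, merges):
    # Start from single characters and repeatedly apply the
    # highest-priority (lowest-rank) learned merge, BPE-style.
    tokens = list(word)
    while True:
        pairs = {}
        for i in range(len(tokens) - 1):
            pairs.setdefault((tokens[i], tokens[i + 1]), i)  # first occurrence
        candidates = [p for p in pairs if p in merges]
        if not candidates:
            return tokens
        best = min(candidates, key=lambda p: merges[p])
        i = pairs[best]
        tokens = tokens[:i] + [tokens[i] + tokens[i + 1]] + tokens[i + 2:]

# Hypothetical merge table: pair -> rank (lower rank = merged earlier)
merges = {("l", "o"): 0, ("lo", "w"): 1, ("e", "r"): 2}
print(bpe_encode("lower", merges))   # ['low', 'er']
print(bpe_encode("lowest", merges))  # ['low', 'e', 's', 't']
```

Note how "lowest" shatters into single characters past the known stem: anything the merge table hasn't seen falls apart aggressively, which is exactly the behavior that surprised me.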
His contribution to computing, formalizing problems into code, parallelizing, etc
His mathematical contributions (Feynman diagrams, Feynman integrals)
His contributions to teaching/reasoning methods in general.
I agree that I'd want to learn physics from him, I'm just not sure he was an exceptional physicist. Good, but no von Neumann. He says as much in his biographies (e.g. pointing out that one of his big contributions came from randomly pointing to a valve on a schematic and getting people to think about the schematic).
He seems to be good at "getting people to think reasonably and having an unabashedly open, friendly, mischievous and perseverant personality", which seems to be what he's famous for and the only thing he thinks of himself as being somewhat good at. Though you could always argue it's due to modesty.
To give a specific example, this is him "explaining magnets", except that I'm left knowing nothing extra about magnets; I do, however, gain a new understanding of concepts like "level of abstraction", various "A Human's Guide to Words"-ish insights about language use, and some phenomenology around what it means to "understand".
But the use-case for learning from the best is completely different: you study the best when there are no other options. You study the best when the best is doing something completely different, so they're the only one to learn it from.
I feel like I do mention this when I say one ought to learn from similar people.
If you spent 10 years learning <sport> and you are number 10 in <sport> and someone else is number 1 in <sport>, the heuristic of learning from someone similar to you applies.
For instance, back in college I spent a semester on a project with the strongest programmer in my class, and I picked up various small things which turned out to be really important (like "choose a good IDE").
What you are describing here, though, is simply a category error: "the best in class" is not "the best programmer"; there were probably hundreds of thousands better than him on every possible metric.
So I'm not sure how it's relevant.
It might pay to hang out with him, again, based on the similarity criteria I point out: He's someone very much like you, that is somewhat better at the thing you want to learn (programming).
Maybe that's weird writing on my end; the working-out example I'm referring to is the section on professional athletes (i.e. that they never necessarily learned how to do casual, health-focused workouts). A physics teacher might have forgotten what it's like not to know physics 101, but she still did learn physics 101 at some point.
Ah, ok, maybe I was discussing the wrong thing then.
I think sleeping 4-6 hours on some days ought to be perfectly fine, even 0, I'd just argue that keeping the mean around 7-9 is probably ideal for most (but again, low confidence, think it boils down to personalized medicine).
The theory I heard postulated (by the guy who used to record the SSC podcast) is that once people start thinking "better" in reductionist frameworks, they fail to account for non-quantifiable metrics (e.g. death is quantifiable in QALYs, being more isolated isn't).
The rest of your arguments (bpm, cortisol..) apply fully to sports as well I believe.
I don't think so. BPM is slower when one practices sports (see athlete's heart): it will be higher during the activity itself, but mean BPM during the day, and especially at night, is lower.
Personally I've observed this correlation as well, and it seems to be causal~ish, i.e. I can do 3 days on / 3 days off physical activity and notice a decreased resting and sleeping heart rate from the 2nd day of activity up until the 2nd day of inactivity, after which it picks back up.
With cortisol, the mechanism I'm aware of is the same, i.e. exercising increases cortisol afterwards but decreases the baseline. Though here I'm not 100% sure.
This might not hold for the very extreme cases though (strongmen, ultra-marathon runners, etc). Since then you're basically under physical stress for most of the day instead of a few minutes or hours.
re: not encountering info re dangers of oversleep: do you want to comment on the bit about sleep deprivation therapy? Isn't this rather compelling evidence of sleep directly causing bad mood?
Sleep deprivation, I'd assume, works through cortisol and adrenaline, which do give a "better than awfully depressed" mood but can't build up to great moods and aren't sustainable (at least if I'm to trust models à la the one championed by Sapolsky about the effects of cortisol).
Granted, I think it depends, and afaik most people don't feel the need to sleep more than 8-9 hours. The ones I know that "sleep" a lot tend to just hang around in a half-comatose state after overeating or while procrastinating. I think it becomes an issue of "actual sleep" vs "waking up every 30 minutes, checking your phone, remembering life is hard, and trying to sleep again | rinse and repeat 2 to 8 times".
I'd actually find it interesting to study "heavy sleepers" in a sleep lab or with a semi-capable portable EEG (even just 2-4 electrodes should be enough, I guess?) and see if what they do is actually "sleep" past the 9 hour mark. But I'm unaware of any such studies.
But I have low confidence in all of these claims, and I personally dislike epidemiological evidence; I think there's a horrible practice of people trying to """control""" shitty experiments with made-up statistical models that come with impossible assumptions built in. My main decisions about sleep come from pulse-oximeter-based monitoring and correlating that with how I feel, plus other biomarkers (planning to upgrade to an OpenBCI-based EEG soon; I've been holding out for a FreeEEG32 for a while, but I see only radio silence around that). So ultimately the side I fall on is that I dislike the evidence one way or another and think that, much like anything that uses epidemiology as its almost sole source of evidence, you could just scrap the whole thing in favour of a personalized, goal-oriented approach.
The "think" here is more prosaic, as in, it's just not my intuition that this is the case and I think that applies for most other people based on the memeplexes I see circulating out there.
As for why that's my intuition, I can boil it down to what I said in the post: everyone warned me about the effects of not sleeping, but not vice versa.
Is this correct if analyzed on a rational basis?
I don't know, it's not relevant as far as the post is concerned.
From personal experience, I associate shorter sleep with:
Increased cortisol (measured in urine, so of arguable quality)
Increased time getting into ketosis (and an overall lower ketone+glucose balance, i.e. given a*G + b*BhB = y, where y is the number at which I feel ok and which seems in line with what epidemiology would recommend as optimal glucose levels for lifespan, I will tend to fall consistently below y given lack of sleep, which manifests as being tired, sometimes feeling lightheaded, and being able to walk, lift and swim less... hopefully that makes sense? I'm not sure how mainstream of a nutrition framework this is)
Increased heart rate (significant: ~9 bpm, controlling~ish for effort, and ~7 bpm at night)
Feeling a bit off and feeling like time passes faster.
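To make the ketone/glucose balance above concrete, here's a minimal sketch. The coefficients a, b and the threshold y are hypothetical placeholders I've invented for illustration; the comment doesn't give real values:

```python
# Sketch of the a*G + b*BhB = y balance described above.
# A, B, Y are HYPOTHETICAL numbers chosen only for illustration.
A, B, Y = 1.0, 1.5, 90.0

def energy_balance(glucose, bhb):
    """Weighted combination of blood glucose and beta-hydroxybutyrate."""
    return A * glucose + B * bhb

def feels_ok(glucose, bhb):
    # Falling below the personal threshold Y maps to feeling tired/lightheaded.
    return energy_balance(glucose, bhb) >= Y

print(feels_ok(85, 10))  # True:  85 + 15  = 100  >= 90
print(feels_ok(70, 5))   # False: 70 + 7.5 = 77.5 <  90
```

Under this framing, "lack of sleep lowers the balance" just means both inputs drop enough that the weighted sum falls below y.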
But that's correlation, not causation; e.g. if I smoke or vape during a day I'm likely to sleep less that night, so is smoking/vaping fucking up my body, or is it the lack of sleep, or is the distinction even possible given the inferential capabilities of biology in the next 500+ years? I don't know.
More broadly, I know that looking back at months with plenty of sleep, I feel much better than when I sleep less. But maybe that's because I sleep less overall when I'm feeling down, and also feel less happy (by definition) and am less productive, and maybe low sleep is actually a mechanism against feeling even worse.
Overall I assume most people notice these correlations, though probably less in-depth, based on how commonly people seem to complain about bad sleep, needing to get more sleep... etc and how rarely the opposite is true.
That is the best example I had of how one could, e.g., disagree with a scientific field by just erring on the side of scepticism rather than taking the opposite view.
To answer your critique of that point, though again, I think it bears no or little relation to the article itself:
The "predictions" by which the theory is judged here are just as fuzzy and inferentially distant.
I am not a cosmologist; what I've read regarding cosmology has mainly been papers around unsupervised and semi-supervised clustering on noisy data. Incidental evidence from those has made me doubt the complex ontologies proposed by cosmologists, given the seemingly huge degree of error acceptable in the process of "cleaning" data.
There are many examples of people fooling themselves into making experiments to confirm a theory and "correcting" or discarding results that don't confirm it (see e.g. phlogiston, the mass of the electron, the pushback against proton gradients as a fundamental mechanism of cell energy production, vitalism-confirming experiments, Roman takes on gravity).
One way science can guard against modelling an idealized reality that no longer relates to the real world is by making something "obviously" real (e.g. the electric lightbulb, the nuclear bomb, vacuum engines).
Focusing on real-world problems also allows for different types of skin in the game, i.e. going against the consensus for profit, even if you think the consensus is corrupt.
Cosmology is a field that requires dozens of years to "get into"; it has no practical applications that validate its theories, and its only validation comes from observational evidence using data that is supposed to describe objects that are, again, a huge inferential distance away in both time/space and SNR... data which is heavily cleaned based on models created and validated by cosmology.
So I tend to err on the side of "bullshit" given the lack of relevant predictions that can be validated by literally anyone other than a cosmologist or a theoretical physicist; it could be someone who's provably good in high-energy physics validating an anomaly (e.g. a gravitational anomaly causing a laser to behave weirdly around the time two black holes were predicted to merge).
Hopefully this completes the picture and exhibits my point better.
And what I'm saying is that I agree. As in, I'm not arguing that there's no reason for the slope to be the way it is; I'd think most slopes are asymmetric exactly because of the very real asymmetric risks/rewards they map to.
It is part of the problem though, it's actually THE problem here.
You can use normal language to describe anything that would be of use to me, anything relevant about the world that I do not understand. In some cases (e.g. an invention) real-world examples would also be required, but in others (e.g. a theory), words, almost by definition, ought to be enough.