Posts

Against intelligence 2021-06-08T13:03:49.838Z
If you've learned from the best, you're doing it wrong 2021-03-08T13:14:13.038Z
The slopes to common sense 2021-02-22T19:22:03.931Z
Has anyone on LW written about material bottlenecks being the main factor in making any technological progress? 2021-01-27T14:14:18.556Z
Revelation and mathematics 2021-01-25T12:26:14.056Z
I don't want to listen, because I will believe you 2020-12-28T14:58:34.952Z
What are intuitive ways for presenting certainty/confidence in continuous variable inferences (i.e. numerical predictions)? 2020-12-25T00:55:39.451Z
[Meta?] Using the LessWrong codebase for a blog 2020-12-20T03:05:55.462Z
Machine learning could be fundamentally unexplainable 2020-12-16T13:32:36.105Z
Costs and benefits of metaphysics 2020-11-09T14:31:13.718Z
What was your behavioral response to covid-19 ? 2020-10-08T19:27:07.460Z
The ethics of breeding to kill 2020-09-06T20:12:00.519Z
Longevity interventions when young 2020-07-24T11:25:35.249Z
Divergence causes isolated demands for rigor 2020-07-15T18:59:57.606Z
Science eats its young 2020-07-12T12:32:39.066Z
Causality and its harms 2020-07-04T14:42:56.418Z
Training our humans on the wrong dataset 2020-06-21T17:17:07.267Z
Your abstraction isn't wrong, it's just really bad 2020-05-26T20:14:04.534Z
What is your internet search methodology ? 2020-05-23T20:33:53.668Z
Named Distributions as Artifacts 2020-05-04T08:54:13.616Z
Prolonging life is about the optionality, not about the immortality 2020-05-01T07:41:16.559Z
Should theories have a control group 2020-04-24T14:45:33.302Z
Is ethics a memetic trap ? 2020-04-23T10:49:29.874Z
Truth value as magnitude of predictions 2020-04-05T21:57:01.128Z
When to assume neural networks can solve a problem 2020-03-27T17:52:45.208Z
SARS-CoV-2, 19 times less likely to infect people under 15 2020-03-24T18:10:58.113Z
The questions one needs not address 2020-03-21T19:51:01.764Z
Does donating to EA make sense in light of the mere addition paradox ? 2020-02-19T14:14:51.569Z
How to actually switch to an artificial body – Gradual remapping 2020-02-18T13:19:07.076Z
Why Science is slowing down, Universities and Maslow's hierarchy of needs 2020-02-15T20:39:36.559Z
If Van der Waals was a neural network 2020-01-28T18:38:31.561Z
Neural networks as non-leaky mathematical abstraction 2019-12-19T12:23:17.683Z
George's Shortform 2019-10-25T09:21:21.960Z
Artificial general intelligence is here, and it's useless 2019-10-23T19:01:26.584Z

Comments

Comment by George (George3d6) on What are good resources for gears models of joint health? · 2021-06-18T13:37:58.687Z · LW · GW

If your problem is personal, i.e. you're dealing with joint issues, then unless you're suffering from a muscle-wasting disease or are over the age of 50, reading about this stuff will be low-yield.

Long term joint pain is solved by:

  • Strengthening muscles so as not to put strain on "weak" joints [evidence: solid]
  • Hormetic effects of joint usage [evidence: weak clinically, but look at e.g. people doing yoga; I'd say this is an issue of researchers not studying the right demographics]
  • Zone 2 training, aka cardio, which lets you partition fuel to muscles more efficiently and thus move more without suboptimal muscle usage [evidence: I'd assume moderate, but unsure]
  • Stability training [evidence: not good, because everyone disagrees about what exactly it involves, but basically all physiotherapists do some form of stability training, so it's obviously useful | overall, you can pick a specific older technique and find solid evidence for it; newer approaches might actually be better, but are less tested]

Now, can you optimize past that? Sure you can.

But unless you are already doing, say, 2 hours of zone 2 4-5 times a week, 30 minutes of resistance training 2-3 times a week (the kind where you are in excruciating pain by the end, i.e. proper resistance training, not aerobics masquerading as resistance training), and 20-40 minutes of daily stability training (could be morning yoga, could be stretches recommended by a therapist, could be whatever), then reading up on joint pain will be useless.

 

It may be that you are an athlete, in which case discount the above: if you're doing 4-6 hours of effort per day on average, then a better model of movement is probably the key. But even then it might make more sense to take a scientific approach: just try different things and be quick to quantify (e.g. don't look for joint pain after trying a new style of movement, look for proxies in your blood).

 

But again, if you're not an athlete, by reading up on this stuff you are simply running away from the real solution, which involves the hard work of building a pattern of 1-2 hours of varied exercise every day.

Comment by George (George3d6) on Shall we count the living or the dead? · 2021-06-14T20:02:31.554Z · LW · GW

It is ultimately about interpretation.

This paradigm doesn't matter if the physician has in mind a cost/benefit matrix for the treatment, in which it would be fairly easy to plug in raw experimental data no matter how the researchers chose to analyze it.

More broadly, see the comment by ChristianKl.

Comment by George (George3d6) on Shall we count the living or the dead? · 2021-06-14T11:02:33.775Z · LW · GW

This to me seems like a non-issue.

The core problem here is that doctors don't know how to interpret basic probabilities; the solution to that is deregulation, in order to offload the work of decision-making from humans to decision trees.

Discussions like this one are akin to figuring out how to get paedophiles to wear condoms more often: in principle they could be justified if the benefit/cost ratio were immense, but they are a step in a tangential direction and move focus away from the core issue (which is, again: why are your symptoms, traits and preferences not weighted by a decision tree in order to determine your medication?).

This applies more broadly to any mathematical computation that is left to the human brain instead of being offloaded to a computer. It's literally insane that a small minority of very protectionist professions are still allowed (and indeed, to some extent forced) to do this... it's like forcing accountants to calculate with pen and paper instead of entering the numbers into a formula in Excel.
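The kind of offloading I mean really is a few lines of code. Here's a minimal sketch; the drug names, probabilities and utilities are all made up for illustration, but numbers like these could be plugged in straight from raw trial data:

```python
# Hypothetical sketch: a treatment choice as an explicit cost/benefit
# computation instead of a clinician's mental arithmetic.
# All probabilities and utilities below are invented for illustration.

def expected_utility(p_benefit, benefit, p_harm, harm):
    """Expected utility of a treatment given trial-derived probabilities."""
    return p_benefit * benefit - p_harm * harm

def choose_treatment(treatments):
    """Pick the treatment with the highest expected utility.

    treatments: dict mapping name -> (p_benefit, benefit, p_harm, harm).
    """
    return max(treatments, key=lambda name: expected_utility(*treatments[name]))

# Made-up numbers for two hypothetical drugs:
options = {
    "drug_a": (0.60, 10.0, 0.05, 20.0),  # EU = 6.0 - 1.0 = 5.0
    "drug_b": (0.40, 10.0, 0.01, 20.0),  # EU = 4.0 - 0.2 = 3.8
}
print(choose_treatment(options))  # → drug_a
```

In a real decision tree the patient's symptoms, traits and preferences would change the utilities per patient, but the computation stays this mechanical.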

Comment by George (George3d6) on Against intelligence · 2021-06-10T21:05:35.092Z · LW · GW

On a species level, though, human intelligence arose to fill a specific evolutionary niche, but that is not proof that more of the same strategy would be better.

Bears fill an evolutionary niche by being able to last a long time without food, having a wide diet and being very powerful, but it doesn't follow that a bear that's 3x bigger, can eat even more things and can survive even longer without food would fare any better.

Indeed, quite the opposite, if a "better" version of a trait doesn't exist that likely means the trait is optimized to an extreme.

And in terms of inter-species "achievements", if the core thing every species wants to do is "survive", then, well, it's fairly easy to conclude that cockroaches will outlive us, various grasses will outlive us or at least die with us, same goes for cats... and let's not even go into extremophiles; those things might have conquered planets far away from ours billions of years before we even existed, and will certainly outlive us.

Now, our goals obviously diverge from those of these animals, so we think "Oh, poor dumb cockroaches, they shan't ever advance as a species lacking x/y/z", but in the umwelt of the cockroach its species has been prospering at an astonishing rate in the directions that are relevant to it.

Similarly, we are already subpar at many tasks compared to various algorithms, but that is rather irrelevant, since those algorithms aren't made to fill the niches we do; the very need for them comes from us being unable to fill those niches.

Comment by George (George3d6) on Against intelligence · 2021-06-10T09:29:50.341Z · LW · GW

Roughly speaking, yes. I'd grant some % error, and I assume most would be co-founders, or among the first researchers or engineers.

Back then people literally made 1-niche image recognition startups that work.

I mean, even now there are so many niches for ML where a team of rather mediocre thinkers (compared to, say, the guys at DeepMind) can get millions in seed funding with basically zero revenue and very aggressive burn, just by proving very abstractly that they can solve some problem nobody else is solving.

I'm not sure what the deluge of investment and contracts was like in 2008, but basically everyone publishing stuff about convolutions on GPUs is a millionaire now.

It's obviously easy to "understand that it was the right direction"... with the benefit of hindsight. Much like now everyone "understands" that transformers are the future of NLP.

But in general the field of "AI" has very few real visionaries who, by luck or skill, bring about progress, and even just being able to spot said visionaries and get on the bandwagon early enough is a way to become influential and wealthy beyond belief.

I don't claim I'm among those visionaries, nor that I found the correct bandwagon. But some people obviously do, since the same guys are implicated in an awful lot of industry-shifting orgs and research projects.

I'm not saying you should only listen to those guys, but for laying out a groundwork, forming mental models on the subject, and distilling facts from media fiction, those are the people you should listen to.

Comment by George (George3d6) on Against intelligence · 2021-06-09T13:20:40.117Z · LW · GW

I think "very" is much too strong, and insofar as this is true in the human world, that wouldn't necessarily make it true for an out-of-distribution superintelligence, and I think it very much wouldn't be. For example, all you need is superintelligence and an internet connection to find a bunch of zero-day exploits, hack into whatever you like, use it for your own purposes (and/or make tons of money), etc. All you need is superintelligence and an internet connection to carry on millions of personalized charismatic phone conversations simultaneously with people all around the world, in order to convince them, con them, or whatever. All you need is superintelligence and an internet connection to do literally every remote-work job on earth simultaneously.

You're thinking "one superintelligence against modern spam detection"... or really against spam detection from 20 years ago. It's no longer possible to mass-call everyone in the world because, well, everyone is doing it.

Same with 0-day exploits: they exist, but most companies have e.g. IP-based rate limiting on various endpoints that makes it prohibitively expensive to exploit things like e.g. Spectre.

And again, that's with current tech, by the time a superintelligence exists you'd have equally matched spam detection.

That's my whole point, intelligence works but only in zero-sum games against intelligence, and those games aren't entirely fair, thus safeguarding the status quo.

<Also, I'd honestly suggest that you at least read AI alarmists with some knowledge of the field, there are plenty to be found, since it generates funding; but reading someone who "understood AI" 10 years ago and doesn't own a company valued at a few hundred million is like reading someone who "gets how trading works" but works at Walmart and lives with his mom>

Comment by George (George3d6) on Against intelligence · 2021-06-08T16:54:44.432Z · LW · GW

I think the usual rejoinder on the "AI go foom" side is that we are likely to overestimate x by underestimating what really effective thinking can do

Well, yeah, and on the whole, it's the kind of assumption that one can't scientifically prove or disprove. It's something that can't be observed yet and that we'll see play out (hopefully) this century.

I guess the main issue I see with this stance is not that it's unfounded, but that its likely cause is something like <childhood indoctrination to hold good grades, analytical thinking, etc. as the highest values in life>. That would perfectly explain why it seems to be readily believed by anyone who stumbles upon LessWrong, whereas few or no other beliefs (that don't have a real-world observation to prove or disprove them) are so widely shared here (as well as more generally in a lot of nerdy communities).

Granted, I can't "prove" this one way or another, but I think it helps to have some frameworks of thinking that are able to persuade people that start from an "intelligence is supreme" perspective towards the centre, much like the alien story might persuade people that start from an "intelligence can't accomplish much" perspective.

Comment by George (George3d6) on Against intelligence · 2021-06-08T16:45:26.022Z · LW · GW
  • I'm pretty surprised by the position that "intelligence is [not] incredibly useful for, well, anything". This seems much more extreme than the position that "intelligence won't solve literally everything", and like it requires an alternative explanation of the success of homo sapiens.

 

I guess it depends on how many "intelligence-driven issues" are yet to be solved and how important they are; my intuition is that the answer is "not many", but I have very low trust in that intuition. It might also just be that "useful" is fuzzy, and my "not super useful" might be your "very useful"; quantifying "useful" gets into the thorny issue of quantifying intuitions about progress.

Comment by George (George3d6) on Against intelligence · 2021-06-08T16:41:40.328Z · LW · GW

The question you should be asking is not whether IQ is correlated with success, but whether it's correlated with success after controlling for other traits. I.e. being taller than your siblings, facial symmetry and having few coloured spots on your skin are also correlated with success... but they are not direct markers; they simply point to some underlying "causes" (a "good" embryonic environment, which correlates with being born into wealth/safety/etc. | lack of cellular damage and/or the ability to repair said damage | proper nutrition growing up... etc.).

Also, my claim is not that humans don't fetishize or value intelligence; my claim is that this fetish specifically pertains to "intelligence of people that are similar enough to me".

Comment by George (George3d6) on Against intelligence · 2021-06-08T16:37:42.312Z · LW · GW

I guess I should have worded it as "while most people". I certainly agree some people can "think the pain away", and hypnosurgery is a thing and has been for over 100 years, so yeah.

Comment by George (George3d6) on Re: Fierce Nerds · 2021-05-20T14:12:27.380Z · LW · GW

I think the thing missing here is "fierce about what".

Being fierce about spacecraft, osk therapy or ecological materials is basically good.

Being fierce about Unix, ML, Rust or FPGAs is morally neutral, but can be good or bad depending on the trends in society and your industry.

Being fierce about My Little Pony, debating people online, arguing for extremist political views, reading up on past wars, being a ""PUA"" and playing StarCraft is bad, bad for society, but more so for the individual, who is slowly consumed by it.

Elon Musk is annoying because he thinks he knows everything and is often too aggressive in imposing his vision, but everybody still likes Elon Musk.

But if someone acted like Elon Musk yet couldn't afford a home, raise a family, buy a Tesla, build cool hardware, or go on wild vacations to Ibiza to hook up with models... we'd call that person delusional; we'd recommend they take some meds, do some CBT, see a therapist, get some hobbies and try to make friends.

Comment by George (George3d6) on Death by Red Tape · 2021-05-02T18:06:44.425Z · LW · GW

"Progress" can be a terminal goal, and many people might be much happier if they treated it as such. I love the fact that there are fields I can work in that are both practical and unregulated, but if I had to choose between e.g. medical researcher and video-game pro, I'm close to certain I'd be happier as the latter. I know many people who basically ruined their lives by choosing the wrong answer and going into dead-end fields that superficially seem open to progress (or to non-political work).

Furthermore, fields bleed into each other. Machine learning might well not be the optimal paradigm in which to treat <gestures towards everything interesting going on in the world>, but it's the one that works for cultural reasons, and it will likely converge to some of the same good ideas that would have come about had other professions been less political.

Also, to some extent, the problem is one of "culture", not regulation. At the end of the day, someone could always have sold a covid vaccine as a supplement, but who'd have bought it? Anyone is free to do their own research into anything, but who'll take them seriously?... etc.

Comment by George (George3d6) on George's Shortform · 2021-04-05T11:06:39.410Z · LW · GW

I've been thinking a lot about replacing statistics with machine learning and how one could go about that. I previously tried arguing that the "roots" of a lot of classical statistical approaches are flawed, i.e. they make too many assumptions about the world and thus lead to faulty conclusions and overly complex models with no real insight.

I kind of abandoned that avenue once I realized people back in the late 60s and early 70s were making that point and proposing what are now considered machine learning techniques as a replacement.

So instead I've decided to focus any further anger at bad research, and at people using nonsensical constructs like p-values, on trying to popularize better approaches based on predictive modeling.
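To sketch the kind of shift I mean (a toy illustration with a deliberately trivial "model" and synthetic data of my own invention, not a prescription): instead of asking "is the difference between these two groups significant?", ask "does a model fit on part of the data predict group membership on held-out data better than chance?"

```python
import random
import statistics

def holdout_accuracy(xs, labels, train_frac=0.7, seed=0):
    """Predictive alternative to a significance test: fit a trivial
    threshold 'model' on a training split and report how well it
    predicts group membership (0 or 1) on the held-out split."""
    rng = random.Random(seed)
    idx = list(range(len(xs)))
    rng.shuffle(idx)
    cut = int(len(idx) * train_frac)
    train, test = idx[:cut], idx[cut:]
    m0 = statistics.mean(xs[i] for i in train if labels[i] == 0)
    m1 = statistics.mean(xs[i] for i in train if labels[i] == 1)
    threshold = (m0 + m1) / 2
    # Predict the group whose training mean lies on the same side
    # of the threshold as the observation.
    def predict(x):
        return 1 if (x > threshold) == (m1 > m0) else 0
    return sum(predict(xs[i]) == labels[i] for i in test) / len(test)

# Two synthetic groups with a genuine difference in means:
rng = random.Random(1)
xs = [rng.gauss(0, 1) for _ in range(100)] + [rng.gauss(2, 1) for _ in range(100)]
labels = [0] * 100 + [1] * 100
print(holdout_accuracy(xs, labels))  # well above the 0.5 chance level
```

The point of the framing is that the output is directly decision-relevant ("how well can I predict out-of-sample?") rather than a statement about a null hypothesis.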

Comment by George (George3d6) on Predictive Coding has been Unified with Backpropagation · 2021-04-04T16:19:33.698Z · LW · GW

There is no relationship between neurons and the "neurons" of an ANN. It's just a naming mishap at this point.

Comment by George (George3d6) on Rationalism before the Sequences · 2021-03-31T09:40:49.644Z · LW · GW

I consider myself a skeptical empiricist, to the extent that I can be, for it's a difficult view to hold.

I don't think this community or Eliezer's ideas are that; they are fundamentally rationalist:

  • Timeless decision theory
  • Assumptions about experimental perfection that lead to Eliezer's incoherent ramblings on physics
  • Everything that's part of the AI doomsday cult views

These are highly rationalist positions, stemming, I suspect, from a preschool "intelligence is useful" prior that most people fail to introspect on, and which is pretty correct unless taken to an extreme. But it's reasoning from that uncommon a prior (after all, empiricists also start from something; it's just that their starting point is one commonly shared by all or most humans, e.g. obviously observable features), and others like it, that leads to the Sequences and to most discussion on LW.

Which is not to say that it's bad. I've personally come to believe it's as OK as any religion, but it shouldn't be confused with empiricism and empiricist methods.

Comment by George (George3d6) on MIRI comments on Cotra's "Case for Aligning Narrowly Superhuman Models" · 2021-03-11T09:31:37.856Z · LW · GW

Sometimes, those tokens represent words and sometimes they represent single characters.

Hmh, ok, quick update to my knowledge that I should have done before: https://huggingface.co/transformers/tokenizer_summary.html

It seems to indicate that GPT-2 uses a byte-level BPE (though maybe the implementation here is wrong), where I'd have expected something closer to a word-by-word tokenizer with exceptions for rare words (i.e. a sub-word tokenizer that basically acts as a word tokenizer 90% of the time). And maybe GPT-3 uses the same?

Also, it seems that sub-word tokenizers split much more aggressively than I'd have assumed.
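For anyone else confused by the same thing: setting the byte-level part aside, the BPE mechanics are easy to see in a toy pure-Python sketch (the corpus and merge count here are made up; the real GPT-2 tokenizer learns ~50k merges over bytes):

```python
from collections import Counter

def merge_pair(syms, pair):
    """Fuse every occurrence of an adjacent symbol pair into one symbol."""
    out, i = [], 0
    while i < len(syms):
        if i + 1 < len(syms) and (syms[i], syms[i + 1]) == pair:
            out.append(syms[i] + syms[i + 1])
            i += 2
        else:
            out.append(syms[i])
            i += 1
    return out

def train_bpe(corpus, num_merges):
    """Learn BPE merges: repeatedly fuse the most frequent adjacent pair."""
    vocab = Counter(corpus)              # word -> frequency
    words = {w: list(w) for w in vocab}  # word -> current symbol list
    merges = []
    for _ in range(num_merges):
        pair_counts = Counter()
        for w, freq in vocab.items():
            syms = words[w]
            for pair in zip(syms, syms[1:]):
                pair_counts[pair] += freq
        if not pair_counts:
            break
        best = pair_counts.most_common(1)[0][0]
        merges.append(best)
        for w in words:
            words[w] = merge_pair(words[w], best)
    return merges

def tokenize(word, merges):
    syms = list(word)
    for pair in merges:  # apply learned merges in order
        syms = merge_pair(syms, pair)
    return syms

# Toy corpus: frequent word pieces end up as single tokens, while
# unseen words fall back to smaller pieces (down to characters).
corpus = ["low"] * 5 + ["lower"] * 2 + ["newest"] * 6 + ["widest"] * 3
merges = train_bpe(corpus, num_merges=4)
print(tokenize("lowest", merges))  # → ['low', 'est']
print(tokenize("wider", merges))   # → ['w', 'i', 'd', 'e', 'r']
```

This is why the splits look "aggressive": anything not covered by a learned merge decomposes all the way to characters (or bytes, in GPT-2's case), so rare words become several tokens.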

Complaint retracted.

Comment by George (George3d6) on MIRI comments on Cotra's "Case for Aligning Narrowly Superhuman Models" · 2021-03-11T08:12:19.569Z · LW · GW

<retracted>

Comment by George (George3d6) on If you've learned from the best, you're doing it wrong · 2021-03-09T10:58:52.422Z · LW · GW

interesting.

Comment by George (George3d6) on If you've learned from the best, you're doing it wrong · 2021-03-09T10:53:52.016Z · LW · GW

Wasn't Feynman basically known for:

  1. His contribution to computing, formalizing problems into code, parallelizing, etc
  2. His mathematical contributions (Feynman diagrams, Feynman integrals)
  3. His contributions to teaching/reasoning methods in general.

I agree that I'd want to learn physics from him; I'm just not sure he was an exceptional physicist. Good, but no von Neumann. He says as much in his biographies (e.g. pointing out that one of his big contributions came from randomly pointing to a valve on a schematic and getting people to think about the schematic).

He seems to be good at "getting people to think reasonably and having an unabashedly open, friendly, mischievous and perseverant personality", which seems to be what he's famous for and the only thing he thinks of himself as being somewhat good at. Though you could always argue it's due to modesty.

To give a specific example, this is him "explaining magnets", except that I'm left knowing nothing extra about magnets, though I do gain a new understanding of concepts like "level of abstraction", various "human's guide to words"-ish insights about language use, and some phenomenology around what it means to "understand".

Comment by George (George3d6) on If you've learned from the best, you're doing it wrong · 2021-03-09T10:46:14.763Z · LW · GW

But the use-case for learning from the best is completely different: you study the best when there are no other options. You study the best when the best is doing something completely different, so they're the only one to learn it from.

I feel like I do mention this when I say one ought to learn from similar people.

If you spent 10 years learning how to <sport> and you are nr 10 in <sport> and someone else is nr 1 in <sport>, the heuristic of learning from someone similar to you applies. 

For instance, back in college I spent a semester on a project with the strongest programmer in my class, and I picked up various small things which turned out to be really important (like "choose a good IDE").

What you are describing here, though, is simply a category error: "the best in class" is not "the best programmer"; there were probably hundreds of thousands of programmers better than him on all possible metrics.

So I'm not sure how it's relevant.

It might pay to hang out with him, again, based on the similarity criterion I point out: he's someone very much like you who is somewhat better at the thing you want to learn (programming).

Comment by George (George3d6) on If you've learned from the best, you're doing it wrong · 2021-03-08T15:58:35.983Z · LW · GW

Maybe weird wording on my end: the working-out example I'm referring to is the section on professional athletes (i.e. them never necessarily having learnt how to do casual, health-focused workouts). A physics teacher might have forgotten what it's like not to know physics 101, but she still did learn physics 101 at some point.

Hopefully that makes it more clear?

Comment by George (George3d6) on Fun with +12 OOMs of Compute · 2021-03-02T07:15:42.891Z · LW · GW

The aliens seem to have also included with their boon:

  • Cheap and fast ECC GPU RAM with minute electricity consumption
  • A space-time disruptor that allows CMOS transistors smaller than electrons to serve as the L1 & L2 caches
  • A way of getting rid of electron tunneling at very small scales
  • 12 OOMs better SSDs and fiber-optic connections, plus cures for the host of physical limitations plaguing the mere possibility of those two

Comment by George (George3d6) on The slopes to common sense · 2021-03-02T06:21:03.641Z · LW · GW

Ah, ok, maybe I was discussing the wrong thing then.

I think sleeping 4-6 hours on some days ought to be perfectly fine, even 0; I'd just argue that keeping the mean around 7-9 is probably ideal for most (but again, low confidence; I think it boils down to personalized medicine).

Comment by George (George3d6) on Takeaways from one year of lockdown · 2021-03-02T05:56:23.639Z · LW · GW

The theory I heard postulated (by the guy who used to record the SSC podcast) is that once people start thinking "better" in reductionist frameworks, they fail to account for non-quantifiable metrics (e.g. death is quantifiable in QALYs, being more isolated isn't).

Comment by George (George3d6) on The slopes to common sense · 2021-03-01T23:16:48.518Z · LW · GW

The rest of your arguments (bpm, cortisol..) apply fully to sports as well I believe.

 

I don't think so. BPM is slower when one practices sports (see "athlete's heart"): it will be higher during the activity itself, but mean BPM during the day, and especially at night, is lower.

Personally I've observed this correlation as well, and it seems to be causal~ish, i.e. I can do 3 days on / 3 days off of physical activity and notice decreased resting and sleeping heart rate from the 2nd day of activity up until the 2nd day of inactivity, after which it picks back up.

With cortisol, the mechanism I'm aware of is the same, i.e. exercising increases cortisol afterwards but decreases the baseline. Though here I'm not 100% sure.

This might not hold for the very extreme cases though (strongmen, ultra-marathon runners, etc). Since then you're basically under physical stress for most of the day instead of a few minutes or hours.

re: not encountering info re dangers of oversleep: do you want to comment on the bit about sleep deprivation therapy? Isn't this rather compelling evidence of sleep directly causing bad mood?

Sleep deprivation, I'd assume, works through cortisol and adrenaline, which do give a "better than awfully depressed" mood but can't build up to great moods and aren't sustainable (at least if I am to trust models à la the one championed by Sapolsky about the effects of cortisol).

Granted, I think it depends, and afaik most people don't feel the need to sleep more than 8-9 hours. The ones I know that "sleep" a lot tend to just hang around in a half-comatose state after overeating or while procrastinating. I think it becomes an issue of "actual sleep" vs "waking up every 30 minutes, checking your phone, remembering life is hard and trying to sleep again | rinse and repeat 2 to 8 times".

I'd actually find it interesting to study "heavy sleepers" in a sleep lab, or with a semi-capable portable EEG (even just 2-4 electrodes should be enough, I guess?), and see whether what they do past the 9-hour mark is actually "sleep". But I'm unaware of any such studies.

But I have low confidence in all of these claims, and I personally dislike epidemiological evidence; I think there's a horrible practice of people trying to """control""" shitty experiments with made-up statistical models that come with impossible assumptions built in. My main decisions about sleep come from pulse-oximeter-based monitoring and correlating that with how I feel, plus other biomarkers (planning to upgrade to an OpenBCI-based EEG soon; I've been holding out for a FreeEEG32 for a while, but I see only radio silence around that). So ultimately the side I fall on is that I dislike the evidence one way or another, and I think that, much like anything that uses epidemiology as almost its sole source of evidence, you could just scrap the whole thing in favour of a personalized, goal-oriented approach.

Comment by George (George3d6) on The slopes to common sense · 2021-03-01T22:44:10.839Z · LW · GW

Oh gosh, you're right... both here and on my blog. Sometimes things go wrong during translation to markdown and I don't even notice. Thanks for pointing it out, corrected.

Comment by George (George3d6) on The slopes to common sense · 2021-03-01T17:33:50.737Z · LW · GW

The "think" here is more prosaic, as in, it's just not my intuition that this is the case, and I think that applies to most other people, based on the memeplexes I see circulating out there.

As for why that's my intuition, I can boil it down to what I said in the post: everyone told me so, and everyone warned me about the effects of not sleeping, but not vice versa.

Is this correct if analyzed on a rational basis?

I don't know, it's not relevant as far as the post is concerned.

From personal experience I know that for myself I associate shorter sleep with:

  • Increased cortisol (urine-measured, so of arguable quality)
  • Increased time getting into ketosis (and an overall lower ketone+glucose balance, i.e. given a*G + b*BhB = y, where y is the number at which I feel OK and which seems in line with what epidemiology would recommend as optimal glucose levels for lifespan, I will consistently fall below y given lack of sleep, which manifests as being tired, sometimes feeling lightheaded, and being able to walk, lift and swim less... hopefully that makes sense? I'm not sure how mainstream a nutrition framework this is)
  • Increased heart rate (significant: ~9 bpm, controlling~ish for effort, and ~7 bpm at night)
  • Feeling a bit off, and feeling that time passes faster.

But that's correlation, not causation. E.g. if I smoke or vape during a day, I'm likely to sleep less that night; so is smoking/vaping messing up my body, or is it the lack of sleep, or is the distinction even possible given the inferential capabilities of biology in the next 500+ years? I don't know.

More broadly, I know that looking back at months with plenty of sleep, I feel much better than when I sleep less. But maybe that's because I sleep less overall when I'm feeling down, and also feel less happy (by definition) and am less productive, and maybe low sleep is actually a mechanism against feeling even worse.

Overall I assume most people notice these correlations, though probably less in-depth, based on how commonly people seem to complain about bad sleep, needing to get more sleep... etc., and how rarely the opposite is true.

Comment by George (George3d6) on The slopes to common sense · 2021-02-23T19:45:51.549Z · LW · GW

That is the best example I had of how one could, e.g., disagree with a scientific field by just erring on the side of scepticism rather than taking the opposite view.

***

To answer your critique of that point, though, again, I think it bears little or no relation to the article itself:

  • The "predictions" by which the theory is judged here are just as fuzzy and inferentially distant.
  • I am not a cosmologist; what I've read regarding cosmology has mainly been papers about unsupervised and semi-supervised clustering on noisy data. Incidental evidence from those has made me doubt the complex ontologies proposed by cosmologists, given the seemingly huge degree of error acceptable in the process of "cleaning" data.
  • There are many examples of people fooling themselves into designing experiments to confirm a theory and "correcting" or discarding results that don't confirm it (see e.g. phlogiston, the mass of the electron, the pushback against proton gradients as a fundamental mechanism of cellular energy production, vitalism-confirming experiments, Roman takes on gravity).
  • One way science can be guarded against modelling an idealized reality that no longer relates to the real world is by making something "obviously" real (e.g. the electric lightbulb, the nuclear bomb, vacuum engines).
  • Focusing on real-world problems also allows for different types of skin in the game, i.e. going against the consensus for profit, even if you think the consensus is corrupt.

Cosmology is a field that requires dozens of years to "get into", and it has no practical applications that validate its theories. Its only validation comes from observational evidence using data that is supposed to describe objects that are, again, a huge inferential distance away in both time/space and SNR... data which is heavily cleaned based on models created and validated by cosmology itself.

So I tend to err on the side of "bullshit" given the lack of relevant predictions that can be validated by literally anyone other than a cosmologist or a theoretical physicist. It could be someone provably good in high-energy physics validating an anomaly (e.g. a gravitational anomaly causing a laser to behave strangely around the time two black holes were predicted to merge).

Hopefully this completes the picture and exhibits my point better.

Comment by George (George3d6) on The slopes to common sense · 2021-02-23T18:06:15.133Z · LW · GW

And what I'm saying is that I agree. As in, I'm not arguing that there's no reason for the slope to be the way it is; I'd think most slopes are asymmetric exactly because of the very real asymmetric risks/rewards they map to.

Comment by George (George3d6) on The slopes to common sense · 2021-02-23T08:51:31.091Z · LW · GW

yes, that's what I had intended, thanks for correcting that.

Comment by George (George3d6) on The slopes to common sense · 2021-02-23T04:20:46.627Z · LW · GW

Isn't that kind of the point I'm making?

That most people want to sleep as much as they can and sleep hacking is the controversial choice, but nobody will bat an eyelid if you want to try sleeping more than average.

Comment by George (George3d6) on A non-mystical explanation of "no-self" (three characteristics series) · 2021-01-31T09:07:16.092Z · LW · GW

That was not part of the original problem.

It is part of the problem though, it's actually THE problem here.

You can use normal language to describe anything that would be of use to me, anything relevant about the world that I do not understand. In some cases (e.g. an invention) real-world examples would also be required, but in others (e.g. a theory), words, almost by definition, ought to be enough.

But anyway, as far as I can see we're probably in part talking past each other, not due to ill intention, and I'm not exactly sure how. But I was recently quite immersed reading the comment chains here: https://www.lesswrong.com/posts/mELQFMi9egPn5EAjK/my-attempt-to-explain-looking-insight-meditation-and and https://www.lesswrong.com/posts/tMhEv28KJYWsu6Wdo/kensh, and it seems like we're probably talking past each other in very similar ways.

Comment by George (George3d6) on Has anyone on LW written about material bottlenecks being the main factor in making any technological progress? · 2021-01-29T22:41:34.541Z · LW · GW

What a weird statement. Of course history rules out 99.9% of hypotheses about how the world came to be. We can quibble over the remaining hypotheses, but obvious ones like "the world is 10000 years old" and "human populations levels reached 10 billion at some point in the past" are all easily falsified. Yes, there is some subjectivity in history, but overall, it still reduces the hypothesis space by many many orders of magnitude. 

I will note that the 10,000-years-old thing is hardly ruled out by "history", more so by geology or physics, but point taken: even very little data and bad models of reality can lead to ruling out a lot of things with very high certainty.

We know that many thousands of years of history never had anything like the speed of technological development as we had in the 20th century. There was clearly something that changed during that time. And population is not sufficient, since we had relatively stable population levels for many thousands of years before the beginning of the industrial revolution, and again before the beginning of agriculture.   

This is, however, the kind of area where I always find history doesn't provide enough evidence, which is not to say this would help my point or harm yours. Just to say that I don't have enough certainty that statements like the above have any meaning, and in order to claim what I'd have wanted (what I was asking the question about) I would have to make a similar claim regarding history.

In brief I'd want to argue with the above statement by pointing out:

  1. Technological development has been an ongoing process since the ancient Greeks, with some interruptions, and most of the "important stuff" was figured out a long time ago (I'm fine living with Greek architecture, crop selection, heating, medicine and even logic and mathematics).
  2. "Progress" bringing about issues that we then solve and also call "progress"; e.g. smallpox and the bubonic plague weren't problematic until we "progressed" to cities that made them so. On the whole there's no indication that lifespan or happiness has greatly increased. The increases in lifespan exist, but once you take away "locked up in a nursing home" as "life" and exclude deaths of kids under 1 year (or, alternatively, if you want to claim kids under 1 year are as precious as a fully developed conscious human, once you include abortions in our own death statistics)... we haven't made a lot of "progress" really.
  3. A "cause" being attributed to the burst of technology in some niches in the 20th century, instead of it just being viewed as "random chance", i.e. the random chance of making the correct 2 or 3 breakthroughs at the same time.

And those 3 points are completely different threads that could dismantle the idea you present, but I'm just bringing them up as potential threads. Overall I hold very little faith in them besides (3); I think your view of history is more correct. But there's no experiment I can run to find out, no way I can collect further data, nothing stopping me from overfitting a model to agree with some subconscious bias I have.

In day-to-day life, if I believe something (e.g. neural networks are the best abstractions for generic machine learning) and I'm faced with an issue (e.g. loads of customers are getting bad accuracy from my NN-based solution), I can at least hope to be open-minded enough to try other things and see that I might have been wrong (e.g. gradient tree boosting might be a better abstraction than NNs in many cases) or, failing to find a better working hypothesis that provides experimental evidence, I can know I don't know (e.g. go bankrupt and never get investor money again because I squandered it).
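The sanity check described here (trying a different model family when the current one underperforms) could be sketched roughly like this; the dataset, models and hyperparameters below are illustrative placeholders, not the actual setup from the anecdote:

```python
# Minimal sketch: benchmark gradient tree boosting against a small
# neural network on the same tabular data. All numbers here are
# illustrative; swap in your own dataset and metric.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in for "customer" tabular data
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

for name, model in [
    ("gradient boosting", GradientBoostingClassifier(random_state=0)),
    ("neural network", MLPClassifier(hidden_layer_sizes=(64, 64),
                                     max_iter=500, random_state=0)),
]:
    score = cross_val_score(model, X, y, cv=5).mean()
    print(f"{name}: mean CV accuracy {score:.3f}")
```

On many tabular datasets the boosted trees come out ahead, which is exactly the kind of cheap experimental check that can falsify the "NNs are the best abstraction" belief.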

With the study of history I don't see how I can go through that process. I feel a siren call that says "I like this model of the world", and I can fit historical evidence to it without much issue. And I have no way to properly weight the evidence, and ultimately no experimental proof that could increase or decrease my confidence in a significant way. No "skin in the game", besides wanting to get a warm fuzzy feeling from my historical models.

But again, I think this is not to say that certain hypotheses (e.g. that the Greeks invented a vacuum-based steam engine) can't be confidently discounted, and I think that in and of itself can be quite useful; you are correct there.

Comment by George (George3d6) on Has anyone on LW written about material bottlenecks being the main factor in making any technological progress? · 2021-01-29T00:50:47.570Z · LW · GW

We've seen drastic differences between different societies and different species in this respect, so there clearly is some kind of property here

Is there?

Writing, agriculture, animal husbandry, similar styles of architecture and most modern inventions from flight to nuclear energy to antibiotics seem to have been developed in a convergent way given some environmental factors.

But I guess it boils down to a question of studying history, which ultimately has no good data and is only good for overfitting one's biases. So I guess there may be no way to actually argue for or against either of the positions here, now that I think about it.

So thanks for your answer, it cleared a few things up for me, I think, when constructing this reply.

Comment by George (George3d6) on Has anyone on LW written about material bottlenecks being the main factor in making any technological progress? · 2021-01-29T00:32:13.302Z · LW · GW

You can send DNA sequences to businesses right now that will manufacture the proteins that those sequences encode

Have you ever tried this? I have; it comes with loads of asterisks.

Developing a good theory of proteins seems pretty much a pure-Reason problem

Under the assumption that we know all there is to know about proteins, which I've seen no one claim. Current knowledge is limited and in vitro, and doesn't generalize to "weird" families of proteins.

"Protein-based nanotechnology" requires:

  • weird proteins with properties not encountered yet
  • complex in-vivo behavior, i.e. where we still have no clue about basically anything, since you can't crystallize a protein to tell how it's folding in vivo; those nice animations you see on YouTube are, I'm afraid, pure speculation

So no, not really. You can maybe get a tomato to be a bit spicy; I saw a stream where one of the most intelligent and applied biology-focused polymaths I've ever seen (Thought Emporium) tried to figure out if there was a theoretical way to do it and gave up after 30 minutes.

You can get stuff to glow, that too, and it can be really useful, but we've been doing that for 200+ years.

You can make money by simply choosing a good product on Alibaba, making a website that appeals to people, using good marketing tactics and drop-shipping, no need for any physical interaction. The only thing you need is a good theory of consumer psychology. That seems like an almost-pure-Reason problem. 

It seems completely obvious to me that reason is by far the dominant bottleneck in obtaining control over the material world.

I think the thing you fail to understand here is randomness, chance.

You think "Ok, this requires little physical labour, so 100% of it is thinking", but you fail to grasp even the possibility that there could be things where there is not enough information for reason to be useful, or even worse, that almost everything falls into that category.

If I choose 1,000,000 random products from Alibaba and resell them on Amazon at 3x, I'm bound to hit gold with some of them.

But if I only hit gold with 1/100 products, I'm still, on the whole, losing 97% of my investment.

You think "but I know a guy that sold X and he chose X based on reason and he made 3x his money back"

And yes, you might, but that doesn't preclude the existence of another 99 guys you don't know of who tried the same thing and lost, because they usually don't make internet videos telling you about it.

Granted, I'm being coy here; realistically, reselling works on a "huge risk of collapse" model (most things make back 1.1x, but you're always exposed to the thing you're buying not coming through, not being in demand, or otherwise not facilitating the further sale), but the above model is easier to understand.

And again, the important thing here is that "will X resell on Amazon" can be something that is literally impossible to figure out without buying X and trying to sell it on Amazon.

And "will 10X resell on Amazon" and "will 100X resell on Amazon" are, similarly, not the same question. There's some similarity between them, but figuring out how that number before "X" scales might itself only be determinable by experiment.
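The arithmetic above can be made concrete with a tiny expected-value model (the 1/100 hit rate and 3x markup are the illustrative numbers from this comment, not real marketplace figures):

```python
# Toy expected-value model of random reselling: each product costs
# 1 unit, sells at 3x markup, and only `hit_rate` of them ever sell.
# All parameters are illustrative, not real marketplace data.
def reselling_return(n_products, hit_rate=0.01, markup=3.0, unit_cost=1.0):
    cost = n_products * unit_cost
    revenue = n_products * hit_rate * markup * unit_cost
    return (revenue - cost) / cost  # fractional return on investment

# A 1/100 hit rate at 3x loses 97% of the investment overall;
# the occasional gold strike doesn't save the portfolio.
print(round(reselling_return(1_000_000), 4))  # -0.97
```

The point being that the occasional visible winner is perfectly compatible with the portfolio as a whole being deeply unprofitable.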

 

***

Then again, I wouldn't claim to be an expert in any of those fields, but neither are you, and the thing that I don't get is why you are so certain that "reason" is the main bottleneck when, in any given field, the actual experts seem to be clamoring for more experiments and more, better labs... and smart grad students are a dime a dozen.

Or, forget the idea of consensus; who's to say what the consensus is. But why assume you can see the bottleneck at all? Why not think "I have no idea what the bottleneck is"? To be perfectly fair, if you queried me for long enough, that's probably the answer I'd give.

The perspective you espouse paints a world that makes no sense, where a deep conspiracy has to be at play for the most intelligent people not to have taken over the world.

Comment by George (George3d6) on Has anyone on LW written about material bottlenecks being the main factor in making any technological progress? · 2021-01-28T14:15:12.808Z · LW · GW

I believe this echoes my thoughts perfectly; I might quote it in full if I ever do get around to reviving that draft.

The bit about "perfect" as not giving slack for development could, I think, be used even in the single-individual scenario, if you assume any given "ideal" action has a lower chance of discovering something potentially useful than a "mistake". I.e. adding:

  • Actions have unintended and unknown consequences that reveal an unknown landscape of possibilities
  • Actions have some % chance of being "optimal", but one can never be 100% certain they are so, just weigh them as having a higher or lower chance of being so
  • "optimal" is a shifting goal-post and every action will potentially shift it

I think the "tinkerer" example is interesting, but even that assumes "optimal" to be goals dictated by natural selection, and as per the sperm bank example, nobody cares about those goals per se; their subconscious machinery and the society they live in "care". So maybe a complementary individual-focused example would be a world in which inactivity is equivalent to the happiest state possible (i.e. optimal for the individual), or at least some form of action that does not lead to itself, or something similar to its host, being propagated through time.

Comment by George (George3d6) on Has anyone on LW written about material bottlenecks being the main factor in making any technological progress? · 2021-01-28T14:12:34.591Z · LW · GW

This, I assume, you'd base on a "hasn't happened before, no other animal or thing similar to us is doing it as far as we know, so it's improbable we will be able to do it" type assumption? Or something different?

Comment by George (George3d6) on Has anyone on LW written about material bottlenecks being the main factor in making any technological progress? · 2021-01-28T14:09:48.784Z · LW · GW

I don't necessarily think you have to take the "AI" example for the point to make sense though.

I think "reasoning your way to a distant inference", as a human, is probably a far less controversial example that could be used here. Most people here seem to assume there are ways to make distant inferences (e.g. about the capabilities of computers in the far-off future), which historically seems fairly far-fetched: it almost never happens, and when it does it is celebrated, but the success rate seems fairly small and there doesn't seem to be a clear formula for it that works.

Comment by George (George3d6) on Has anyone on LW written about material bottlenecks being the main factor in making any technological progress? · 2021-01-28T14:05:43.204Z · LW · GW

The point of focusing on AI Alignment isn't that it's an efficient way to discover new technology but that it's a way that makes it less likely that humanity will develop technology that destroys humanity. 

 

Is "proper alignment" not a feature of an AI system, i.e. something that has to be invented/discovered/built?

This sounds like semantics vis-a-vis the potential stance I was referring to above.

Comment by George (George3d6) on A non-mystical explanation of "no-self" (three characteristics series) · 2021-01-25T13:27:24.515Z · LW · GW

No, and a caveman would have no use for them. 

I'd instead try to explain brick making or crop selection or reaching high temperatures using clay or maybe some geometry (?)

But if your claim is "What I have here is a technique so powerful that it's akin to inventing computing in prehistoric times"

Then it begs the question: "So why aren't you emperor of the world yet?"

Comment by George (George3d6) on How can I find trustworthy dietary advice? · 2021-01-17T18:47:48.133Z · LW · GW

Have you considered that the reason for this is that the optimal diet is highly variable, both genetically and environmentally?

For example, whatever a 16-year-old schoolboy from Jordan and a 56-year-old recently retired woman from Nebraska ought to eat to maximize health/performance/happiness might be entirely different and, if switched, have disastrous effects.

But these effects could also be present in much more similar individuals and even in the same individual over time. Most humans used to eat entirely different diets depending on the weather.

Comment by George (George3d6) on What is going on in the world? · 2021-01-17T18:05:05.740Z · LW · GW

Bold claims about objective reality, unbacked by evidence, seemingly not very useful or interesting (though this is subjective), and backed by appeals to tribal values (declaring AI alignment a core issue, blue-tribe neoliberalism assumed to be the status quo, etc.).

This seems to go against the kind of thing I ever assumed could make it to LessWrong sans covid, yet here it is, heavily upvoted.

Is my take here overly critical?

Comment by George (George3d6) on A non-mystical explanation of "no-self" (three characteristics series) · 2021-01-01T01:48:24.768Z · LW · GW

Can you give me one example of an invention that couldn't be communicated using the language of the time?

For example, "a barrel with a fire and a tiny wheel inside that spins by exploiting the gust of wind drawn towards the flame after it consumes all the air inside, and using an axle can be made to spin other wheels"... is a barbaric description of a one-chamber pressure-based steam engine (and I could add more paragraphs' worth of detail), but it's enough to explain it to people 2000 years before the steam engine was invented.

Comment by George (George3d6) on A non-mystical explanation of "no-self" (three characteristics series) · 2020-12-31T22:20:31.171Z · LW · GW

I can reduce "pushing the envelope" to other pre-existing concepts. It's a shorthand, not a whole new invention (which really would make little sense; new language is usually created to describe new physical phenomena or to abstract over existing language. Maybe an exception or two exists, but I assume they are few.)

Comment by George (George3d6) on A non-mystical explanation of "no-self" (three characteristics series) · 2020-12-31T22:18:29.622Z · LW · GW

Your metaphor doesn't quite work, because you are trying really hard to show me the color red, only to then argue I'm a fool for thinking there is such a thing as red.

As in, it might be that no person on Earth has such a naive concept of subjective experience, but they are not used to expressing it in language. Then, when you try to make them express subjective experience in language and/or explain it to them, they say:

  • Oh, that makes no sense, you're right

Instead of saying:

  • Oh yeah, I guess I can't define this concept central to everything about being human after 10 seconds of thinking in more than 1 catchphrase.

But again, what I'm saying above is subjective. Please go back and consider my statement regarding language; if we disagree there, then there's not much to discuss (or the discussion is rather much longer and moves into other areas), because at the end of the day I literally cannot know what you're talking about. Maybe I have a vague impression from years of meditation as to what you are referring to... or maybe not. Maybe whatever you had in your experience is much different, and we are discussing two completely different things, but since we are very vague when referring to them, we think we have a disagreement in what we see, when instead we're just looking in completely different places.

Comment by George (George3d6) on A non-mystical explanation of "no-self" (three characteristics series) · 2020-12-31T19:32:47.415Z · LW · GW

I can't say something is right or wrong or probable unless I have a system of logic under which to judge it.

Language is a good proxy for a system of logic, though sometimes (e.g. math and science) it's not rigorous enough. But for most discussion it seems to do kind of fine.

If you are introducing new concepts that can't be expressed using the grammar and syntax of the English language, I'm not sure there's a point in discussing the idea.

Using new terms or even syntax to "reduce" a longer idea is fine, but you have to be able to define the new terms or syntax using the old ones first.

Doesn't that seem kind of obvious?


Just to be clear here, my stance is that you can actually describe the feeling of "being a self" in a way that makes sense, but that way is bound to be somewhat unique to the individual, and complicated.

Trying to reduce it to a 10 word sentence results in something nonsensical because the self is a more complex concept, but one's momentary experience needn't be invalid because it can't be explained in a few quick words.

Nor am I denying that introspection is powerful, but introspection in the typical Buddhist way that you prescribe seems too simplistic to me, and empirically it just leads to people content with being couch potatoes.

If you tried solving the problem instead of calling it a paradox based on a silly formulation, if you tried rescuing the self, you might get somewhere interesting... or maybe not. But the other way seems both nonsensical (impossible to explain in a logically consistent way) and empirically leads to meh-ish results, unless your wish in life is to be a meditation or yoga teacher.

Comment by George (George3d6) on A non-mystical explanation of "no-self" (three characteristics series) · 2020-12-31T17:05:47.456Z · LW · GW

As for paraphrasing your argument, that's the thing: I can't. My point here is that you don't have an argument; you are abusing language without realizing it.

I'm not saying you're doing so maliciously or because you lack understanding of English. What I call "abuse" here would pass in most other essays, but in this case the abuse ends up sweeping under the rug a bunch of unsolvable phenomenological issues that would normally rise to oppose your viewpoint.

 Let me try to give a few examples:

from behind their eyes; but they are actually aware of the sensation, as opposed to being aware from it

The English language lacks the concept of "being aware from a sensation"; actually, the English language lacks any concept around a "sensation" other than "experiencing it".

"I am experiencing the world from behind my eyes" and "I am experiencing a pain in my foot" are exactly the same in terms of a "self" that is "having" a "sensation". This is very important, since in many languages, such as those that created various contemplative religions, "body" and "soul" are different things, with the "soul" seeing and the "body" feeling, and the "self" being the "soul" (I'm not a Pali scholar, just speculating as to why the sort of expression above might have made sense to ancient Hindus/Buddhists). In English (and presumably for English speakers, since otherwise they'd feel the need for two terms) this idea is not present. The same "I" is seeing the world and experiencing pain.

Maybe you disagree, fine, but then you have to use an expression that is at least syntactically correct in the English language, instead of saying:

being aware from a sensation

This is the minimum amount of rigor necessary. It's not the most rigorous you can get (that would be using a system of formal logic), but it is the minimum.

***

Another example, more important to your overall argument, though the mistake here is less subtle:

It is a computational representation of a location, rather than being the location itself

First, very importantly: what is "It", the subject of this sentence? Try to define "It" and you'll see that either the problem vanishes or the sentence no longer makes sense. One way you can see this is by examining the phrase:

"being the location itself"

A {location} can't {be}, not in the sense you are using {be} as {conscious as the}.

***

Etc. These sorts of mistakes are present throughout this paragraph and neighboring ones, and I think they go unnoticed because it's usually acceptable to break a few syntactic rules in order to be more poetic or faster in what you're communicating. But in this case, by breaking the rules of syntax, you end up subtly saying things that make no sense and can make no sense no matter how much you'd try to make them so. Hence why I'm trying to encourage you to be more explicit.

First, just try putting the whole text into a basic syntax checker (e.g. Grammarly) and making it syntactically correct; I'm fairly sure you will be enlightened by this exercise.

***

I'd speculate that the generator of these mistakes is the fact that you are subtly shifting in your thinking between a perspective that says "An external world exists in a metaphysical way, completely separated from my brain" and one that says "Everything in the external world, including my body, is an appearance in consciousness". And while both of these views are valid on their own, using both viewpoints in a unified argument ignores a few "hard problems of {X}".

But maybe I'm mistaken that this is what you are doing; I can't see inside your mind. However, I am fairly certain that simply trying to be syntactically correct will show you that whatever you are trying to express makes no sense in our language. And if you try to go deeper, blame the language, and abstract it with a system of formal logic... then you will either run into an inconsistency or become the most famous neuroscientist (heck, scientist) in the history of mankind.

Comment by George (George3d6) on A non-mystical explanation of "no-self" (three characteristics series) · 2020-12-31T16:23:27.131Z · LW · GW

That makes more sense if I use the term "phenomenological frameworks"

Comment by George (George3d6) on A non-mystical explanation of "no-self" (three characteristics series) · 2020-12-31T12:30:33.706Z · LW · GW

I'm not sure how to make it more clear, I can suggest rereading your own words again and trying to see if you can spot any inconsistency.

Comment by George (George3d6) on A non-mystical explanation of "no-self" (three characteristics series) · 2020-12-31T08:08:54.461Z · LW · GW

I haven't read the article fully, but I'm familiar with the general ideas presented thus far. One of the most philosophically naive is one I've also heard in various forms from Loch Kelly, Sam Harris and Douglas Harding:

looking at the world from behind their eyes; but they are actually aware of the sensation, as opposed to being aware from it. It is a computational representation of a location, rather than being the location itself. Still, once this representation is fed into other subsystems in the brain, those subsystems will treat the tagged location as the one that they are “looking at the sense data from”, as if they had been fed a physical map of their surroundings with their current location marked.

Here you're basically switching phenomenological frameworks to perform a magic trick.

Either "the world" exists, truly exists outside my brain and then there is "something looking out at the world" and that representation is correct.

OR

"The world" is just "inside my brain", but that world then includes the physical representation of my body, which is part of it, and that physical representation is still "outside and looking out at the world".

Both these viewpoints can be correct simultaneously; they are different perspectives into which you can collapse the concept of "the outside world".

But shifting between the two without taking into account that a shift between two irreconcilable perspectives has happened is missing a VERY important point.

I think that, in having to reconcile those perspectives, a lot of truth can be found, though it may be truth that doesn't exactly confirm a Buddhist worldview.