When to assume neural networks can solve a problem 2020-03-27T17:52:45.208Z · score: 13 (4 votes)
SARS-CoV-2, 19 times less likely to infect people under 15 2020-03-24T18:10:58.113Z · score: 2 (4 votes)
The questions one needs not address 2020-03-21T19:51:01.764Z · score: 15 (9 votes)
Does donating to EA make sense in light of the mere addition paradox ? 2020-02-19T14:14:51.569Z · score: 6 (3 votes)
How to actually switch to an artificial body – Gradual remapping 2020-02-18T13:19:07.076Z · score: 9 (5 votes)
Why Science is slowing down, Universities and Maslow's hierarchy of needs 2020-02-15T20:39:36.559Z · score: 19 (16 votes)
If Van der Waals was a neural network 2020-01-28T18:38:31.561Z · score: 19 (7 votes)
Neural networks as non-leaky mathematical abstraction 2019-12-19T12:23:17.683Z · score: 17 (7 votes)
George's Shortform 2019-10-25T09:21:21.960Z · score: 3 (1 votes)
Artificial general intelligence is here, and it's useless 2019-10-23T19:01:26.584Z · score: 0 (16 votes)


Comment by george3d6 on When to assume neural networks can solve a problem · 2020-03-30T01:54:01.477Z · score: 1 (1 votes) · LW · GW
Yes, I was in fact. Seeing where this internet argument is going, I think it's best to leave it here.

So, in that case.

If your original chain of logic is:

1. An RL-based algorithm that could play any game could pass the Turing test

2. An algorithm that can pass the Turing test is "AGI complete", thus it is unlikely that (1) will happen soon

And you agree with the statement:

3. An algorithm did pass the Turing test in 2014

You either:

a) Have a contradiction

b) Must have some specific definition of the Turing test under which 3 is untrue (and more generally, no known algorithm can pass the Turing test)

I assume your position here is b and I'd love to hear it.

I'd also love to hear the causal reasoning behind 2. (maybe explained by your definition of the Turing test ?)

If your definitions differ from commonly accepted definitions, and you rely on causal assumptions which are not widely shared, you must at least provide your versions of the definitions and some motivation behind the causality.

Comment by george3d6 on When to assume neural networks can solve a problem · 2020-03-29T19:53:02.522Z · score: 1 (1 votes) · LW · GW
Turing test, which is to say AGI-complete

You are aware chatbots have been "beating" the original Turing test since 2014, right? (And arguably even before)

Also, AGI-complete == fooling 1/3 of human judges in an x-minute conversation via text? Ahm, no, just no.

That statement is meaningless unless you define the Turing test, and it keeps being meaningless even if you do define it: there is literally no definition of "AGI complete". AGI is more of a generic term used to mean "kinda like a human", but it's not very concrete.

On the whole, yes, some games might prove too difficult for RL to beat... but I can't think of any in particular. I think the statement holds for basically any popular competitive game (e.g. one where there are currently cash prizes above $1,000 to be won). I'm sure one could design an adversarial game specifically built to not be beatable by RL while remaining doable by a human... but that's another story.

Comment by george3d6 on When to assume neural networks can solve a problem · 2020-03-28T11:50:59.682Z · score: 1 (1 votes) · LW · GW
Also if you read almost anything on the subject, people will be constantly saying how they don't think superhuman intelligence is inevitable or close

If it's "meaningfully close enough to do something about it", I will take that as being "close". I don't think Bostrom puts a number on it, or at least I don't remember him doing so, but he seems to address a real possibility rather than a hypothetical that is hundreds or thousands of years away.

What do you mean, you've never seen a consistent top-to-bottom reasoning for it? This is not a rhetorical question, I am just not sure what you mean here. If you are accusing e.g. Bostrom of inconsistency, I am pretty sure you are wrong about that.

I mean, I don't see a chain of conclusions that leads to the theory being "correct". Vaniver mentioned below how this is not the correct perspective to adopt, and I agree with that... or I would, assuming the hypothesis were Popperian (i.e. that one could do something to disprove AI being a large risk in the relatively near future).

If you are just saying he hasn't got an argument in premise-conclusion form, well, that seems true but not very relevant or important. I could make one for you if you like.

If you could make such a premise-conclusion case I'd be more than glad to hear it out.

ease of data collection? Cost of computing power? Usefulness of intelligence? -- but all three of these things seem like things that people have argued about at length, not assumed

Well, I have yet to see those arguments.

Also the case for AI safety doesn't depend on these things being probable, only on them being not extremely unlikely.

It depends on being able to put numbers on those probabilities, though; otherwise you are in a Pascal's wager scenario, where any event that is not almost certainly ruled out should be taken into account with an amount of seriousness proportional to its fictive impact.

Comment by george3d6 on When to assume neural networks can solve a problem · 2020-03-28T09:56:18.815Z · score: 1 (1 votes) · LW · GW
moreover I think Stuart Russell is too

Yes, I guess I should have made a clarification about that: I don't think Stuart Russell necessarily diverges much from Bostrom in his views. Rather, his most salient arguments seem not to be very related to that view, so I think his book is a good guide for what I labeled as the second view in the article.

But he certainly tries to uphold both.

However the article was already too long and going into that would have made it even longer.... in hindsight I've decided to just split it into two, but the version here I shall leave as is.

Comment by george3d6 on When to assume neural networks can solve a problem · 2020-03-27T22:37:08.969Z · score: 5 (2 votes) · LW · GW

I will probably be stealing the perspective of the view being disjunctive as a way to look at why it's hard to pin down.

And thus, just like the state of neural networks in 2010 was only weakly informative about what would be possible in 2020, it seems reasonable to expect the state of things in 2020 will be only weakly informative about what will be possible in 2030.

This statement I would partially disagree with.

I think the idea of training on a GPU was coming to the forefront by 2010, as was the idea of CNNs for image recognition (see both in that 2006 paper).

I'd argue it's fairly easy to look at today's landscape and claim that by 2030 the things that are likely to happen include:

  • ML playing any possible game better than humans, assuming a team actually works on that specific game (and maybe even if one doesn't), with human-like inputs and human-like limitations on the granularity of taking inputs and giving outputs.
  • ML achieving, for 3d images and short (e.g. < 5 minute) videos, all the things we can currently do with 2d images.
  • Algorithms being able to write e.g. articles summarizing various knowledge they gather from given sources, and possibly even find relevant sources by searching based on keywords (so you could just say "Write an article about Peru's economic climate in 2028" rather than feed in a bunch of articles about Peru's economy in 2028)... the second part is already doable, but I'm mentioning them together since I assume people will be more impressed with the final product.
  • Algorithms being able to translate from and to almost any language about as well as a human, but still not well enough to translate sources which require a lot of interpretation (e.g. yes for translating a biology paper from English to Hindi or vice versa, no for translating a phenomenology paper from English to Hindi or vice versa).
  • Controlling mechanical systems (e.g. robotic arms) via networks trained using RL.
  • Generally speaking, algorithms being used in areas where they already out-perform humans but where regulations and systematic inefficiencies, combined with issues of stake, don't currently allow them to be used (e.g. accounting, risk analysis, setting insurance policies, diagnosis, treatment planning). Algorithms being jointly used to help in various scientific fields by replacing the need for humans to use classical statistics and/or manually fit equations in order to model certain processes.

I'd wager points 1 to 4 are basically a given, point 5 is debatable since it depends on human regulators and cultural acceptance for the most part.

I'd also wager that, other than audio processing, there won't be much innovation beyond those points that will create loads of hype by 2030. You might have ensembles of those things building up to something bigger, but they will be at the core of it.

But that's just my intuition, partially based on the kind of heuristics above about what is easily doable and what isn't. But alas, the point of the article was to talk about what's doable in the present, rather than what to expect from the future, so it's not really that related.

Comment by george3d6 on George's Shortform · 2020-03-27T08:53:15.522Z · score: 3 (2 votes) · LW · GW

I find it interesting what kind of beliefs one needs to question and in which ways in order to get people angry/upset/touchy.

Or, to put it in more popular terms, what kind of arguments make you seem like a smart-ass when arguing with someone.

For example, reading Eliezer Yudkowsky's Rationality from AI to Zombies, I found myself generally speaking liking the writing style, and to a large extent the book was just reinforcing the biases I already had. Other than some of his poorly thought out metaphysics, on which he bases his ethics argument... I honestly can't think of a single thing from that book I disagree with. Same goes for Inadequate Equilibria.

Yet, I can remember a certain feeling popping up in my head fairly often when reading it, one that can be best described in an image:


One seeming pattern for this is something like:

  • Arguing about a specific belief
  • Going a level down and challenging a pillar of the opponent's belief that was not being considered as part of the discussion.

E.g.: "Arguing about whether or not climate change is a threat, then going one level down and arguing that there's not enough proof climate change is happening to begin with."

You can make this pattern even more annoying by doing something like:

  • Arguing about a specific belief
  • Going a level down and challenging a pillar of the opponent's belief that was not being considered as part of the discussion.
  • Not entertaining an opposite argument about one of your own pillars being shaky.

E.g.: After the previous climate change argument, not entertaining the idea that "Maybe acting upon climate change as if it were real and as if it were a threat would actually result in positive consequences even if those two things were untrue."

You can make this pattern even more annoying by doing something like:

  • Arguing about a specific belief
  • Going a level down and challenging a pillar of the opponent's belief that was not being considered as part of the discussion.
  • Doing so with some evidence that the other party is unaware of or cannot understand.

E.g.: After the previous climate change argument, back up your point about climate change not being real by citing various studies that would take hours to fact check and might be out of reach knowledge-wise for either of you.


I think there's other things that come into account.

For example, there are some specific fields which are considered more sacrosanct than others; trying to argue against a standard position in such a field as part of your argument seems to put you much more easily into the "smartass" camp.

For example, arguing against commonly held religious or medical knowledge, seems to be almost impossible, unless you are taking an already-approved side of the debate.

E.g. you can argue ibuprofen against paracetamol as the go-to for the common cold, since there are authoritative claims for each; you can't argue for a third, lesser-backed NSAID, or for using corticosteroids or no treatment instead of NSAIDs.

Other fields such as ethics or physics or computer science seem to be fair game and nobody really minds people trying to argue for an unsanctioned viewpoint.


There's obviously the idea of politics being overall bad, and the more politicized a certain subject is the less you can change people's minds about it.

But to some extent I don't feel like politics really comes into play.

It seems that people are fairly open to having their minds changed about economic policy but not about identity politics... no matter which side of the spectrum you are on. Which seems counterintuitive, since the issue of "should countries have open borders and free healthcare" seems much more deeply embedded in existing political agendas, and of much more import, than "what gender category should transgender people compete in at the Olympics".


One interesting thing that I observed: I've personally been able to annoy a lot of people when talking with them online. However, IRL, in the last 4 years or so (since I actually began explicitly learning how to communicate), I can't think of a single person that I've offended.

Even though I'm more verbose when I talk. Even though the ideas I talk about over coffee are usually much more niche and questionable in their veracity than the ones I write about online.

I wonder if there's some sort of "magic oratory skill" I've come closer to attaining IRL that either can't be attained on the internet or is very different... granted, it's more likely it's the inherent bias of the people I'm usually discussing with.

Comment by george3d6 on The questions one needs not address · 2020-03-23T11:22:21.712Z · score: 1 (1 votes) · LW · GW

Well, not really, since the way they get talked about is essentially searching for a "better" definition or trying to make all definitions coincide.

Even more so, some of the terms allow for definitions, but those definitions themselves run into the same problem. For example, could you try to come up with one or multiple definitions of the meaning of "free will" ? In my experience it either leads to very boring ones (in which case the subject would be moot) or, more likely, to a definition that is just as problematic as "free will" itself.

Comment by george3d6 on The questions one needs not address · 2020-03-22T18:59:44.202Z · score: 3 (1 votes) · LW · GW
We can now say that trying to answer questions like "what is the true nature of god" isn't going to work

I mean, I don't think, and I'm not arguing, that we can do that. I just think that the question in itself is mistakenly formulated, the same way "How do we handle AI risk ?" is a mistaken formulation (see Jau Molstad's answer to the post, which seems to address this).

All that I am claiming is that certain ill-defined questions, on which no progress can be made, exist, and that they can to some extent be easily spotted, because they would make no sense if deconstructed, or if an outside observer were to judge your progress on them.

Celebrating the people who dedicated their lives to building the first steam engine, while mocking people who tried to build perpetual motion machines before conservation of energy was understood, is just pure hindsight

Ahm, I mean, Epicurus and Thales would have had pretty strong intuitions against this, and conservation of energy has been postulated in physics since Isaac Newton, and even before him, when the whole thing wasn't even called "physics".

Nor is there a way to "prove" conservation of energy other than purely philosophically, or in an empirical way by saying: "All our formulas make sense if this is a thing, so let's assume the world works this way, and if there is some part of the world that doesn't we'll get to it when we find it".

Also, trying to build a perpetual motion machine is not working on an unanswerable problem/question of the sort I refer to.

As in, working on one will presumably lead you to build better and better engines, and/or see your failure and give up. There is a "failure state", and there's no obvious way of getting into "metaphysics" from trying to research perpetual motion.

Indeed, "Can we build a perpetual motion machine ?" is a question I see as entirely valid; not worth pursuing, but at worst harm-neutral, and it has proven so over the last 2,000+ years of people trying to answer it.

Comment by george3d6 on George's Shortform · 2020-02-28T13:36:56.910Z · score: 5 (3 votes) · LW · GW

Walking into a new country where people speak very little English reminds me of the dangers of over communication.

Going into a restaurant and saying: "Could I get the turkish coffee and an omelette with a.... croissant, oh, and a glass of water, no ice and, I know this is a bit weird, but I like cinnamon in my turkish coffee, could you add a bit of cinnamon to it ? Oh, actually, could you scratch the omelette and do poached eggs instead"

Is a recipe for failure. At best the waiter looks at you confused, and you can be ashamed of your poor communication skills and start over.

At worst you're getting an omelette, with a cinnamon bun instead of a croissant, two cups of turkish coffee, with some additional poached eggs and a room-temperature bottle of water.

Maybe a far-fetched example, but the point is: the more instructions you give, and the more flourishes you put into your request, the higher the likelihood that the core of the request gets lost.

If you can point at the items on the menu and hold a number of fingers in the air to indicate the quantity, that's an ideal way to order.

But it's curious that this sort of over-communication never happens in, say, Japan. In places where people know very little to no English, they don't mind telling you that what you just said made no sense (or at least they get very visibly embarrassed, more so than their standard over-the-top anxiety, and the fact that it made no sense is instantly obvious to anyone).

It happens in the countries where people kinda-know English and where they consider it rude to admit to not understanding you.

Japanese and Taiwanese clerks, random pedestrians I ask for directions, and servers know about as much English as I know Japanese or Chinese. But we can communicate just fine via grunts, smiles, pointing, shaking of heads and taking out a phone to Google Translate if the interaction is nearing the 30-second mark with no resolution in sight.

The same archetypes in India and Lebanon speak close to fluent English, though; give them 6-12 months in the UK or US plus a penchant for learning and they'd pass for native speakers (I guess it could be argued that many people in India speak 100% perfect English, just in their own dialect, but for the intents and purposes of this post I'm referring to English as UK/US city English).

Yet it's always in the second kind of country where I find my over communicative style fails me. Partially because I'm more inclined to use it, partially because people are less inclined to admit I'm not making any sense.

I'm pretty sure this phenomenon is a very good metaphor for, or instantiation of, a principle that applies in many other situations, especially in expert communication. Or rather, in how expert-layman vs expert-expert vs expert-{almost expert} communication works.

Comment by george3d6 on George's Shortform · 2020-02-28T13:15:47.880Z · score: 1 (1 votes) · LW · GW

This just boils down to "showing off" though. But this makes little sense considering:

a) Both genders engage in bad practices. As in, I'd expect to see mostly men doing CrossFit, but that doesn't hold when you consider there's a pretty even gender split. "Showing off health" in a way that's harmful to health is not evolutionarily adaptive for women (for whom it arguably pays off to live a long time, evolutionarily speaking). This is backed up by other high-risk behaviors being mainly a men's thing.

b) Sports are a very bad way to show off, especially the sports that come with a high risk of injury and permanent degradation when practiced at their current extreme (e.g. weight lifting, climbing, gymnastics, rugby, hockey). The highest-payoff sports I can think of (in terms of social signaling) are football, American football, basketball and baseball... since they are popular, and thus the competition is intense and achieving high rank is rewarding. Other than American football they are all pretty physically safe as far as sports go... when there are risks, they come from other players (e.g. getting a ball to the head), not from over-training or over-performing.

So basically, if it's genetic misfiring then I'd expect to see it misfire almost only in men, and this is untrue.

If it's "rational" behavior (as in, rational from the perspective of our primate ancestor) then I'd expect to see the more dangerous forms of showing off bring the most social gains rather than vice-versa.

Granted, I do think the handicap principle can be partially to blame for "starting" the thing, but I think it continues because of higher-level memes that have little to do with social signaling or genetics.

Comment by george3d6 on George's Shortform · 2020-02-22T22:37:51.836Z · score: 1 (1 votes) · LW · GW

Should discomfort be a requirement for important experiences ?

A while ago I was discussing with a friend, lamenting the fact that there doesn't exist some sort of sublingual DMT with an absorption profile similar to smoking DMT, but without the rancid taste.

(Side note: there are some ways to get sublingual DMT, but you probably won't find it for sale at your local drug dealer, and effects will differ a lot from smoking. In most experiences I've read about, I'm not even convinced that the people are experiencing sublingual absorption rather than just slowly swallowing DMT with MAOIs and seeing the effects that way.)

My point was something along the lines of:

I wish there was a way to get high on DMT without going through the unpleasant experience of smoking it. I'm pretty sure that experience serves to "prime" your mind to some extent and leads to a worse trip.

My friend's point was:

We are talking about one of the most reality-shattering experiences ever possible to a human brain that doesn't involve death or permanent damage, surely having a small cost of entry for that in terms of the unpleasant taste is actually a desirable side-effect.

I kind of ended up agreeing with my friend, and I think most people would find that viewpoint appealing.


You could make the same argument for something like knee surgery (or any life-changing surgery, which is most of them).

You are electing to do something that will alter your life forever and will result in you experiencing severe side-effects for years to come... but the step between "decide to do it" and "bear major consequences" has zero discomfort associated with it.

That's not to say knee surgery is bad; much like a DMT trip, I have a strong prior that it's good for people (well, in this case assuming a doctor recommends you do it).

But I do find it a bit strange that this is the case with most surgery, even if it's life altering, when I think of it in light of the DMT example.


If you've visited South Korea and seen the progressive nose mutilation going on in their society (I'm pretty sure this has a fancier name... some term used in the study of super-stimuli; seagulls sitting on gigantic painted balls kind of thing), I'm pretty sure the surgery example can become blurrier.

As in, I think it's pretty easy to argue people are doing a lot of unnecessary plastic surgery, and I'm pretty sure some cost of entry (e.g. you must feel mild discomfort for 3 hours to get this done, equivalent to, say, getting a tattoo on your arm) would reduce that number a lot, and intuitively that seems like a good thing.

It's not like you could do that, though; in practice you can't really do "anesthesia with a controlled pain level", it's either zero or operating within a huge error range (see people's subjective reports of pain after dental anesthesia with similar quantities of lidocaine).

Comment by george3d6 on George's Shortform · 2020-02-22T21:09:10.688Z · score: 1 (1 votes) · LW · GW

Hmh, I actually did not think of that one all-important bit. Yep, what I described as a "meta model of Dave's mind" is indeed a "meta model of human minds", or at least a "meta model of American minds", into which I plugged some Dave-specific observations.

I'll have to re-work this at some point with this in mind, unless there's already something much better on the subject out there.

But again, I'll excuse this with having been so tired when I wrote this that I didn't even remember I did until your comment reminded me about it.

Comment by george3d6 on George's Shortform · 2020-02-21T02:13:54.516Z · score: 5 (3 votes) · LW · GW

90% certainty that this is BS because I'm waiting for a flight and I'm sleep deprived, but:

For most people there's not a very clear way or incentive to have a meta model of themselves in a certain situation.

By meta model, I mean one that is modeling "high level generators of action".

So, say that I know Dave:

  • Likes peanut-butter-jelly on thin crackers
  • Dislikes peanut-butter-jelly in sandwiches
  • Likes butter fingers candy

A completely non-meta model of Dave would be:

  • If I give Dave a butter fingers candy box as a gift, he will enjoy it

Another non-meta model of Dave would be:

  • If I give Dave a box of Reese's as a gift, he will enjoy it, since I think they are kind of a combination between peanut-butter-jelly and butter fingers

A meta model of Dave would be:

  • Based on the 3 items above, I can deduce Dave likes things which are sweet, fatty, smooth with a touch of bitterness (let's assume peanut butter has some bitterness to it) and crunchy, but he doesn't like them being too starchy (hence why he dislikes sandwiches).
  • So, if I give Dave a cup of sweet milk ice cream with bits of crunchy dark chocolate on top as a gift, he will love it.

Now, I'm not saying this meta-model is a good one (and Dave is imaginary, so we'll never know). But my point is, it seems highly useful for us to have very good meta-models of other people, since that's how we can predict their actions in extreme situations, surprise them, impress them, make them laugh... etc

On the other hand, we don't need to construct meta-models of ourselves, because we can just query our "high level generators of action" directly, we can think "Does a cup of milk ice cream with crunchy dark chocolate on top sound tasty ?" and our high level generators of action will strive to give us an estimate which will usually seem "good enough to us".

So in some ways, it's easier for us to get meta-models of other people out of simple necessity, and we might have better meta-models of other people than we have of ourselves... not because we couldn't construct a better one, but because there's no need for it. Or at least, based on the fallacy of knowing your own mind, there's no need for it.
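The meta-model idea above can be sketched as a tiny feature-scoring model: infer which features drive someone's preferences from a few observed likes/dislikes, then score a new, unseen item. Everything here is a hypothetical illustration for the sake of the example (the feature names, the crude +1/-1 weighting, the items); it's not a claim about how such inference should actually be done.

```python
# Minimal sketch: learn feature weights from (features, liked) observations,
# then predict whether a new item will be liked.

def learn_feature_weights(observations):
    """observations: list of (features, liked) pairs, where features is a set
    of strings. Returns a dict mapping each feature to a score: positive
    scores mean the feature co-occurs with liked items."""
    weights = {}
    for features, liked in observations:
        for f in features:
            weights[f] = weights.get(f, 0) + (1 if liked else -1)
    return weights

def predict_likes(weights, features):
    """Predict a like if the item's summed feature scores are positive."""
    return sum(weights.get(f, 0) for f in features) > 0

# The three Dave observations from above, encoded as hypothetical features:
dave = [
    ({"sweet", "fatty", "crunchy"}, True),   # PB&J on thin crackers
    ({"sweet", "fatty", "starchy"}, False),  # PB&J sandwich
    ({"sweet", "fatty", "crunchy"}, True),   # butter fingers candy
]
weights = learn_feature_weights(dave)

# The proposed gift: sweet milk ice cream with crunchy dark chocolate on top.
print(predict_likes(weights, {"sweet", "fatty", "crunchy"}))  # True
print(predict_likes(weights, {"starchy"}))                    # False
```

The non-meta model in this framing would be a lookup table of already-seen items; the meta-model generalizes to items Dave has never been given, which is exactly why it's the useful one for surprising or impressing people.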

Comment by george3d6 on George's Shortform · 2020-02-20T02:19:19.057Z · score: 7 (4 votes) · LW · GW

Physical performance is one thing that isn't really "needed" in any sense of the word for most people.

For most people, the need for physical activity seems to boil down to the fact that you just feel better, live longer and overall get less health related issues if you do it.

But on the whole, I've seen very little proof that excelling in physical activity can help you with anything (other than being a professional athlete or trainer, that is). Indeed, it seems that the whole relation to mortality basically breaks down if you look at top performers: going from things like strongman competitions and American football, where life expectancy is lower, to things like running and cycling, where some would argue the same but evidence is lacking, to football and tennis, where it's a bit above average.

If the subject interests you, I've personally looked into it a lot, and I think this is the definitive review:

But it's basically a bloody book, I personally haven't read all of it, but I often go back to it for references.

Also, there's the much more obvious problem with pushing yourself to the limits: injury. I think this is hard to quantify and there are few studies looking at it. In my experience I know a surprising number of "active" people who got injured in life-altering ways from things like skating, skiing, snowboarding and even football (not in the paraplegic sense, more in the "I have a bar of titanium going through my spine and I can't lift more than 15kg safely" sort of way). Conversely, 100% of my couch-dwelling buddies in average physical shape don't seem to suffer from any chronic pain.

To some extent, this annoys me, though I wonder if poor studies and anecdotal evidence is enough to warrant that annoyance.

For example, I frequent a climbing gym. Now, if you look at climbing, it's relatively safe; the two things people complain about most are sciatica and "climber's back" (basically a very weird-looking but not that harmful form of kyphosis).

I honestly found the idea rather weird... since one of the main reasons I climb (besides the fact that it's fun) is that it helps and helped me correct my kyphosis, and basically got rid of any back/neck discomfort I felt from sitting too much at a computer.

I think this boils down to how people climb, especially how they do bouldering.

A reference for what the extreme kind of bouldering looks like:

The two issues I see here are:

  • Hurling limbs at tremendous speeds to try and grab onto something tiny.
  • Falling on the mat, often and from large heights. Climbing goes two ways, up and down; most people doing bouldering only care about up.

Indeed, a typical bouldering run might look something like: "Climb carefully and skillfully as much as possible, hurl yourself with the last bit of effort you have hoping you reach the top, fall on the mat, rinse and repeat."

This is probably one of the stupidest things I've seen from a health perspective. You're essentially praying for joint damage, a dislocated shoulder or knee, a torn muscle (doesn't look pretty, I assume doesn't feel nice, recovery times are long and sometimes fully recovering is a matter of years) and spine damage (orthopedists don't agree on much, but I think all would agree the worst thing you can do for your spine is fall from a considerable height... repeatedly, like, dozens of times every day).

But the thing is, you can pretty much do bouldering without this; as in, you can be "decent" at it without doing any of this. Personally I approach bouldering as slowly and steadily climbing... to the top, with enough energy to also climb down, plus climbing down whenever I feel that I'm too exhausted to continue. Somehow, this approach to the sport is the one that gets you strange looks. The people pushing themselves above their limits, risking injury and getting persistent spine damage from falling... are the standard.

Another thing I enjoy is weight lifting; I especially enjoy weighted squats. Weighted squats are fun, they wake you up in the morning, and they are a lazy person's exercise for when you've got nothing else on during the day.

I've heard people claim you can get lower back pain and injury from weighted squats; again, this seems confusing to me. I actually used to have minor lower back pain on occasion (again, from sitting), and the one exercise that seems to have permanently fixed that is the squat. A squat is what I do when I feel that my back is a bit stiff and I need some help.

But I think, again, this is because I am "getting squats wrong", my approach to a squat is "Let me load a 5kg ergonomic bar with 25kg, do a squat like 8 times, check my posture on the last 2, if I'm able to hold it and don't feel tired, do 5-10 more, if I still feel nice and energetic after a 1 minute break, rinse and repeat".

But the correct squat, I believe, looks something like this:

Loading a bar with a few hundred kg, at least 2.5x your body weight, putting on a belt so that your intestines don't fall out and lowering it "ONCE", because fuck me you're not going to be able to do that twice in a day. You should at least get a nosebleed every 2 or 3 tries if you're doing this stuff correctly.

I've seen this in gyms, I've seen this in what people recommend, if I google "how much weight should I squat", the first thing I get is:

If you weigh 165 pounds and have one of the following fitness levels, the standard for your squat one-rep max is:
Untrained: 110 pounds
Novice: 205 pounds
... etc

To say this seems insane is an understatement. Basically, the advice around the internet seems to be "If you've never done this before, aim for 40-60kg; if you've been to the gym a few times, go for 100+"
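For reference, a quick conversion of those quoted pound figures to kilograms, as a sanity check on the paraphrase above:

```python
# Converting the quoted one-rep-max "standards" from pounds to kilograms.
# Figures are the ones quoted above; the conversion factor is the exact one.
LB_TO_KG = 0.45359237

for label, lb in [("Untrained", 110), ("Novice", 205)]:
    print(f"{label}: {lb} lb ~ {lb * LB_TO_KG:.0f} kg")
# Untrained: 110 lb ~ 50 kg
# Novice: 205 lb ~ 93 kg
```

So the quoted standards do land roughly on the 40-60kg / 100kg figures.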

Again, it's hard to find data on this, but as someone that's pretty bloody tall who has been using weights to train for years, the idea of starting with 50kg for a squat as an average person seems insane. I do 45kg from time to time to change things up, but I'd never squat anything over 70kg even if you paid me... I can feel my body during the move, I can feel the tentative pressure on my lower back if my posture slips for a bit... that's fine if you're lifting 30kg, but it seems dangerous as heck if you're lifting more than your body weight; it even feels dangerous at 60kg.

But again, I'm not doing squats correctly, I am in the wrong here as far as people doing weight training are concerned.

I'm also wrong when it comes to every sport. I'm a bad runner because I give up once my lungs have been burning for 5 minutes straight. I'm a horrible swimmer because I alternate styles and stick with low-speed ones that are overall better for toning all muscles and carry less risk of injury... etc.

Granted, I don't think that people are too pushy about going to extremes. The few times people tell me some version of "try harder", phrased as a friendly encouragement, I finish what I'm doing, say thanks, and lie to them that I have a slight injury and I'd rather not push it.

But deep inside I have a very strong suspicion that I'm not wrong on this thing. That somehow we've got ourselves into a very unhealthy memetic loop around sports, where pushing yourself is seen as the natural thing to do, as the thing you should be doing every day.

A very dangerous memetic loop: dangerous to some extent in that it causes injury, but much more dangerous because it might be discouraging people from sports. Either they try once, get an injury and quit; or they see it, think it's too hard (and, the way most people do it, I think it is) and never really bother.

I'm honestly not sure why it might have started...

The obvious reason is that it physically feels good to do it; lifting a lot or running more than your body tells you that you should is "nice". But it's nice in the same way that smoking a tiny bit of heroin before going about your day is nice (as in, quite literally, it seems to me the feelings are related and I think there's some pharmacological evidence to back that up). It's nice to do it once to see how it is, maybe I'll do it every few months if I get the occasion and I feel I need a mental boost... but I wouldn't necessarily advise it or structure my life around it.

The other obvious reason is that it's a status thing, the whole "I can do this thing better than you, thus my rank in the hierarchy is higher". But then... why is it so common with both genders? I'd see some reason for men to do this, because historically we've been doing it, but women competing in sports is a recent thing, hardly "built into our nature", and most of the people I know that practice things like climbing are among the most chilled-out people I've ever met.

The last reason might be that it's about breaking a psychological barrier, the "Oh, I totally thought I couldn't do that, but apparently I can". But it seems to me like a very, very bad way of doing that. I can think of many safer, better ways, from solving a hard calculus problem to learning a foreign language in a month to forcing yourself to write an article every day... you know, things that have zero risk of paralysis and long-term damage involved.

But I think at this point imitation alone is enough to keep it going.

The "real" reason, if I take the outside view, is probably that that's how sports are supposed to be done and I just got stuck with a weird perspective because "I play things safe".

Comment by george3d6 on Does donating to EA make sense in light of the mere addition paradox ? · 2020-02-19T23:23:51.197Z · score: 1 (1 votes) · LW · GW
To the extent that you're pursuing topics that EA organizations are also pursuing, you should probably donate to their recommended charities rather than trying to do it yourself or going through less-measured charities.

Well yes, this is basically the crux of my question.

As in, I obviously agree with the E and I tend to agree with the A, but my issue is with how the A seems to be defined in EA (as in, mainly around improving the lives of people that you will never interact with or 'care' about on a personal level).

So I agree with: I should donate to some of my favorite writers/video-makers that are less popular and thus might be kept in business by 20$ monthly on Patreon if another hundred people think like me (efficient, as opposed to, say, donating to an org that helps all artists or donating to well-off creators).

I also agree with: It's efficient to save a life halfway across the globe for x,000$ as opposed to one in the EU where it would cost x00,000$ to achieve a similar addition in healthy life years.

Where I don't understand how the intuition really works is "Why is it better to save the life of a person you will never know/meet than to help 20 artists that you love" (or some such equivalence).

As in, I get that there's some intuition about it being "better", and I agree that it might be strong enough in some people that it's just "obvious", but my thinking was that there might be some sort of better ethics-rooted argument for it.

Comment by george3d6 on Does donating to EA make sense in light of the mere addition paradox ? · 2020-02-19T18:58:20.530Z · score: 4 (2 votes) · LW · GW

No worries, I wasn't assuming you were a speaker for the EA community here, I just wanted to better understand possible motivations for donating to EA given my current perspective on ethics. I think the answer you gave outlines one such line of reasoning quite well.

Comment by george3d6 on Does donating to EA make sense in light of the mere addition paradox ? · 2020-02-19T17:00:44.172Z · score: 3 (1 votes) · LW · GW
Utilitarianism is not the only system that becomes problematic if you try to formalize it enough; the problem is that there is no comprehensive moral system that wouldn't either run into paradoxical answers, or be so vague that you'd need to fill in the missing gaps with intuition anyway.

Agreed, I wasn't trying to imply otherwise.

Any decision that you make, ultimately comes down to your intuition (that is: decision-weighting systems that make use of information in your consciousness but which are not themselves consciously accessible) favoring one decision or the other. You can try to formulate explicit principles (such as utilitarianism) which explain the principles behind those intuitions, but those explicit principles are always going to only capture a part of the story, because the full decision criteria are too complex to describe.

Also agree, as in, this is how I usually formulate my moral decision and it's basically a pragmatic view on ethics, which is one I generally agree with.

is just "the kinds where donating to EA charities makes more intuitive sense than not donating"; often people describe these kinds of moral intuitions as "utilitarian", but few people would actually endorse all of the conclusions of purely utilitarian reasoning.

So basically, the idea here is that it actually makes intuitive moral sense for most EA donors to donate to EA causes ? As in, it might be that they partially justify it with one moral system or another, but at the end of the day it seems "intuitively right" to them to do so.

Comment by george3d6 on How to actually switch to an artificial body – Gradual remapping · 2020-02-19T12:56:54.469Z · score: 3 (3 votes) · LW · GW
This fear of continuity breaks is also why I would probably stay clear of any teleporters and the like in the future.

In case you haven't read it:

But overall I agree, this "feeling" is partially the reason why I'm a fan of the insert slightly-invasive mechanical components + outsource to external device strategy. As in, I do believe it's the most practical since it seems to be roughly doable with non-singularity levels of technology, but it's also the one where no continuation errors can easily happen.

Comment by george3d6 on Why Science is slowing down, Universities and Maslow's hierarchy of needs · 2020-02-18T18:44:24.361Z · score: 3 (1 votes) · LW · GW

What exactly do you mean by "the factors I listed" though ?

As in, I think that my basic argument goes:

"There's reason to think most kids would feel unsafe in a college environment, and desire a social circle and job security, not the kind of transcendent self-actualization style goals that fuel research". I think this generally holds for anyone aged 18-22 outside of outliers, hence why I cited the pyramid of needs, because the research behind that basically points to us needing different things in an age-correlated way (few teenagers feel like they need self-actualization). I think this is somewhat exaggerated in the US because of debt & distance but should be noticeable everywhere.

Next, there's reason to believe research inside universities is slowing down in certain areas. I have no reason to believe the lack of people desiring self-actualization is the cause of this though, except for a gut feeling that self-actualization is a better motivation to research nature than, say, wanting your paycheck at the end of the day. Most famous researchers seem to have been slightly crazy, driven not by societal goals but rather by an inner wish to "set things right" in one way or another, or to leave a mark on the world.

So basically, the best I can do to "prove" any of this would be something like:

  • Take some sort of comparative research-output metric; these are hard to find, and are going to be heavily confounded with country wealth (some examples: ... "small socialist countries" produce a surprising amount of research per capita, but maybe that's inherent to being a small rich country, not to having stronger communities and social support).
  • See if this correlates with % of the population working, quality of social security, some index measuring security, some index measuring happiness. Assume more research will come out of countries that perform well on these.

This will generally be true in terms of research, publications, books... etc (see Switzerland, the Netherlands, Sweden, Norway, Iceland... which seem to have a disproportionate number of e.g. Nature publications in proportion to their population), but you will also get outliers (see Israel, which produced a lot of research even decades back when professors & students would be called up on a yearly basis to fight to the death against an outnumbering enemy that wanted to murder them).

However, you can't really draw conclusions from numbers of publications, and things such as a "security index" and "happiness index" and even "quality of social security" are very hard to measure. Plus, they are confounded by the wealth of the country.

On the other hand, there's good data behind the idea that research is slowing down overall, and that is much easier to place on "universities as a whole", since by all metrics it seems that research is heavily correlated with academia (see where most researchers work, where the people that get Nobel Prizes work... etc).

So making the general assumption "research is slowing down" is much easier than doing the correlation on a per-country basis.

If you can claim there is a valid way to measure basic needs that has a per-country statistic, and a valid way to measure "research output" on a per-country basis... then I'd be very curious to see that; I can even run an analysis based on various standard methods to see if there's a correlation.

So the generic claim "kids are not researchers and don't want to be researchers; universities can't do multiple things at once better than doing one thing; thus if universities have to take care of kids they will have less time to focus on actual research" is easy to look at holistically, but harder to look at on a per-country basis.

Impossible ? I don't think so

Worthwhile ? I don't know. As in, this whole article is closer to "here's an interesting perspective, say, one that might warrant thinking about, when doing research" rather than "here's a factual claim about how stuff works". To make it any better, it would have to be elevated to a factual claim, but then I would basically have to trust the kind of analysis mentioned above (which again, I think would be impossible to run and get significant results since all the metrics I can think of are very leaky).

Honestly, it might have been a better perspective from which to approach this topic; I might even try to see if there's relevant data on the subject and update the article if there is. Barring that, I literally don't see how this sort of hunch + basic evidence about generic human psychology + observing-a-trend opinion piece differs from anything here. Maybe I've been misjudging the epistemic strength of the claims made in articles around here... in which case, ahm... "sorry?", but also, I don't really see your argument here.

Yes, assuming magical data fell out of the sky or our time to gather data was infinite, every single piece of human thought could be improved, but I'm not sure why the stopping condition for this article would be "analysis comparing countries"... as opposed to any other random goalpost.

Comment by george3d6 on Why Science is slowing down, Universities and Maslow's hierarchy of needs · 2020-02-18T14:12:51.066Z · score: 3 (1 votes) · LW · GW
To the extend that you are interested in knowing whether your thesis is true, it would make sense to check.

How would I specifically go about checking this though? As in, I do have data and knowledge on US and UK universities; I don't have data on German universities.

If you have data on German university research output, then I think it's worth looking at, if not, I feel like what you're basically doing is saying: "Hey, you don't have data on this specific thing, it might go either way, your hypothesis is null and void".

And even provided data on German universities existed, why not then ask for data about every single country with universities?

You could argue "Well, you should become an expert in the field and have all possible data handy before making any claims", but then that claim would invalidate literally every single original thought on LessWrong that uses facts and even most academic papers.

Also, German Universities constitute a pretty bad example in my opinion, as in:

a) Murdering, exiling or rooting out your highest-IQ demographic and most public intellectuals

b) Having the rest taken away by the US, Russia and UK

c) Living for decades in a country that's been morally, geographically and culturally divided and ravaged by WW2 (plus 1/3 of it living under a brutal~ish communist dictatorship)

Would make for a pretty weird outlier in all of this no matter what.

As in, if we were to compare other rich academic systems, I'd rather look at Japan, Italy, France, Spain or Switzerland.

Comment by george3d6 on Why Science is slowing down, Universities and Maslow's hierarchy of needs · 2020-02-18T13:34:46.008Z · score: 3 (1 votes) · LW · GW
It seems that your comment tries to take it apart by looking at whether you like the way the system is designed and not by looking at effects of it. That means instead of trying to see whether what you are seeing is true, you expand on your ideas of how things should be.

What exactly should my reply contain ?

As in, my argument in the original post is basically:

a) Universities evolved to provide for primary needs (safety and a social circle) instead of the more niche need for self-actualization

b) Research is slowing down overall; it could partially be because universities no longer focus on self-actualization and instead focus on providing safety and a social circle.

What I was basically saying is that I'm not sure if (a) applies to German universities; as in, I agree that they are probably less incentivized to focus on providing safety and a social circle.

I have no idea if (b) applies or not; as in, I'm not sure how well German universities have been doing, and it's hard to measure their progress since the 30s and 40s obviously had a pretty huge negative effect on the whole higher education system.

I do overall think the example of German universities specifically (and Austrian ones, to some extent) is a good counter to my ideas here, because there are so many of them, many of them are specifically vocation-focused, and they give a place to go for people that just want security rather than a place in academia.

But also, my knowledge of the German education system is so poor overall, that I can't really make very specific claims here.

Comment by george3d6 on Why Science is slowing down, Universities and Maslow's hierarchy of needs · 2020-02-18T06:26:37.579Z · score: 1 (1 votes) · LW · GW

I think German upper education is hardest to pick on, partially because:

a) A small % of the population attends higher education relative to other countries at the income level:

b) From my knowledge a lot of what is called "tertiary education" in Germany is basically just a practical 1-2 year professional course that people can get before they're even through grade 12

c) Anyone living in big cities does indeed experience less environmental change when attending university, though I wouldn't call it close to zero, unless you happen to live next to the university and unless a lot of your high school friends attend the same university (though again, it could be argued that you get to keep your friend group, since they also live in <insert big city>)

d) There's no student debt attached to it, but as I mentioned for European institutions in general, the debt is economically analogous to the higher taxes one has to pay. Though indeed the "mental" effects of having that debt are non-existent (maybe partially analogous in that it makes "blue collar" professions seem less appealing? since people end up paying > 50% of their paycheck and thus might value comfort over money more, and university is basically 4 years of comfort that promises future comfortable jobs, whereas in the American model one could work a trade job starting at 16-18 and easily retire at 40... but I think that's stretching it, I doubt most students are even aware that taxes are a thing)

Comment by george3d6 on Why Science is slowing down, Universities and Maslow's hierarchy of needs · 2020-02-16T21:42:44.700Z · score: 1 (1 votes) · LW · GW

The point of my question at the end there is that I would expect any New Improved University Replacement to suffer the same process.

That seems reasonable, I'd assume the same.

As in, if I could think of an implementable solution I'd have tried implementing it.

My point here was to describe the problem from a certain angle, which is easy; I lay no claim to the harder task of prescribing a solution.

Comment by george3d6 on Why Science is slowing down, Universities and Maslow's hierarchy of needs · 2020-02-16T14:49:51.799Z · score: 1 (1 votes) · LW · GW

I mean, I think the basic argument I would have here is:

If universities are optimizing for 5, and we can agree that 5 leads to research and that universities are among the leaders in anything scientific-research related, why is research slowing down? And, respectively, why is so little of the interesting research coming out of universities?

See points 1-2 and arguably 3/4 in the article.

I think there's also some evidence universities didn't optimize for 2&3 until recently, because until recently their appeal was much narrower and focused on the very intelligent and/or very well-off (i.e. people that usually want or even need self-actualization).

Comment by george3d6 on Why Science is slowing down, Universities and Maslow's hierarchy of needs · 2020-02-16T14:28:28.391Z · score: 9 (2 votes) · LW · GW

I was alluding to that.

But at the same time, I'm pretty sure the simpler explanation might apply: people just don't understand why this study would be valuable + IQ is a sensitive topic, thus the material is hard to find.

Hence why I said I will post any studies anyone finds, I have a pretty high prior that a few exist and I'm just not seeing them.

I have a low prior they will show anything other than "University is indeed confounded by IQ and/or IQ + income in money-earning potential", but alas I base that on small-sample empirical evidence... so, eh.

Comment by george3d6 on Why Science is slowing down, Universities and Maslow's hierarchy of needs · 2020-02-16T13:19:39.925Z · score: 3 (2 votes) · LW · GW
Maybe just getting a job will (on average) actually result in learning more valuable things, but frankly I don't see any reason to believe that. (More things valuable for becoming a cog in someone else's industrial machine, maybe, though even that isn't obvious.)

Ok, well I certainly wouldn't argue that a generic alternative exists; I mean, that's my original point, that universities are wasteful via the fact that they steal signal-strength from any alternative that would crop up.

In my personal experience, getting a job is on average better for learning, if you look for jobs that can provide de-facto mentors/teachers, but that might be because so few young people get a job. Or maybe me and the people I know that took my advice and quit university are just very good at learning from other practitioners rather than professors.

Maybe we need different ways of optimizing 18-20-year-olds' lives for learning new and valuable things. I'd be interested to see concrete proposals. An obvious question I hope they'd address: why expect that in practice this will end up better than universities?

Well, my proposal in the article is basically that we had such a system, it was called a university, but it got slowly eroded as it went the way of a safety/community-provision institution (or at least an institution provisioning the illusion of those two).

My argument for why it worked better in the past is points 1-2 and arguably 3 and 4.

Comment by george3d6 on Why Science is slowing down, Universities and Maslow's hierarchy of needs · 2020-02-16T12:54:13.037Z · score: 1 (1 votes) · LW · GW
Students' youth? Even supposing that time spent at university is worthless, it's only a few years per person.

Is that period not important though ?

As in, even assuming universities are not "magical towers that remove 4 years of life to validate IQ > 100 and conscientiousness in the 80th percentile", you could hardly argue what they teach is perfect.

But those "few years" are basically the most critical years of development we have, as in, the brain is developed enough to actually do stuff yet still plastic.

I won't go into myelination, because I'm lazy and finding good references is hard. As far as I know Giedd has a few studies on grey matter changes that everyone cites, but maybe there are better references:

Gist of it is: we lose neuronal bodies as we age, starting around the age of 5. The loss doesn't happen in the prefrontal cortex until we enter our teens and seems to keep happening until we reach 20.

I don't know of any good studies going after 20; there are a lot of meh studies, and if you aggregate them you get this: (see fig2 and fig3). Though note that many of these use secondary markers or rather outdated imaging methods, and basically nobody is doing brain biopsies on living humans... the best you can get is DW-MRI and fMRI and postmortem biopsies (which are probably very biased, because only the very poor or the very educated will be fine with their recently dead child's brain being quickly removed and analyzed for the sake of neuroscience... come to think of it, the other two probably are too, either in the same way or by selecting for people with mental disorders).

This process is roughly associated with pruning, essentially making networks more efficient and/or optimizing for resource consumption. This goes in tandem with myelination:

I.e. if a neuron is not pruned, the likelihood of its various axons being heavily myelinated is increased, and vice versa.

So now, assume that the frontal cortex is indeed "what makes us different from animals", what gives us most of our ability to be intelligent in the do-math-and-write-and-gather-evidence sense.

Assume that people with pruned cortexes are indeed noticeably smarter in everything related to engineering/science (think people over the age of 10 vs people under 10).

Assume that the studies above are indeed true, and take into account the fact that we have empirical cultural evidence (see stereotypes about learning new things as people age, underpayment of older workers in non-tenured thinking-related jobs like programming and accounting, "can't teach an old dog new tricks"... etc).

I think these are all pretty safe assumptions; not true in the sense of scientific truth in physics, but true in the sense of "safe to operate using them as rough guidelines". Or at least, if they don't fit your model, then I also invite you to throw out all of psychology with them.

Youth is indeed very important; the 15-25 (+/- 3) year age range is critical for the development required to be a scientist, engineer, doctor or any other professional where unusual intelligence is required.

So university time might be "only 3 or 4 years per person", though let's be honest, things like med school take 6 to 10 depending on location, and an alarming number of people are putting in an extra 1-3 years getting a master's. But those are 3-10 years of a person's most valuable time in life as far as brain plasticity goes.

<And yes, one could make the same argument about high school, but that would basically be arguing that high schools serve the triple role of counter-biasing aggressive tendencies in people that would otherwise basically be criminals, cultural indoctrination, and learning... and that's a much more taboo argument to make, so I'm not making it>


That's just answering your question though; it's not the point I'm making in the article. The point of the article is that universities basically have a lot of signaling power for "if you are smart and want to self-actualize, this is the place". So if you want to think in terms of scarce resources being wasted, that's the way I'd have put it there: universities are wasting critical signaling mechanisms.

Comment by george3d6 on A Simple Introduction to Neural Networks · 2020-02-11T04:41:18.930Z · score: 3 (2 votes) · LW · GW

I always thought there ought to be a good way to explain neural networks starting at backprop.

Since to some extent the criterion for architecture selection always seems to be whether or not gradients will explode too easily.

As in, I feel like the correct definition of neural networks in the current climate is closer to: "A computation graph structured in such a way that some optimizer/loss function combination will make it adjust its equation towards a seemingly reasonable minimum".

Because in practice that's what seems to define a good architecture design.
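To make the "will gradients explode" criterion concrete, here's a toy sketch (my own illustration, not anything from the parent post): in a deep chain of scalar "layers", the backpropagated gradient is a product over the layers, so its magnitude blows up or dies out exponentially with depth.

```python
def chain_gradient(weight: float, depth: int) -> float:
    """Gradient of y = weight**depth * x with respect to x: a product over
    the chain, mimicking how backprop multiplies per-layer Jacobians."""
    grad = 1.0
    for _ in range(depth):
        grad *= weight
    return grad

print(chain_gradient(1.5, 50))  # |w| > 1: explodes (~6.4e8)
print(chain_gradient(0.5, 50))  # |w| < 1: vanishes (~8.9e-16)
```

Real architectures (residual connections, normalization layers, gating) can be read as tricks for keeping that product near 1.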

Comment by george3d6 on George's Shortform · 2020-02-10T12:16:11.852Z · score: 1 (1 votes) · LW · GW
Actually, this is backwards; by investing in companies that are worth more in worlds you like and worth less in worlds you don't, you're increasing variance

If you treat the "world you dislike" as one where you can still get about the same bang for your buck, yes.

But I think this wouldn't be the case with a lot of good/bad visions of the future pairs.


BELIEF: You believe healthcare will advance past treating symptoms and move into epigenetically correcting the mechanisms that induce tissue degeneration.

a) You invest in this vision, it doesn't come to pass. You die poor~ish and in horrible suffering at 70.

b) You invest in a company that would make money on the downside of this vision (e.g. palliative care focused company). The vision doesn't come to pass. You die rich but still in less horrible but more prolonged suffering at 76 (since you can afford more vacations, better food and better doctors).

c) You invest in this vision, it does come to pass. You have the money to afford the new treatments as soon as they are out on the market; now at 70 you regain most functionality you had at 20 and can expect another 30-40 years of healthy life, and you hope that future developments will extend this.

d) You invest in a company that would make money on the downside of this vision, it does come to pass. You die poor~ish and in horrible suffering at 80 (because you couldn't afford the best treatment), with the added spite for the fact that other people get to live for much longer.


To put it more simply, money has more utility-buying power in "good" world than in "bad" world, assuming the "good" is created by the market (and thus purchasable).
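A toy expected-utility version of the argument (the numbers are entirely made up by me, just to show the shape of it):

```python
# Made-up utilities for the four outcomes (a)-(d) above. The asymmetry encodes
# "money buys more utility in the good world than in the bad one".
def expected_utility(p_vision: float, u_true: float, u_false: float) -> float:
    return p_vision * u_true + (1 - p_vision) * u_false

p = 0.3  # assumed probability the vision comes to pass

# Invest in the vision: outcome (c) if it happens, outcome (a) if not.
bet_on_vision = expected_utility(p, u_true=100, u_false=5)
# Hedge against it: outcome (d) if it happens, outcome (b) if not.
bet_against = expected_utility(p, u_true=2, u_false=20)

print(bet_on_vision, bet_against)
```

With these (made-up) utilities, betting on the vision wins in expectation even at a modest 30% chance, precisely because the payoff in the good world dwarfs anything money buys in the bad one.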

Comment by george3d6 on George's Shortform · 2020-02-09T18:50:52.349Z · score: 1 (1 votes) · LW · GW

I'm wondering if the idea of investing in "good" companies make sense from a purely self-centered perspective.

Assuming there's two types of companies: A and B.

Assume that you think a future in which the vision of "A" comes true is a good future and future in which the vision of "B" comes true is a bad future.

You can think of A as being whatever makes you happy, some examples might be: longevity, symbolic AI, market healthcare, sustainable energy, cheap housing... thing that you are very certain you want in the future and that you are unlikely not to want (again, these are examples, I am NOT saying I think everyone or even a majority of people would agree with this, replace them with whatever company you think is doing what you want to see more of).

You can think of B as being neutral or bad, some examples might be: MIA companies, state-backed rent-seeking companies (e.g. student debt, US health insurance), companies exploiting resources which will become scarce in the long run... etc.

It seems intuitive that if you can find company A1 and company B1 with similar indicators as to their market performance, you would get better yield investing in A1 as opposed to B1. Since in the future scenario where B1 is a good long term investment, the future looks kinda bleak anyway and the money might not matter so much. In the future scenario where A1 is a good long term investment, the future is nice and has <insert whatever you like>, so you have plenty of nice things to do with said money.

Which would seem to give a clear edge to the idea of investing in companies doing things which you consider to be "good", assuming they are indistinguishable from companies doing things which you consider to be "bad" in terms of relevant financial metrics. Since you're basically risking the loss of money you could have in a hypothetical future you wouldn't want to live in anyway, and you are betting said money to maximize your utility in the future you want to live in.

Then again, considering that a lot of "good" companies are, on many metrics, newer and thus riskier and possibly overpriced, I'm not sure how easily this heuristic could be applied with success in the real world.

Comment by George3d6 on [deleted post] 2020-02-08T00:48:25.595Z

See my answer to ChristianKl; my understanding was that high IQ on its own is not a good predictor of equivalent success in any social hierarchy.

That is to say, a high IQ is more likely in people who are successful in societal terms (money, status... etc), but it is not required, nor finely correlated with success (i.e. a billionaire's IQ is not correlated with his ranking relative to other billionaires, and assuming billionaires have some mean IQ "X", there are likely many more people at "X" who are not billionaires, or even successful in any other particular social hierarchy).

However, as per my reply there, I think I don't have the literature to back up the claim, hence why I've retracted the post. I haven't found evidence to the contrary, but since many people seem to disagree with this, I think it'd be fair for you not to trust that stance unless you find some evidence to back it up, or I come up with said evidence at a later point.

Comment by George3d6 on [deleted post] 2020-02-08T00:43:30.715Z

Hmh, fair enough. I would say SAT scores and tabloid speculations are not necessarily evidence, and the very high Ashkenazi IQ + success rate in spite of hostile environments is indeed true.

All things considered, I was talking mainly from memory. I will probably remove/redact this post and maybe try my hand at it again if I manage to dig through the data and if there's indeed a case to be made that high intelligence is correlated with, but not causal of or required for, success on various measurable "social metrics".

Comment by George3d6 on [deleted post] 2020-02-07T20:20:04.703Z
So what would a real comparison of intelligence with something else look like? I think the question "Is intelligence good?" is not that meaningful.
What we can do is ask "is there a way to X given only Y?" For instance, "is there a way to make a fire, given only the ability to contract muscles of a human body in a forest?" or "is there a way to destroy the moon, given only the ability to post 10k characters to" These are totally formalizable questions and could in principle be answered by simulating an exponential number of universes.

I agree with the first statement but not the later.

Unless we can ask "Is something good?", then why would we consider that subject to be important?

Most things that we hold to be of value, we do so because they are almost universally considered good (or because they are used to guard against something that's universally considered bad).

We can certainly ask "Can <manipulation Y of class ABC of T-cell> be <good>?" and we could get a pretty universal "Yes, because that will help us cure this specific type of tumor, and this specific type of tumor, when viewed through the subjective lens of any given animal, is bad".

We can then say there are a wide variety of tasks and goals that humans can fulfill given our primitive action of muscle contraction. Given that chimps have a similar musculature, but less intelligence and can't do most of these tasks, and many of the routes to fulfillment of the goals go through layers of indirection, then it seems that an intelligence comparable to humans with some other output channel would be similarly good at achieving goals.

Again, here I think your analogy suffers from the problem I was trying to tackle: you are taking a human-centric view and assuming that chimps are inferior in the range of actions they can take.

Chimps can do feats of acrobatics that seem fun and impressive, with seemingly little risk and effort involved. Would I love to be able to do that? Would I love it more than being able to, say, not die from cancer thanks to chemotherapy? Or more than being able to drive a car? I don't know... I can certainly see a valid viewpoint in which being able to spend my life swinging through trees in the Congo would be "better" than having cars and chemotherapy and the other 1001 wonders our brains help produce.

Comment by George3d6 on [deleted post] 2020-02-07T20:11:40.133Z
Some of your comparisons make even less sense, like ability to survive in extreme environments. Comparing a fish and an untooled human in ability to survive in the ocean is a straight contest of fish evolution vs human evolution. If the human drowns before they have a chance to think anything, the power of the human brain is not shown in the slightest.

I was not, though; I was comparing humans plus the tools they build using their intelligence to other forms of life.

Comment by George3d6 on [deleted post] 2020-02-07T20:10:34.538Z
Claiming that you can get a dean of Harvard or the chairman of the CCP with an IQ of 100 seems to me a pretty implausible claims and you do nothing to argue why your readers should believe it.

I think you might have taken that too literally; the way I worded the claim is:

Intelligence is not causal for any markers of social status or money, though it is correlated.

I would argue, if you look at their writing & upbringing, that someone like Mao or Stalin was indeed pretty close to the mean of the distribution... but that's besides the point.

It's likely that people in the top 0.0001% of any hierarchy are in the top 1% of intelligent people, but unless that correlation can be brought to 0.0001%-to-0.0001%, there are 3 zeros to account for there.

It's clear based on the intelligence research that the most rich people in the world, for example, are not the highest IQ people in the world.

But I didn't want to go on citing IQ research, partially because IQ doesn't fully reflect what we think of as "intelligence". So the claim "Warren Buffett is not the most intelligent person in the world, because intuitively we see people who seem to be much smarter" doesn't seem to be weaker than the claim "Warren Buffett is not the smartest person in the world because he scored 126 on an IQ test"... one invites a subjective judgement of intelligence, the other invites a subjective judgement of how much IQ reflects intelligence.

Finally, I agree I should have presented clearer evidence if this was meant as an academic article, but it was not; it was meant as a "take a look at this perspective". I couldn't have done that if I'd endeavoured upon a meta-review of IQ research.

If you can cite sources claiming IQ is causal for obtaining status or wealth (as in, more causal than, say, the family or country you were born in), I will retract my claim. All the literature I have read indicates it's correlated, but only up to a point, and it acts more like a filter (i.e. all professors have an IQ of over 100, but a professor's IQ isn't strongly correlated with how many citations he gets, what his salary is, or how many grants or Nobel Prizes he receives)
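The "filter, not cause" claim can be illustrated with a toy simulation: if success requires clearing an IQ bar, but rank among the successful is driven by an independent luck factor, then IQ predicts *whether* you are in the successful group while barely predicting your rank *within* it. The threshold and the luck model below are illustrative assumptions, not calibrated to any real study:

```python
# Toy "IQ as a filter" simulation: success = clearing an IQ bar AND
# being very lucky. Among the successful, IQ barely correlates with
# the luck factor that drives rank. All numbers are illustrative.
import random

random.seed(0)
people = [(random.gauss(100, 15), random.random()) for _ in range(100_000)]

# "Success" = IQ above 115 AND top ~1% of luck.
successful = [(iq, luck) for iq, luck in people if iq > 115 and luck > 0.99]

def corr(xs, ys):
    """Pearson correlation, stdlib only."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

iqs = [iq for iq, _ in successful]
lucks = [luck for _, luck in successful]
print(f"successful: {len(successful)} of 100000, "
      f"IQ-vs-luck correlation among them: {corr(iqs, lucks):.3f}")
```

Every successful person clears the IQ bar, yet within the group the correlation between IQ and the rank-driving factor is close to zero, and most people above the bar are not in the successful group at all.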

Comment by george3d6 on Plausibly, almost every powerful algorithm would be manipulative · 2020-02-06T13:59:40.742Z · score: 1 (1 votes) · LW · GW

I'm not sure what you mean by "learn to call the programmers"? As in, in your analogy this sounds similar to reaching an error state... but algorithms are not optimized to reach an error state or to avoid reaching an error state.

You *could*, if you were selecting from loads of algorithms or running the same one many times, end up selecting algorithms that reach an error state very often (which we already do; one of the main meta-criteria for any ML algorithm is basically to fail/finish fast), but that's not necessarily a bad thing.

Comment by george3d6 on Book Review: Human Compatible · 2020-02-03T10:21:58.976Z · score: 2 (2 votes) · LW · GW
That seems plenty big enough to merit throwing similar ML techniques at designing a pill with no active ingredient but that passed various kinds of basic tests for whether a medication is genuine.

Ahm... the way we test pills that are FDA approved is by feeding them to humans and seeing if they have the desired effect upon disease-related markers based on assays that imperfectly capture those markers.

So this is already happening, I'm afraid: no drugs are designed to cure anything in particular, they are designed to optimize for the markers we can test, which leads us to think they will pass the tests that say they are a cure for a disease.

Drugs that are arguably not very useful or even harmful (e.g. statins) have been designed this way already.

Comment by george3d6 on Book Review: Human Compatible · 2020-02-03T10:16:20.353Z · score: 7 (2 votes) · LW · GW

This book reminds me of a discussion I had with someone recently regarding open-sourcing anonymous medical data. His position, after exchanging a few replies, crystallized to be along the lines of: "The possible downsides of scientific advances based on medical data outweigh the benefits, due to entities like banks and governments using it to better determine credit ratings and not lend to at-risk people"... my reply was along the lines of "Basically every new technological advance that helps us compute more data or understand the human body, psychology or human societies better will help banks discriminate among creditors on more criteria they have no control over, so why not generalize your position to be against all scientific advancement?"... I still haven't gotten a reply.

I think this is a widespread phenomenon, people that are afraid of <insert technology> when what they really mean is that they are afraid of other humans.

Take for example: surveillance, drones, deepfakes, algorithmic bias, job loss to automation, social media algorithms... none of these are AI problems; all of them are human problems.

Surveillance is an issue of policy and democracy; there are active politicians at all levels who would do all they can to ban mass surveillance and make it transparent where it's not banned. Surveillance happens because people want it to happen.

Drones are an issue because people killing other people is an issue, people have been killing other people without drones and runaway killers (e.g. disease, nuclear waste, mines, poisoned areas) have resulted from people killing other people for a long time.

Algorithmic bias reflects the fact that people are biased (and arguably people can't be made un-biased, since as Scott mentions it's impossible to agree on what "un-biased" is)

Deepfakes are an issue because people can use deepfakes to incriminate other people; people have been using photo alteration to incriminate or change the narrative against other people since at least the 30s (see Soviet-era photo-altering to remove/incriminate "enemies of the state").

Job loss to automation is an issue in the same way jobs moving to cheaper countries or jobs disappearing to non-AI automation are: it's an issue in the sense that we don't have a society-wide mechanism for taking care of people who aren't useful in a market economy or in an otherwise market-adapted social circle. This has been an issue since forever, or at least for as long as we've kept historical records of societies, most of which include the idea of beggars and thieves doing it to survive.

Social media algorithms are an issue because people voluntarily, with knowledge of the algorithm and with full knowledge of the results, expose themselves to them. There are hundreds of alternative websites and thousands of alternative systems that don't make use of algorithms designed to stimulate people while providing them no useful information and on the whole making their lives shittier, being based on promoting fear and anger. This has been an issue since... it's arguable: some would say the written word, some would say only since modern fear-mongering journalism came about around the 19th century.

But all these are human problems, not AI problems. AI can be used to empower humans, thus making human-generated problems worse than before, but the same can be said about literally any tool we've built since the dawn of time.

Grg'nar has discovered fire and sharp stone, fire and sharp stone allows Grg'nar to better hunt and prepare animal meat thus making him able to care for and father more descendants.

Fire and sharp stone allow Grg'nar to attract more mates and friends since they like the warmth of Grg'nar's fire and appreciate the protection of Grg'nar's sharp stone.

This gives Grg'nar an unfair advantage over his competition in the not-dying-and-reproducing market, and allows Grg'nar to unfairly discriminate against people he dislikes by harming them with sharp stone and scaring them with fire.

Thus, I propose that fire and sharp stone should be tightly controlled by a regulatory council and if need be fire and sharp stone should be outlawed or severely limited and no further developments towards bigger fire, sharper stone and wheel should be undertaken for now, until we can limit the harmful effects of fire and stone.

You can apply the technology-conservative argument to anything. I'm not saying the technology-conservative argument is bad, I can see it making sense in certain scenarios, though I would say it's hard to apply (see industrial era Japan and China). But masking technology-conservative opinions behind the veil of AI is just silly.

Comment by george3d6 on George's Shortform · 2020-02-02T00:52:16.995Z · score: 5 (3 votes) · LW · GW

This shortform is a bit of a question/suggestion for anyone that might happen to read it.

It seems to me that public discussion has the obvious disadvantage of the added "signaling" one does when speaking to an audience.

I wouldn't accuse the vast majority of LW users I've read articles from or interacted with of this; as in, I think most people go to great lengths not to aim their arguments towards social signaling. But on the other hand, social signaling is so ingrained in the brain that it's almost impossible **not** to do it. I have a high prior on the idea that even when thinking to yourself, "yourself" is partially your closest interpretation of a section of the outside world that you are explaining your actions/ideas to.

However, it seems that there are a lot of things that can reduce your tendency to socially signal; the 4 main ones I've observed are:

  • MDMA
  • Loads of ethanol
  • Anonymity
  • Privacy (i.e. discussion within a small group of people)

The problem with options 1 and 2 is that they are poisonous with frequent exposure, plus the fact that ethanol makes me think about sex and politics, and MDMA makes me think about how I could ever enjoy sex and politics more than I currently enjoy the myriad of tactile sensations I feel when gently caressing this patch of dirt. I assume most people have problems along these same lines with any drug-induced state of open communication.

Anonymity works, but it works in that it showcases just how vile human thoughts are when people are inconsequentially shouting over one another into a void (see 4chan, kiwifarm, the reddit front page... etc).

Privacy seems to work best; I've had many interesting discussions with friends that I could have hardly replicated on the internet. However, I doubt I'm alone in not always having a friend who is knowledgeable/opinionated/interested enough in whatever subject I'd fancy discussing.

So I'd argue it might be worthwhile to try something like an internet discussion-topic forum with an entry barrier, where people can get paired up to discuss two different sides of a topic (with certain restrictions, e.g. no discussing politics, so that it doesn't automatically turn into a cesspool no matter what).

The question would be what the entry barrier should be. I.e., if LW opened such a forum and the entry barrier were just "you must type the URL into your search bar", it might work for a bit, but it would have the potential to degenerate pretty fast (see the anonymity issue).

I could see several solutions to this issue, which one could mix and match, each with their own specific downsides:

  • Use some sort of internet points that denote someone's positive involvement in the community as the entry barrier (e.g. karma on something like LW or reddit)
  • Use a significant but not outrageous amount of money (e.g. $100), held in escrow by a moderator or an algorithm. The money is awarded to the other person if they discuss the topic with you at some length and provide satisfactory arguments, lost in the void (e.g. donated to an EA-picked charity) if this is arguably not the case, or refunded if your counterpart was obviously discussing in bad faith or lacking relevant knowledge.
  • Use some sort of real-life identification, which is not public to anyone but the database and the people you are discussing with, but is used as verification and as a "threat" that vile conduct could be punished by the moderators making said identity public.
  • Use some sort of real-life credentials (e.g. a PhD, proof of compensation received to work in a certain field, endorsements from members of a field almost everyone would consider respectable, a history of work published in relevant journals and/or citation count... etc). This would lend itself well to segmenting the discussion-search forum into different fields of interest.
  • Have the two parties meet IRL, or send physical letters, or some other channel which has a high cost of entry because the means of communication is inefficient and expensive.
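The money-in-escrow option could be sketched as a small settlement routine; the `Outcome` categories and recipient names below are hypothetical readings of the described flow, not a worked-out design:

```python
# Minimal sketch of the escrow entry barrier: the deposit's fate depends
# on how the discussion went. Outcome categories and recipient names are
# illustrative assumptions inferred from the description above.
from enum import Enum

class Outcome(Enum):
    SATISFACTORY = "counterpart discussed at length with good arguments"
    UNSATISFACTORY = "discussion happened, but arguments were lacking"
    BAD_FAITH = "counterpart was in bad faith / lacked relevant knowledge"

def settle(deposit_usd, outcome):
    """Return (recipient, amount) for a deposit held in escrow."""
    if outcome is Outcome.SATISFACTORY:
        return ("counterpart", deposit_usd)   # reward a good interlocutor
    if outcome is Outcome.BAD_FAITH:
        return ("depositor", deposit_usd)     # refund
    return ("ea_charity", deposit_usd)        # "lost in the void"

print(settle(100, Outcome.SATISFACTORY))  # → ('counterpart', 100)
```

The interesting design question, of course, is who or what gets to decide the `Outcome`, which is exactly the moderator/algorithm judgment call described above.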

I'm curious if something similar to this already exists, as in, somewhere one can find a quality of discussion similar to a place like LW, or a private chat with a research colleague, not something like reddit CMV.

Alternatively I wonder what the potential templates and downsides for this kind of environment might be and why one doesn't exist yet.

Comment by george3d6 on High-precision claims may be refuted without being replaced with other high-precision claims · 2020-01-31T17:15:17.932Z · score: 3 (2 votes) · LW · GW

I wholeheartedly agree with this article to the point of being jealous of not having written it myself.

Comment by george3d6 on High-precision claims may be refuted without being replaced with other high-precision claims · 2020-01-31T17:10:32.212Z · score: 8 (2 votes) · LW · GW
Floating point arithmetic in computers is usually not precise, and has many failure modes that are hard to understand even for experts.

Floating point arithmetic might not be precise, but it's imprecise in KNOWN ways.

As in, for a given operation done with a certain set of instructions, you can know that cases X/Y/Z have undefined behavior (e.g. using instruction A to multiply c and d will only give a precise result up to the nth decimal place).

By that same definition, basically every single popular programming language is not precise, since they can manifest UB; but that doesn't stop your kernel from working, since it's written in such a way as to (mainly) avoid any sort of UB.

Pragmatically speaking, I can take any FP computation library and get deterministic results even if I run a program millions of times on different machines.

Heck, even with something like machine learning, where your two main tools are FP operations and randomness, you can do something like (example for torch):

    torch.manual_seed(2)
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False

and you will get the same results by running the same code millions of times over on many different machines.

Even if the library gets itself into a UB situation (e.g. numbers get too large and go to nan), it will reach that UB at precisely the same point each time.

So I think the better way to think of FPA is "defined only in a bounded domain", but the implementations don't bother to enforce those bounds programmatically, since that would take too long. Saying nan is cheaper than checking whether a number is nan each time, kind of thing.
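To make the point concrete, here's a small self-contained sketch (plain Python, no torch needed) showing that FP results are imprecise but fully deterministic; the same operations reproduce the same "errors" on every run of any IEEE 754 double implementation:

```python
# FP imprecision is reproducible: the "wrong" answers are the same
# wrong answers on every run, and overflow/nan states are reached at
# exactly the same step each time.
import math

x = 0.1 + 0.2
assert x != 0.3                          # classic imprecision...
assert x == 0.30000000000000004          # ...but always the *same* imprecision

# FP addition is not associative, yet each fixed evaluation order
# yields one fixed result:
assert (0.1 + 0.2) + 0.3 != 0.1 + (0.2 + 0.3)

# "UB-like" states are reached at exactly the same point each time:
big = 1e308 * 10                         # overflows to inf
nan_val = big - big                      # inf - inf is nan, every single run
assert math.isinf(big) and math.isnan(nan_val)
print("all floating point checks passed deterministically")
```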

Comment by george3d6 on George's Shortform · 2020-01-31T09:03:30.857Z · score: 1 (1 votes) · LW · GW
I disagree with the "you" in this sentence. (It may work as a question. )

As in, with a question mark at the end? That's what I originally intended, I believe, but I ended up thinking the phrasing already conveys the "question-ness" of it.

Comment by george3d6 on George's Shortform · 2020-01-30T21:32:04.895Z · score: 0 (3 votes) · LW · GW

Retracted, as it made allusions to subjects with too much emotional charge behind them.

Comment by george3d6 on If Van der Waals was a neural network · 2020-01-28T21:36:40.047Z · score: 1 (1 votes) · LW · GW

Seems alright to me, thanks for the help.

Comment by george3d6 on George's Shortform · 2020-01-24T22:05:44.194Z · score: 3 (2 votes) · LW · GW

I mean, I'd argue the pro/against global warming meme isn't worth holding either way if you already hold the correct one: "Defer to overwhelming expert consensus in matters where the possible upside seems gigantic and the possible downside irrelevant" (i.e. switching from coal- & oil-based energy to nuclear, hydro, solar, geothermal and wind, which doesn't bring severe downsides but has the obvious upsides of possibly preventing global warming and of having energy sources that are more reliable long-term, don't pollute their surroundings and have better yield per resource spent... not to mention being usable in a more decentralized way and usable in space).

So yeah, I'd argue both the global warming and the against global warming memes are at least pointless, since you are having the wrong f*** debate if you hold them. The debate should center around:

  • Upsides and Downsides of renewable energy (ignoring the potential effect of global warming)
  • How to model the function of faith in expert consensus and what parameters should go into it.
Comment by george3d6 on George's Shortform · 2020-01-23T10:13:33.011Z · score: 1 (1 votes) · LW · GW

I wouldn't say #1 and #2 state the same thing, since #1 basically says "If a meme is new, look for proof of benefits or lack thereof" and #2 says "If a meme is old, look for proof of harm or lack thereof".

I could combine them into "The newer a wide-spread meme is, the more obvious its benefits should be", but I don't think your summary does justice to those two statements.

Comment by george3d6 on George's Shortform · 2020-01-21T02:13:14.722Z · score: 2 (2 votes) · LW · GW

I wonder why people don't protect themselves from memes more. Just to be clear, I mean meme in the broad memetic theory of spreading ideas/thoughts sense.

I think there's almost an intuitive understanding, or at least one existed in the environment I was brought up in, that some ideas are virulent and useless. I think that from this it's rather easy to conclude that those ideas are harmful, since you only have space for so many ideas, so holding useless ideas is harmful in the sense that it eats away at a valuable resource (your mind).

I think modern viral ideas also tend more and more towards the toxic side, toxic in the very literal sense of being "designed to invoke a rise in cortisol and/or dopamine that makes them more engaging, yet arguably provably harmful to the human body". Though I think this is a point I don't trust that much; speculation at best.

It's rather hard to figure out which memes one should protect oneself from under these conditions; some good heuristics I've come up with are:

  • 1. Memes that are new and seem to be embedded in the minds of many people, yet don't seem to increase their performance on any metric you care about. (e.g. wealth, lifespan, happiness)
  • 2. Memes that are old and seem to be embedded in the minds of many people, yet seem to decrease their performance on any metric you care about.
  • 3. Memes that are being recommended to you in an automated fashion by a capable algorithm you don't understand fully.

I think if a meme ticks one of these boxes, it should be taken under serious consideration as harmful. Granted, there are memes that tick all 3 (e.g. wearing a warm coat during winter), but I think those are so "common" it's pointless to bring them into the discussion; they are already deeply embedded in our minds.
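The three heuristics could be encoded as a tiny checklist. The mapping from number of flags to a verdict is my own illustrative choice, and won't exactly reproduce every judgment in the examples that follow:

```python
# The three heuristics above as a checklist. Each argument is True if
# the meme is FLAGGED by that heuristic. The flag-count-to-verdict
# mapping is an illustrative assumption, not taken from the post.

def verdict(fails_1, fails_2, fails_3):
    """1) new + no visible benefit to holders,
    2) old + visibly harmful to holders,
    3) pushed by an opaque recommendation algorithm."""
    flags = sum([fails_1, fails_2, fails_3])
    if flags == 0:
        return "Ok"
    if flags == 1:
        return "Depends"
    return "Avoid"

print(verdict(False, False, False))  # → Ok (flagged by nothing)
print(verdict(True, False, True))    # → Avoid (flagged by 1 and 3)
```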

A few examples I can think of.

  • Cryptocurrency in 2017 & 2018, passes 2 and 3, passes or fails 1 depending on the people you are looking at => Depends
  • All ads and recommendations on pop websites (e.g. reddit, medium, youtube). Obviously fail at 3, sometimes fail at 1 if the recommendation is "something that went viral". => Avoid
  • Extremist "Western" Religions, passes 1 and 3. Usually fails at 2. => Avoid
  • Contemplative practices, passes 2 and 3, fails 1 depending on the people you are looking at in the case of modern practices, doesn't fail 1 in the case of traditional practices. => Depends
  • Intermittent fasting, passes 2 and 3, very likely passes 1 => Ok
  • Foucault, passes 3, arguably passes 1/2, but it depends on where you draw the "old" line => Depends
  • Instagram, passes 2, fails 3 and arguably fails 1 => Avoid
  • New yet popular indie movies and games, pass 2 and 3, arguably fails at 1 => Avoid (pretty bad conclusion I'd say)
  • Celebrity worshiping, passes 2, kinda fails 3, certainly fails 1 => Avoid
  • Complex Analysis, passes 3 and 1, very easy to argue it passes 2 => Ok

Granted, I'm sure there are examples where these rules of thumb fail miserably; my brain is probably subconsciously coming up with ones where they work. Even more so, I think the heuristics here are kind of obvious, but they are also pretty abstract and hard to defend if you were to scrutinize them properly.

Still, I can't help but wonder if "safety measures" (taken by the individual, not political) against toxic memes shouldn't be a subject that's discussed more. I feel like it could bring many benefits and it's such a low-hanging fruit.

Then again, protecting ourselves against the memes we consider toxic might be something we all inherently do already and something we do pretty well. So my confusion here is mainly about how some people end up *not* considering certain memes to be toxic, rather than how they are unable to defend themselves from them.

Comment by george3d6 on George's Shortform · 2020-01-11T01:08:37.804Z · score: 2 (2 votes) · LW · GW

In regards to 1), I don't necessarily think that older developments which are re-emerging can't be interesting (see the whole RL scene nowadays, which to my understanding is very much bringing back the kind of approaches that were popular in the 70s). But I do think the particular ML developments people focus on should be the ones with the most potential, which will likely end up being newer. My gripe with GPT-2 is that there's no comparative proof that it has more potential to generalize than a lot of other things (e.g. quick architecture-search methods, custom encoders/heads added to a resnet); actually, I'd say its sheer size and the issues one encounters when training it indicate the opposite.

I don't think 2) is a must, but going back to 1), I think that training time is one of the important criteria for comparing the approaches we are focusing on, since training time on a simple task is arguably the best proxy you have for training time on a more complex task.

As for 3) and 4)... I'd agree with 3), I think 4) is too vague, but I wasn't trying to bring either point across in this specific post.

Comment by george3d6 on George's Shortform · 2020-01-10T11:38:55.032Z · score: 4 (4 votes) · LW · GW
The question is how long - 10 years? Solving chess via analyzing the whole tree would take too much time, so no one does it. Would it learn in a remotely feasible amount of time?

Well yeah, that's my whole point here. We need to talk about accuracy and training time!

If the GPT-2 model was trained in a few hours and loses 99% of games vs a decision-tree-based model (a la Deep Blue) that was trained in a few minutes on the same machine, then it's worthless. It's exactly like saying "In theory, given almost infinite RAM and 10 years, we could beat Deep Blue (or alpha chess or whatever the cool kids are doing nowadays) by just analyzing a very large subset of all possible moves + combinations and arranging them hierarchically".

Comment by george3d6 on George's Shortform · 2020-01-10T11:36:42.795Z · score: 2 (2 votes) · LW · GW

Just an example of a library that can be used to do hyperparameter search quickly.

But again, there are many tools and methodologies and you can mix and match; this is one (a methodology/idea for architecture search) that I found kind of interesting, for example: