Posts

Causality and its harms 2020-07-04T14:42:56.418Z · score: 16 (9 votes)
Training our humans on the wrong dataset 2020-06-21T17:17:07.267Z · score: 4 (3 votes)
Your abstraction isn't wrong, it's just really bad 2020-05-26T20:14:04.534Z · score: 32 (12 votes)
What is your internet search methodology ? 2020-05-23T20:33:53.668Z · score: 13 (8 votes)
Named Distributions as Artifacts 2020-05-04T08:54:13.616Z · score: 23 (10 votes)
Prolonging life is about the optionality, not about the immortality 2020-05-01T07:41:16.559Z · score: 7 (4 votes)
Should theories have a control group 2020-04-24T14:45:33.302Z · score: 3 (1 votes)
Is ethics a memetic trap ? 2020-04-23T10:49:29.874Z · score: 6 (3 votes)
Truth value as magnitude of predictions 2020-04-05T21:57:01.128Z · score: 3 (1 votes)
When to assume neural networks can solve a problem 2020-03-27T17:52:45.208Z · score: 13 (4 votes)
SARS-CoV-2, 19 times less likely to infect people under 15 2020-03-24T18:10:58.113Z · score: 2 (4 votes)
The questions one needs not address 2020-03-21T19:51:01.764Z · score: 15 (9 votes)
Does donating to EA make sense in light of the mere addition paradox ? 2020-02-19T14:14:51.569Z · score: 6 (3 votes)
How to actually switch to an artificial body – Gradual remapping 2020-02-18T13:19:07.076Z · score: 9 (5 votes)
Why Science is slowing down, Universities and Maslow's hierarchy of needs 2020-02-15T20:39:36.559Z · score: 19 (16 votes)
If Van der Waals was a neural network 2020-01-28T18:38:31.561Z · score: 19 (7 votes)
Neural networks as non-leaky mathematical abstraction 2019-12-19T12:23:17.683Z · score: 17 (7 votes)
George's Shortform 2019-10-25T09:21:21.960Z · score: 3 (1 votes)
Artificial general intelligence is here, and it's useless 2019-10-23T19:01:26.584Z · score: 0 (16 votes)

Comments

Comment by george3d6 on Causality and its harms · 2020-07-09T22:50:15.184Z · score: 1 (1 votes) · LW · GW

I believe the thing we differ on might just be semantics, at least as far as redefinition goes. My final conclusion is that the term is bad because it's ill-defined, but with a stronger definition (or ideally multiple definitions for different cases) it would be useful; it would also, however, be very foreign to a lot of people.

Comment by george3d6 on Causality and its harms · 2020-07-06T18:38:15.256Z · score: 1 (1 votes) · LW · GW

Corrected the wording to be a bit "weaker" on that claim, but also, it's just a starting point and the final definition I dispute doesn't rest on it.

Comment by george3d6 on How far is AGI? · 2020-07-06T04:26:30.857Z · score: -2 (2 votes) · LW · GW

1. The problem with theories in the vein of AIXI is that they assume exploration is simple (as it is in RL), but exploration is very expensive IRL.

So if you want to think based on that framework, well, then AGI is as far away as it takes to build a robust simulation of the world in which we want it to operate (very far away)

2. In the world of mortals, I would say AGI is basically already here, but it's not obvious because its impact is not that great.

We have ML-based systems that could in theory do almost any job; the real problem is that they are much more expensive than humans to "get right", and in some cases (e.g. self-driving) there are regulatory hurdles to cross.

The main problem with a physical human-like platform running an AGI is not that designing the algorithms for it to perform useful tasks is hard; the problem is that designing a human-like platform is impossible with current technology, and the closest alternatives we've got are still more expensive to build and maintain than just hiring a human.

Hence why companies are buying checkout machines to replace employees rather than buying checkout robots.

3. If you're referring to "superintelligence"-style AGI, i.e. something that is much more intelligent than a human, I'd argue we can't tell how far away this is or whether it can even exist (i.e. I think it's non-obvious that the bottleneck at the moment is intelligence rather than physical limitations, see 1, plus corrupt incentive structures, aka why smart humans are still not always used to their full potential).

Comment by george3d6 on Have general decomposers been formalized? · 2020-07-02T00:25:51.768Z · score: 3 (2 votes) · LW · GW

I was asking why because I wanted to understand what you mean by "decomposition".

a system is a decomposer if it can take a thing and break it down into sub-things with a specific vision about how the sub-things recombine

This defines many things.

Usually the goal is feature extraction (think BERT) or reducing the size of a representation (think autoencoders or, more simply, PCA).

You need to narrow down your definition, I think, to get a meaningful answer.
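To illustrate how broad that definition is, here's a minimal sketch (assuming scikit-learn, and arbitrarily picking PCA as the "decomposer") of breaking a thing into sub-things and recombining them:

```python
# Minimal sketch: PCA as one possible "decomposer" -- it splits data into
# components (sub-things) and can recombine them into an approximation of
# the original. Assumes scikit-learn and numpy are available.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))          # some "thing" represented as vectors

decomposer = PCA(n_components=3)
sub_things = decomposer.fit_transform(X)                # break into sub-things
recombined = decomposer.inverse_transform(sub_things)   # recombine them

print(sub_things.shape)                  # (200, 3)
print(np.mean((X - recombined) ** 2))    # reconstruction error
```

Autoencoders, BERT embeddings, matrix factorizations etc. all fit the same "decompose then recombine" shape, which is why the definition needs narrowing.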

Comment by george3d6 on Have general decomposers been formalized? · 2020-06-27T21:26:12.152Z · score: 1 (1 votes) · LW · GW

Why is the literature on reversible encoders/autoencoders/embedding generators not relevant for your specific use case?

If you give an answer to that, it might be easier to recommend stuff.

Comment by george3d6 on Do Women Like Assholes? · 2020-06-23T00:32:06.842Z · score: 7 (5 votes) · LW · GW

I don't want to get into the whole CW thing around this topic, *but*:

1. Since you so offhandedly decided not to use p-values, why do you:

a) Use linear models for the analysis, given such low R² scores?

b) Use R² at all? Does it seem meaningful for this case? Otherwise, if your whole shtick is being intuitive, why not use MAE or even some percentage-based error?

c) Overfit those regression models instead of doing cross-validation?

d) If the answer to c is no, then: provide the number of folds and the variation of the coefficients across the folds. This is a great measure for determining confidence that a coefficient is not spurious (i.e. if the variation is 0.001-0.1, that means the coefficient is just overfitting on noise). A minimal sketch of what I mean is below, after this list.

e) And if the answer to c is yes, why? I mean, cross-validation is basically required for this kind of analysis; if you're just overfitting your whole dataset, that basically makes the rest of your analysis invalid, you're just finding noise that can be approximated using a linear function summation.
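To make d) concrete, a minimal sketch, assuming scikit-learn and a synthetic stand-in dataset (since I don't have your data), of looking at coefficient variation across folds:

```python
# Minimal sketch of point (d): fit the same linear model on each fold and look
# at how much each coefficient moves. Synthetic data stands in for the real thing.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import KFold

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 4))
y = 0.5 * X[:, 0] + rng.normal(scale=1.0, size=300)   # only feature 0 matters

coefs = []
for train_idx, _ in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
    model = LinearRegression().fit(X[train_idx], y[train_idx])
    coefs.append(model.coef_)

coefs = np.array(coefs)
print("mean per coefficient:", coefs.mean(axis=0))
print("std  per coefficient:", coefs.std(axis=0))
# A coefficient whose std across folds is of the same order as its mean is
# probably just fitting noise.
```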

Also, given the small effect sizes you found, why consider the data relevant at all?

If anything, this analysis shows that all the metrics you care about depend mostly on some hidden variable that neither you nor the pseudoscientists you are responding to have found.

Maybe I'm missing something here though, it's 3:30am here, so do let me know if I'm being uncharitable or underspecifying some of my questions/contention points.

Comment by george3d6 on Training our humans on the wrong dataset · 2020-06-22T11:28:42.931Z · score: 1 (1 votes) · LW · GW

The more specific case I was hinting at was figuring out the loss <--> gradient landscape relationship.

Which, yes, a highschooler can do for a 5-cell network, but for any real network it seems fairly hard to say anything about it... i.e. I've read a few papers delving into the subject and they seem complex to me.

Maybe not PhD level? I don't know. But hard enough that most people usually choose to stick with a loss that makes sense for the task rather than optimize it such that the resulting gradient landscape is "easy to solve" (aka yields faster training and/or converges on a "more" optimal solution).
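As a toy version of what I mean by the loss <--> gradient relationship, here's a minimal sketch (assuming PyTorch, with a tiny made-up network and data) that just compares the gradients two different losses induce on the same network; real analyses of the landscape are far more involved:

```python
# Toy sketch: same tiny network and data, two different losses, and the gradients
# they produce. Only meant to show that the choice of loss reshapes the gradients.
import torch

torch.manual_seed(0)
net = torch.nn.Sequential(torch.nn.Linear(4, 8), torch.nn.ReLU(), torch.nn.Linear(8, 1))
x = torch.randn(64, 4)
y = torch.randn(64, 1)

for name, loss_fn in [("MSE", torch.nn.MSELoss()), ("L1", torch.nn.L1Loss())]:
    net.zero_grad()
    loss = loss_fn(net(x), y)
    loss.backward()
    grad_norm = torch.sqrt(sum((p.grad ** 2).sum() for p in net.parameters()))
    print(name, "loss:", loss.item(), "total gradient norm:", grad_norm.item())
```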

But I'm not 100% sure I'm correct here, and maybe learning the correct 5 primitives makes the whole thing seem like child's play... though based on people's behavior around the subject I kind of doubt it.

Comment by george3d6 on Training our humans on the wrong dataset · 2020-06-21T22:22:14.679Z · score: 3 (2 votes) · LW · GW

TL;DR Please provide references so I can give a more cohesive reply. See the papers below, plus my reasoning and explanation as to why you are basically wrong, and/or confusing things that work in RL with things that work in SL, and/or confusing techniques used to train with scarce data for ones that would work even when the data is large enough that compute is a bottleneck (which is the case I'm arguing for, i.e. that compute should first be thrown at the most relevant data).

Maybe you do this, but me, and many people in ML, do our best to avoid ever doing that. Transfer learning powers the best and highest-performing models. Even in pure supervised learning, you train on the largest dataset possible, and then finetune. And that works much better than training on just the target task. You cannot throw a stick in ML today without observing this basic paradigm.

I would ask for a citation on that.

Never in any ML literature have I heard of people training models on datasets other than the one they wanted to solve as a more efficient alternative to training on that dataset itself. Of course, given more time, once you've converged on your data, training on related data can be helpful, but my point is just that training on the actual data is the first approach one takes (obviously, depending on the size of the problem, you might start with weight transfer directly).

People transfer weights all the time, but that's because it shortens training time.

New examples of unrelated data (or less-related data) do not make a model converge faster on validation data, assuming you could instead create a new example of problem-specific data.

In theory it could make the model generalize better, but when I say "in theory" I mean it in layman's terms, since doing research on this topic is hard and there's precious little of it in supervised learning.

Most rigorous research on this topic seems to be in RL, e.g.: https://arxiv.org/pdf/1909.01331.pdf and it's nowhere near clear cut.

Out of the research that applies better to SL, I find this theory/paper to be the most rigorous and up to date: https://openreview.net/pdf?id=ryfMLoCqtQ ... and the findings here, as in literally any other paper by a respected team or university you will find on the subject, can be boiled down to:

"Sometime it helps with generalization on the kind of data not present in the training set and sometime it just results in a shittier models and it depends a lot on the SNR of the data the model was trained on relative to the data you are training for now"

There are GAN papers, among others, which do pretty much this for inferring models & depth maps.

Again, links to papers please. My bet is that the GAN papers do this:

a) Because they lack 3d rendering of the objects they want to create.

b) Because they lack 3d renderings of most of the objects they want to create.

c) Because they are trying to showcase an approach that generalizes to different classes of data that aren't available at training time (I.e. showing that a car 3d rendering model can generalize to do 3d renderings of glasses, not that it can perform better than one that's been specifically trained to generate 3d renderings of glasses).

If one can achieve better results with unrelated data than with related data in similar compute time (i.e. up until either of the models has converged on a validation dataset, or a predefined period of time runs out), or even if one can achieve better results by training on unrelated data *first* and then on related data rather than vice versa... I will eat my metaphorical hat and retract this whole article. (Provided both models use appropriate regularization, or that at least the relevant-data model uses it; otherwise I can see a hypothetical where a bit of high-noise data serves as a form of regularization, but even this I would think to be highly unlikely.) The kind of comparison I have in mind is sketched below.
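For concreteness, a minimal sketch of that experiment (assuming PyTorch, with synthetic data standing in for the "related" and "less-related" datasets), giving both regimes the same compute budget:

```python
# Toy sketch of the comparison: same architecture, same total number of steps.
# Regime A: all steps on the target (related) data.
# Regime B: half the steps on a "less-related" task first, then half on the target.
# Synthetic data is a stand-in; the point is the shape of the comparison, not the result.
import torch

torch.manual_seed(0)

def make_task(weight, n=512, noise=0.1, seed=0):
    g = torch.Generator().manual_seed(seed)
    x = torch.randn(n, 8, generator=g)
    y = x @ weight + noise * torch.randn(n, 1, generator=g)
    return x, y

target_w = torch.randn(8, 1)
related_x, related_y = make_task(target_w, seed=1)        # the target ("related") task
other_x, other_y = make_task(torch.randn(8, 1), seed=2)   # a "less-related" task
val_x, val_y = make_task(target_w, seed=3)                 # validation data from the target task

def fresh_model():
    torch.manual_seed(0)
    return torch.nn.Sequential(torch.nn.Linear(8, 16), torch.nn.ReLU(), torch.nn.Linear(16, 1))

def train(model, phases, steps_per_phase=200):
    opt = torch.optim.SGD(model.parameters(), lr=0.05)
    for x, y in phases:
        for _ in range(steps_per_phase):
            opt.zero_grad()
            torch.nn.functional.mse_loss(model(x), y).backward()
            opt.step()
    return torch.nn.functional.mse_loss(model(val_x), val_y).item()

# Same total compute for both regimes: two phases of 200 steps each.
print("target data only:      ", train(fresh_model(), [(related_x, related_y)] * 2))
print("less-related -> target:", train(fresh_model(), [(other_x, other_y), (related_x, related_y)]))
```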

No. You don't do it 'just' to save computation. You do it because it learns superior representations and generalizes better on less data. That finetuning is a lot cheaper is merely convenient.

Again, see my answers above and please provide relevant citations if you wish to claim the contrary; it seems to me that what you are saying here goes against common sense, i.e. your claim is that, given a choice between problem-specific data and less-related data, at some point using the less-related data is superior.

A charitable reading of this is that introducing noise in the training data helps generalization (see e.g. techniques involving introducing noise into the training data, L2 regularization and dropout), which seems kind of true but far from true on that many tasks, and I invite you to experiment with it and realize it doesn't really apply to everything, nor are the effect sizes large unless you are specifically focusing on adversarial examples or datasets where the train set covers only a minute portion of potential data.

Comment by george3d6 on Should we stop using the term 'Rationalist'? · 2020-06-01T00:34:19.731Z · score: 1 (1 votes) · LW · GW

To address 2) specifically, I would say that philosophical "Rationalists" are a wider group, but they would generally include the kind of philosophical views that most people on e.g. LW hold, or at least they include a pathway to reaching those views.


See the philosophers listed in the Wikipedia article, for example:


Pythagoras -- foundation for mathematical inquiry into the world and mathematical formalism creation in general

Plato -- foundation for "modern" reasoning and logic in general, with a lot of ***s

Aristotle -- (outdated) foundation for observing the world and creating theories and taxonomies. The fact that he's mostly "wrong" about everything, and the "wrongness" is obvious, also gets you 1/2 of the way to understanding Kuhn

René Descartes -- "questioning" more fundamental assumptions that e.g. Socrates would have had problems seeing as assumptions. Also foundational for modern mathematics.

Baruch Spinoza -- I don't feel like I can summarize why reading Spinoza leads one to the LW brand of rationalism. I think it boils down to his obsession with internal consistency and his willingness to burn any bridge for the sake of reaching a "correct" conclusion.

Gottfried Leibniz -- I mean, personally, I hate this guy. But it seems to me that the interpretations of physics I've seen around here, and also those that important people in the community (e.g. Eliezer and Scott) use, are heavily influenced by his work. Also arguably one of the earliest people to build computers and think about them, so there's that.

Immanuel Kant -- arguably introduced the game-theoretical view of the world. Also helped correct/disprove a lot of biased reasoning in philosophy that leads to e.g. arguments for the existence of god based on linguistic quirks.


I think, at least in regards to philosophy up to Kant, if one were to read philosophy following this exact chain of philosophers, they would have a very strong base from which to approach/develop rationalist thought as seemingly espoused by LW.

So in that sense, the term "Rationalist" seems well fitting if one wants to describe "the general philosophical direction" most people here are coming from.

Comment by george3d6 on Obsidian: A Mind Mapping Markdown Editor · 2020-05-27T17:11:13.269Z · score: 1 (1 votes) · LW · GW

But is there some functionality that this would provide that a wiki doesn't? (Or some nicer interface for that functionality than a wiki has?)

Or is it just the simplicity of installation and/or the simplicity of the data format?

Comment by george3d6 on Obsidian: A Mind Mapping Markdown Editor · 2020-05-27T16:41:52.959Z · score: 1 (1 votes) · LW · GW

Do you think this is better than having e.g. a personal wiki ?

Comment by george3d6 on Your abstraction isn't wrong, it's just really bad · 2020-05-27T12:18:28.770Z · score: 3 (2 votes) · LW · GW

I mean, I basically agree with this criticism.

However, my problem isn't that, in the literal sense, new theories don't exist; my issue is that old theories are so calcified that one can't really do without knowing them.

E.g. if a programmer says "Fuck this C nonsense, it's useless in the modern world, maybe some hermits in an Intel lab need to know it, but I can do just fine using PHP", they can become Mark Zuckerberg. I don't mean that in the "become rich as ***" sense but in the "become the technical lead of a team developing one of the most complex software products in the world" sense.

Or, if someone doesn't say "fuck C" but says "C seems too complex, I'm going to start with something else", they can do that, and after 5 years of coding in high-level languages they will have acquired a set of skills that lets them dig back down and learn C very quickly.

And you can replace C with any "old" abstraction that people still consider useful, and PHP with any new abstraction that makes things easier but is arguably more limited in various key areas. (Also, I wouldn't even claim PHP is easier than C; PHP is a horrible mess and C is beautiful by comparison, but I think the general consensus is against me here, so I'm giving it as an example.)

In mathematics this does not seem to be an option; there's no 2nd-year psychology major who decided to take a very simple mathematical abstraction to its limits and became the technical leader of one of the most elite teams of mathematicians in the world. Even the mere idea of that happening seems silly.

I don't know why that is. Maybe it's because, again, math is just harder and there's no 3-month crash course that will give you mastery of a huge area of mathematics the same way a 3-month crash course in PHP will give you the tools needed to build proto-Facebook (or any other piece of software that defines a communication and information interpretation & rendering protocol between multiple computers).

Mathematics doesn't have useful abstractions that allow the user to be blind to the lower-level abstractions. Nonstandard analysis exists, but good luck trying to learn it if you don't already know a more kosher version of analysis; you can't start at nonstandard analysis... or maybe you can? But then that means this is a very under-exploited idea, and it gets back to the point I was making.

I'm using programming as the bar here since it seems that, from the 40s onward, the requirements for being a good programmer have been severely lowered due to the new abstractions we introduce. In the 40s you had to be a genius to even understand the idea of a computer. In modern times you can be a kinda smart but otherwise unimpressive person and create revolutionary software or write an amazing language or library. Somehow, even though the field got more complex, the entry cost went from 20+ years, including the study of mathematics, electrical engineering and formal logic, to a 3-month bootcamp or like... reading 3 books online. In mathematics it seems that the entry cost gets higher as time progresses, and any attempts to lower it are just tiny corrections or simplifications of existing theory.

And lastly, I don't know if there's a process "harming" math's complexity that could easily be stopped, but there are obvious processes harming programming's complexity that seem, at least in principle, stoppable. E.g. look at things like coroutines vs threads vs processes, which get thought of as separate abstractions, yet are basically the same **** thing if you move to all but a few kernels that have some niche ideas about asyncio and memory sharing.

That is to say, I can see a language that says "Screw this coroutines vs threads vs processes nonsense, we'll try to auto-detect the best abstraction that the kernel+CPU combination you have supports, maybe with some input from the user, and go from there". (I think, at least in part, Go has tried this, but in a very bad fashion, and at least in principle you could write a JVM + JVM language that does this, but the current JVM languages and implementations wouldn't allow for it.)
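A crude sketch of that "pick the abstraction for me" idea, in Python rather than in a hypothetical language (the heuristic here is completely made up; the point is just one interface hiding the threads-vs-processes choice):

```python
# Crude sketch: the same worker function run behind one interface, with the
# "abstraction" (threads vs processes) chosen by a made-up heuristic instead
# of the programmer. A real language/runtime would do this far better.
from concurrent.futures import ProcessPoolExecutor, ThreadPoolExecutor

def worker(n):
    # Stand-in for either an IO-bound or a CPU-bound task.
    return sum(i * i for i in range(n))

def run_parallel(tasks, cpu_bound):
    # Made-up heuristic: processes for CPU-bound work, threads otherwise.
    executor_cls = ProcessPoolExecutor if cpu_bound else ThreadPoolExecutor
    with executor_cls() as ex:
        return list(ex.map(worker, tasks))

if __name__ == "__main__":  # guard required for ProcessPoolExecutor on some platforms
    print(run_parallel([10_000] * 8, cpu_bound=True))
    print(run_parallel([10_000] * 8, cpu_bound=False))
```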

But if that language never comes, and every single programmer learns to think in terms of those 3 different parallelism abstractions and their offshoots, then we've just added some arguably pointless complexity that makes sense for our day and age but could well become pointless in a better-designed future.

And at some point you're bound to be stuck with things like that, and the entry cost increases, though hopefully other abstractions are simplified to lower it and the equilibrium stays at a pretty low number of hours.

Comment by george3d6 on Why aren’t we testing general intelligence distribution? · 2020-05-27T10:22:56.080Z · score: 2 (2 votes) · LW · GW

Basically, the way I would explain it: you are right, using a bell curve and using various techniques to make your data fit it is stupid.

This derives from two reasons. One is an artifact: distributions were computation-simplifying mechanisms in the past, even though this is no longer true. More on this here: https://www.lesswrong.com/posts/gea4TBueYq7ZqXyAk/named-distributions-as-artifacts

This is the same mistake, broadly speaking, as using something like pearsonr instead of an arbitrary estimator (or even better, 20 of them) and k-fold cross-validation in order to determine "correlation" as a function of the predictive power of the best models.
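A minimal sketch of that alternative (assuming scikit-learn, with made-up data): score several estimators by cross-validated predictive power instead of reporting a single Pearson r.

```python
# Minimal sketch: instead of a single Pearson r, cross-validate a few different
# estimators and treat the best held-out score as the measure of how predictable
# y is from x. Synthetic nonlinear data is a stand-in.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(0)
x = rng.uniform(-3, 3, size=(400, 1))
y = np.sin(x[:, 0]) + 0.1 * rng.normal(size=400)   # nonlinear relationship

estimators = {
    "linear": LinearRegression(),
    "random forest": RandomForestRegressor(n_estimators=50, random_state=0),
    "kNN": KNeighborsRegressor(),
}
for name, est in estimators.items():
    scores = cross_val_score(est, x, y, cv=5, scoring="r2")
    print(f"{name}: mean held-out R^2 = {scores.mean():.3f}")
```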

Second (and see an SSC post that does the subject better justice while completely missing the point), we love drawing metaphorical straight lines; we believe, and give social status to, people who do this.

If you were to study intelligence with an endpoint/goal in mind, or with the goal of explaining the world, the standard distribution would be useless. Except for one goal: making your "findings" seem appealing, giving them extra generalizability/authoritativeness that they lack. Normalizing tests and results to fit the bell curve does exactly that.

Comment by george3d6 on Your abstraction isn't wrong, it's just really bad · 2020-05-27T10:10:40.811Z · score: 1 (1 votes) · LW · GW

When you use them to mentally sort things for general knowledge of what's out there and memory storage like in biology, if it works it works. Kingdoms seem to work for this.

Could you expand this a bit ?

Comment by george3d6 on Your abstraction isn't wrong, it's just really bad · 2020-05-27T10:07:54.372Z · score: 3 (2 votes) · LW · GW

3000 is a bit of an exaggeration; seeing as the vast majority of mathematics was invented from the 17th century onwards, it's more fair to call it 400 years vs programming's 70-something.

Though, if we consider analogue calculators, e.g. the one Leibniz made, then you could argue programming is about as old as modern math... but I think that's cheating.

But, well, that's kind of my point. It may be that 400 years calcifies a field, be that math or programming or anything else.

Now, the question remains as to whether this is good or not, intuitively it seems like something bad.

Comment by george3d6 on Movable Housing for Scalable Cities · 2020-05-27T00:45:28.368Z · score: 7 (3 votes) · LW · GW

I don't really understand how this helps outside of a world consisting of an idealized plane.

The main issue with housing is that it has to conform to the environment:

  • Rainfall, both maxima over a few seconds and averages over days.
  • Earthquakes
  • Flooding
  • Tornadoes
  • Temperature
  • Air composition, wind patterns
  • Things like humidity that are mainly a combination of the above

But also things like:

  • State/country-specific regulations (e.g. fire hazard rulings, environmental rulings deciding what kind of air conditioning you can use and how many solar panels your roof needs)
  • An accounting mess, because property taxes might get weird, and when things get weird the IRS policy is that it's the taxpayer's job to navigate the complexity.

This idea sounds like something extremely hard to implement and extremely fragile. It only brings marginal benefits, since at the end of the day the foundation is still immutable.

Also, it's irrelevant while the population of almost all US cities is growing, since this becomes efficient compared to renting only when assuming loads of vacant housing.

Something, something, Uber for puppies.

Comment by george3d6 on Baking is Not a Ritual · 2020-05-26T23:58:00.583Z · score: 1 (1 votes) · LW · GW

To me this doesn't seem too far off from the mentality/approach one should take when cooking or when making metaphorical bathtub drugs. Though it's probably in between the two regarding complexity.

The one thing that annoys me about baking as opposed to cooking is that for most of the process you can't taste things and adjust based on that, whereas with cooking there's usually more feedback you can get via constant tasting, which goes a long way, especially when you're only making the dish for yourself or for yourself plus people whose culinary preferences you know well.

On the other hand, isn't it very easy for baking to fall into a taste/health trade-off where the better your pastries the more likely you are to regret eating them 10 years from now ?

Comment by george3d6 on Your abstraction isn't wrong, it's just really bad · 2020-05-26T23:40:51.633Z · score: 4 (1 votes) · LW · GW

Alright, I think what you're saying makes more sense, and I think in principle I agree, provided you don't claim the existence of a clear division between, let's call them, design problems and descriptive problems.

However it seems to me that you are partially basing this hypothesis on science being more unified than it seems to me.

I.e. if the task of physicists was to design an abstraction that fully explained the world, then I would indeed understand how that's different from designing an abstraction that is meant to work very well for a niche set of problems such as parsing ASTs or creating encryption algorithms (aka things for which there exists specialized language and libraries).

However, it seems to me like, in practice, scientific theory is not at all unified and the few parts of it that are unified are the ones that tend to be "wrong" at a closer look and just serve as an entry point into the more "correct" and complex theories that can be used to solve relevant problems.

So if e.g. there was one theory to explain interactions in the nucleus and it was consistent with the rest of physics I would agree that maybe it's hard to come up with another one. If there's 5 different theories and all of them are designed for explaining specific cases and have fuzzy boundaries where they break and they kinda make sense in the wider context if you squint a bit but not that much... then that feels much closer to the way programming tools are. To me it seems like physics is much closer to the second scenario, but I'm not a physicist, so I don't know.

Even more so, it seems that scientific theory, much like programming abstractions, is often constrained by things such as speed. I.e. a theory can be "correct", but if the computations are too complex to make (e.g. trying to simulate macromolecules using elementary-particle-based simulations) then the theory is not considered for a certain set of problems. This is very similar to e.g. not using Haskell for a certain library (e.g. one that is meant to simulate elementary-particle-based physics and thus requires very fast computations), even though in theory Haskell could produce simpler and easier-to-validate (read: with fewer bugs) code than Fortran or C.

Comment by george3d6 on Your abstraction isn't wrong, it's just really bad · 2020-05-26T22:30:09.902Z · score: 4 (1 votes) · LW · GW
There is a major difference between programming and math/science with respect to abstraction: in programming, we don't just get to choose the abstraction, we get to design the system to match that abstraction. In math and the sciences, we don't get to choose the structure of the underlying system; the only choice we have is in how to model it.

The way I'd choose to think about it is more like:

1. Languages, libraries etc. are abstractions over an underlying system (some sort of imperfect Turing machine) that programmers don't have much control over

2. Code is an abstraction over a real-world problem, meant to rigorize it to the point where it can be executed by a computer (much like math in e.g. physics is an abstraction meant to do... exactly the same thing, nowadays)

Granted, what the "immutable reality" and the "abstraction" are depends on who's view you take.

The main issue is that reality has structure (especially causal structure), and we don't get to choose that structure.

Again, I think we do get to choose structure. If your requirement is e.g. building a search engine, and one of the abstractions you choose is "the bit that stores all the data for fast querying", because it interacts with the rest only through a few well-defined channels, then that is exactly like your cell biology analogy, for example.

To draw a proper analogy between abstraction-choice in biology and programming: imagine that you were performing reverse compilation. You take in assembly code, and attempt to provide equivalent, maximally-human-readable code in some other language. That's basically the right analogy for abstraction-choice in biology.

Ok, granted, but programmers literally write abstractions to do just that when they write code for reverse engineering... and as far as I'm aware the abstractions we have work quite well for it, and people doing reverse engineering have the same abstraction-choosing and -creating rules every other programmer has.

Picture that, and hopefully it's clear that there are far fewer degrees of freedom in the choice of abstraction, compared to normal programming problems. That's why people in math/science don't experiment with alternative abstractions very often compared to programming: there just aren't that many options which make any sense at all. That's not to say that progress isn't made from time to time; Feynman's formulation of quantum mechanics was a big step forward. But there's not a whole continuum of similarly-decent formulations of quantum mechanics like there is a continuum of similarly-decent programming languages; the abstraction choice is much more constrained

I mean, this is what the problem boils down to at the end of the day, the number of degrees of freedom you have to work with, but the fact that the sciences have few of them seems non-obvious to me.

Again, keep in mind that programmers also work within constraints, sometimes very very very tight constraints, e.g. a banking software's requirements are much stricter (if simpler) than those of a theory that explains RNA Polymerase binding affinity to various sites.

It seems that you are trying to imply there's something fundamentally different between the degrees of freedom in programming and those in science, but I'm not sure I can quite make it out from your comment.

Comment by george3d6 on What is your internet search methodology ? · 2020-05-25T10:25:07.391Z · score: 1 (1 votes) · LW · GW

I was unaware of the Gwern article; I will check it out.

In addition, I wouldn't bother trying to search sci-hub directly from Google. Instead, find the actual journal article you're looking for, copy its DOI number, and paste that into sci-hub.

I was speaking of a sci-hub addon which auto-detects the DOI in the page you are reading and opens the article in sci-hub (i.e. to turn "find DOI -> open sci-hub -> paste DOI and search" into the single step of "click addon button").

Comment by george3d6 on What is your internet search methodology ? · 2020-05-24T20:49:35.126Z · score: 2 (2 votes) · LW · GW

Can't you simply e.g. donate $200 each year to offset this? E.g. Google charges (I think) ~$1/click for a US demographic (some exceptions, blah blah), and how many search engine ads do you click? For me it's ~0, but let's say... 100 a year? Add to that something like $1 per hundred impressions, over 10,000 searches a year. Granted, this is a very rough number, but I'm being rather charitable with the profit here, I think, considering a large part of that is actually operational costs.

It seems like your search data is hardly worth more than that, and the advantages of using Google are many in terms of time saving. Enough to be worth, e.g., $200.

I get why one wouldn't want to use Google for ethical reasons, but at the end of the day all the search engines which use a centralized structure are equally bad; they just happen not to hold a monopoly (however, in that case, if you're just anti-monopoly, you might as well use e.g. Bing, which seems closest to Google in terms of quality).

Comment by george3d6 on Making a Crowdaction platform · 2020-05-18T00:18:15.590Z · score: 1 (1 votes) · LW · GW

You can use e.g. WordPress + some poll plugins to build this yourself.

The problem is:

  • If it's centralized, it will be fundamentally unsafe, since the people controlling it can use it as a way to get free labour for a thing they benefit from (see democratic governments)
  • If it's decentralized, it's either expensive to vote and/or start an issue (see capitalist economies) or you're back to problem one.
  • Getting people to use it is a coordination problem in and of itself.

The closest you can get to something workable is to look at various blockchain projects for implementing democratic voting and put some pretty trappings around them. But that doesn't quite solve issues 1 and 2, might add the issue of registration being hard (e.g. an ID-checking smart contract), and doesn't solve 3.

Comment by george3d6 on That Alien Message · 2020-05-16T19:27:34.263Z · score: 1 (1 votes) · LW · GW

It seems to me that the stipulations made here about the inferential potential of little information are made from the naive viewpoint that pieces of information are independent.

The idea of a plenitude of information, with inferential potential readily accessible to a smart enough agent, doesn't hold if that information consists of things which are mostly dependent on each other.

A <try to taboo this word whenever you see it> hooked up to a webcam, would invent General Relativity as a hypothesis—perhaps not the dominant hypothesis, compared to Newtonian mechanics, but still a hypothesis under direct consideration—by the time it had seen the third frame of a falling apple.  It might guess it from the first frame, if it saw the statics of a bent blade of grass.

This statement could be true; however, this doesn't mean that upon seeing a second blade of grass it could generate a new hypothesis, or upon seeing everything on Earth at a macroscopic or even microscopic scale (up to the limit of current instruments).

Heck, if you see a single bit, as long as you have the idea of causality, you can generate infinite hypotheses for why that bit was caused to be zero or one... you can even assign probabilities to them based on their complexity. A single bit is enough to generate all hypotheses about how the universe might work, but you're just left with an infinite and very flat search space.

So, this view of the world boils down to:

  • Most properties of the world can be inferred, with very small probability, from a very small amount of information. This is basically an inversion of the scientific assumption that observations about properties of the world carry over into other systems: if one can find properties that are generalizable, one can at least speculate as to what they are even by observing a single one of the things they generalize to.
  • However, new information serves to shrink the search space and increase our probability for a hypothesis being true

Which is... true, but it's such an obvious thing that I don't think anyone would disagree with it. It's just formulated in a very awkward way in this article to make it seem "new". Or at least, I've got no additional insight from this other than the above.

Comment by george3d6 on Named Distributions as Artifacts · 2020-05-04T21:18:51.598Z · score: 3 (2 votes) · LW · GW
And the latter is usually what we actually use in basic analysis of experimental data - e.g. to decide whether there's a significant different between the champagne-drinking group and the non-champagne-drinking group

I never brought up null-hypothesis testing in the liver weight example and it was not meant to illustrate that... hence why I never brought up the idea of significance.

Mind you, I disagree that significance testing is done correctly, but this is not an argument against it, nor is it related to it.

(The OP also complains that "We can't determine the interval for which most processes will yield values". This is not necessarily a problem; there's like a gazillion versions of the CLT, and not all of them depend on bounding possible values. CLT for e.g. the Cauchy distribution even works for infinite variance.)

My argument is not that you can't come up with a distribution for every little edge case imaginable; my argument is exactly that you CAN and you SHOULD, but this process should be done automatically, because every single problem is different and we have the means to dynamically find the model that best suits each problem rather than sticking to a choice between e.g. 60 named distributions.
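A minimal sketch of what "done automatically" could look like (assuming scipy, with made-up data): fit several candidate distributions and keep whichever explains held-out data best, rather than assuming normality up front.

```python
# Minimal sketch: instead of assuming a named distribution a priori, fit several
# candidates and compare them by log-likelihood on held-out data.
# Synthetic skewed data is a stand-in for real measurements.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
data = rng.gamma(shape=2.0, scale=3.0, size=1000)      # deliberately non-normal
train, test = data[:700], data[700:]

candidates = {"normal": stats.norm, "lognormal": stats.lognorm, "gamma": stats.gamma}
for name, dist in candidates.items():
    params = dist.fit(train)
    held_out_ll = dist.logpdf(test, *params).sum()
    print(f"{name}: held-out log-likelihood = {held_out_ll:.1f}")
```

The same loop could include nonparametric fits (e.g. a kernel density estimate), which is closer to what I mean by not restricting yourself to named distributions at all.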

Even here, we can apply a linearity -> normality argument as long as the errors are small relative to curvature.

I fail to see your argument here; as in, I fail to see how it deals with the interconnected bit of my argument, and I fail to see how noise being small is something that ever happens in a real system, in the sense you use it here, i.e. noise being everything that's not the inference we are looking for.

There absolutely is a property of mathematics that tells us what a slightly-off right-angled triangle is: it's a triangle which satisfies Pythagoras' formula, to within some uncertainty.

But, by the definition you use here, any arbitrary thing I want to define mathematically, even if it contains within it some amount of hand-waviness or uncertainty, can be a property of mathematics?

I fully support quoting Wikipedia, and it is inherently bad to use complex models instead of simple ones when avoidable. The relevant ideas are in chapter 20 of Jaynes' Probability Theory: The Logic of Science, or you can read about Bayesian model comparison.

Your article seems to carry the assumption that increased complexity == proneness to overfitting.

Which in itself is true if you aren't validating the model, but if you aren't validating the model it seems to me that you're not even in the correct game.

If you are validating the model, I don't see how the argument holds (will look into the book tomorrow if I have time)

Intuitively, it's the same idea as conservation of expected evidence: if one model predicts "it will definitely be sunny tomorrow" and another model predicts "it might be sunny or it might rain", and it turns out to be sunny, then we must update in favor of the first model. In general, when a complex model is consistent with more possible datasets than a simple model, if we see a dataset which is consistent with the simple model, then we must update in favor of the simple model. It's that simple. Bayesian model comparison quantifies that idea, and gives a more precise tradeoff between quality-of-fit and model complexity.

I fail to understand this argument and I did previously read the article mentioned here, but maybe it's just a function of it being 1AM here, I will try again tomorrow.

Comment by george3d6 on Prolonging life is about the optionality, not about the immortality · 2020-05-04T08:35:15.351Z · score: 3 (2 votes) · LW · GW

I mean, it's not a claim I will defend per se; it was more "here's a list of arguments I've already heard around the issue, to give some context to where I'm placing mine".

I think I agree with this claim, but I'm not 100% sure by any stretch and I don't have the required sources to make a good case for it, other than my intuition which tells me it's right, but that's not a very good source of truth.

Comment by george3d6 on Is ethics a memetic trap ? · 2020-04-26T17:50:33.896Z · score: 0 (2 votes) · LW · GW
Even if you are not a god-emperor, you would still be required to give your stuff away until you are no more miserable than everyone else.

But that is the crazy interpretation of consequentialism which places 0 value on the ethics of care, which nobody practices, because even true psychopaths still have a soft spot for themselves, so is it worth bringing to the table?

So it's not really true that all ethical systems are undemanding, more that the undemanding forms are undemanding. People might be motivated to water down their ethical systems to make them more popular, and that might lead to a degree of convergence, but that isn't very interesting.

This is actually a good point (though I don't believe extreme utilitarianism is the best example here, since, again, literally nobody practices it).

But that's only important under consequentialism, which is only one system.

See above: the vast majority of actions under normative ethics have no inherent value; they only help in that they put you in a situation where you are better positioned when taking an action that does have normative value.

Sure they do. Prayer? Eating pork?

Also true.

But are the kinds of religious systems where most actions have normative value even "in the discussion" for... anyone that potentially reads LW?

I guess I brought them up by citing New England-style Christianity, when I should have really just said Quakers or some other lightweight, value-of-life & tolerance-focused Christian sect where "god" is closer to a "philosopher's god" or "prime mover" rather than an "omnipotent demanding father".

Comment by george3d6 on Don't Use Facebook Blocking · 2020-04-22T13:29:52.689Z · score: 1 (1 votes) · LW · GW

As in, whatever you would call the self-contained ad distribution and behavior monitoring networks being built by the big tech companies. E.g. you can sell stuff on Facebook, follow news, chat with friends, use a dating app, use maps, store data... etc., so it becomes a self-contained ecosystem. And maybe you leave to play games, but it's on a Facebook-controlled VR system; or you are young and you want a space for only you and your young friends to share stuff, so you go to Facebook-controlled Instagram; or you don't use FB Messenger, so you message via Facebook-controlled WhatsApp... etc.

Facebook being just an example, you have similar setups for Verizon, Amazon, Twitter and Google (and probably more), and they all collaborate to some extent.

"Adternet" was just a tongue-in-cheek way of saying "internet but built for the express purpose of targeting ads".

Comment by george3d6 on Don't Use Facebook Blocking · 2020-04-21T16:50:14.929Z · score: 1 (1 votes) · LW · GW

I initially thought this was going to be a critique of Facebook Container... glad to know there's nothing wrong with that.

But blocking in general never made much sense to me on a website... though I guess Facebook is getting to be more and more of a self-contained adternet rather than a website.

Comment by george3d6 on Review of "Lifecycle Investing" · 2020-04-12T13:34:54.979Z · score: 4 (2 votes) · LW · GW

Or, to put your comment more succinctly, the book talks about several variables as though they are independent (e.g. market mid-term ROI, personal income, amount of leverage most brokers provide), but historically speaking these variables have always been heavily correlated.

Comment by george3d6 on When to assume neural networks can solve a problem · 2020-03-30T01:54:01.477Z · score: 1 (1 votes) · LW · GW
Yes, I was in fact. Seeing where this internet argument is going, I think it's best to leave it here.

So, in that case.

If your original chain of logic is:

1. An RL-based algorithm that could play any game could pass the turing test

2. An algorithm that can pass the Turing test is "AGI complete", thus it is unlikely that (1) will happen soon

And you agree with the statement:

3. An algorithm did pass the Turing test in 2014

You either:

a) Have a contradiction

b) Must have some specific definition of the Turing test under which 3 is untrue (and more generally, no known algorithm can pass the Turing test)

I assume your position here is b and I'd love to hear it.

I'd also love to hear the causal reasoning behind 2. (maybe explained by your definition of the Turing test ?)

If your definitions differ from commonly accepted definitions and you rely on causality which is not widely implied, you must at least provide your versions of the definitions and some motivation behind the causality.

Comment by george3d6 on When to assume neural networks can solve a problem · 2020-03-29T19:53:02.522Z · score: 1 (1 votes) · LW · GW
Turing test, which is to say AGI-complete

You are aware chatbots have been "beating" the original Turing test since 2014, right? (And arguably even before)

Also, AGI-complete == fools 1/3 of human judges in an x minute conversation via text? Ahm, no, just no.

That statement is meaningless unless you define the Turing test, and it keeps being meaningless even if you define the Turing test; there is literally no definition for "AGI-complete". AGI is more of a generic term used to mean "kinda like a human", but it's not very concrete.


On the whole, yes, some games might prove too difficult for RL to beat... but I can't think of any in particular. I think the statement holds for basically any popular competitive game (e.g. one where there are currently cash prizes above $1000 to be won). I'm sure one could design an adversarial game specifically built not to be beaten by RL but doable by a human... but that's another story.

Comment by george3d6 on When to assume neural networks can solve a problem · 2020-03-28T11:50:59.682Z · score: 1 (1 votes) · LW · GW
Also if you read almost anything on the subject, people will be constantly saying how they don't think superhuman intelligence is inevitable or close

If it's "meaningfully close enough to do something about it" I will take that as being 'close". I don't think Bostrom puts a number on it, or I don't remember him doing so, but he seems to address a real possibility rather than a hypothetical that is hundreds or thousands of years away.

What do you mean, you've never seen a consistent top-to-bottom reasoning for it? This is not a rhetorical question, I am just not sure what you mean here. If you are accusing e.g. Bostrom of inconsistency, I am pretty sure you are wrong about that.

I mean, I don't see a chain of conclusions that leads to the theory being "correct". Vaniver mentioned below how this is not the correct perspective to adopt, and I agree with that... or I would, assuming the hypothesis were Popperian (i.e. that one could do something to disprove AI being a large risk in the relatively near future).

If you are just saying he hasn't got an argument in premise-conclusion form, well, that seems true but not very relevant or important. I could make one for you if you like.

If you could make such a premise-conclusion case, I'd be more than welcome to hear it out.


ease of data collection? Cost of computing power? Usefulness of intelligence? -- but all three of these things seem like things that people have argued about at length, not assumed

Well, I am yet to see the arguments

Also the case for AI safety doesn't depend on these things being probable, only on them being not extremely unlikely.

It depends on being able to put numbers on those probabilities, though; otherwise you are in a Pascal's wager scenario, where any event that is not almost certainly ruled out should be taken into account with an amount of seriousness proportional to its fictive impact.

Comment by george3d6 on When to assume neural networks can solve a problem · 2020-03-28T09:56:18.815Z · score: 1 (1 votes) · LW · GW
moreover I think Stuart Russell is too

Yes, I guess I should have clarified that: I don't think Stuart Russell necessarily diverges much from Bostrom in his views. Rather, his most poignant arguments seem not to be very related to that view, so I think his book is a good guide for what I labeled as the second view in the article.

But he certainly tries to uphold both.

However the article was already too long and going into that would have made it even longer.... in hindsight I've decided to just split it into two, but the version here I shall leave as is.

Comment by george3d6 on When to assume neural networks can solve a problem · 2020-03-27T22:37:08.969Z · score: 5 (2 votes) · LW · GW

I will probably be stealing the perspective of the view being disjunctive as a way to look at why it's hard to pin down.

And thus, just like the state of neural networks in 2010 was only weakly informative about what would be possible in 2020, it seems reasonable to expect the state of things in 2020 will be only weakly informative and about will be possible in 2030.

This statement I would partially disagree with.

I think the idea of training on a GPU was coming to the forefront by 2010, and also the idea of CNNs for image recognition: https://hal.inria.fr/inria-00112631/document (see both in that 2006 paper)

I'd argue it's fairly easy to look at today' landscape and claim that by 2030 the things that are likely to happen include:

  • ML playing any possible game better than humans, assuming a team actually works on that specific game (maybe even if one doesn't), with human-like inputs and human-like limitations in terms of granularity of taking inputs and giving outputs.
  • ML achieving all the things we can do with 2d images right now for 3d images and short (e.g. < 5 minute) videos.
  • Algorithms being able to write e.g. articles summarizing various knowledge gathered from given sources, and possibly even find relevant sources via keyword search (so you could just say "Write an article about Peru's economic climate in 2028" rather than feed in a bunch of articles about Peru's economy in 2028)... the second part is already doable, but I'm mentioning them together since I assume people will be more impressed with the final product.
  • Algorithms being able to translate from and to almost any language about as well as a human, but still not well enough to translate sources which require a lot of interpretation (e.g. yes for translating a biology paper from English to Hindi or vice versa, no for translating a phenomenology paper from English to Hindi or vice versa).
  • Controlling mechanical systems (e.g. robotic arms) via networks trained using RL.
  • Generally speaking, algorithms being used in areas where they already out-perform humans but where regulations and systematic inefficiencies, combined with issues of stake, don't currently allow them to be used (e.g. accounting, risk analysis, setting insurance policies, diagnosis, treatment planning). Algorithms being jointly used to help in various scientific fields by replacing the need for humans to use classical statistics and/or manually fit equations in order to model certain processes.

I'd wager points 1 to 5 are basically a given; point 6 is debatable since it depends on human regulators and cultural acceptance for the most part.

I'd also wager that, other than audio processing, there won't be much innovation beyond those 6 points that will create loads of hype by 2030. You might have ensembles of those things building up to something bigger, but those 6 things will be at the core of it.

But that's just my intuition, partially based on the kind of heuristics above about what is easily doable and what isn't. But alas, the point of the article was to talk about what's doable in the present, rather than what to expect from the future, so it's not really that related.

Comment by george3d6 on George's Shortform · 2020-03-27T08:53:15.522Z · score: 3 (2 votes) · LW · GW

I find it interesting what kind of beliefs one needs to question and in which ways in order to get people angry/upset/touchy.

Or, to put it in more popular terms, what kind of arguments make you seem like a smart-ass when arguing with someone.

For example, reading Eliezer Yudkowsky's Rationality: From AI to Zombies, I found myself generally liking the writing style, and to a large extent the book was just reinforcing the biases I already had. Other than some of his poorly-thought-out metaphysics, on which he bases his ethics argument... I honestly can't think of a single thing from that book I disagree with. Same goes for Inadequate Equilibria.

Yet, I can remember a certain feeling popping up in my head fairly often when reading it, one that can be best described in an image: https://i.kym-cdn.com/entries/icons/facebook/000/021/665/DpQ9YJl.jpg

***

One seeming pattern for this is something like:

  • Arguing about a specific belief
  • Going a level down and challenging a pillar of the opponent's belief that was not being considered as part of the discussion.

E.g: "Arguing about whether or not climate change is a threat, going one level down and arguing that there's not enough proof climate change is happening to being with"

You can make this pattern even more annoying by doing something like:

  • Arguing about a specific belief
  • Going a level down and challenging a pillar of the opponent's belief that was not being considered as part of the discussion.
  • Not entertaining an opposite argument about one of your own pillars being shaky.

E.g.: After the previous climate change argument, not entertaining the idea that "maybe acting upon climate change as if it were real and as if it were a threat would actually result in positive consequences even if those two things were untrue".

You can make this pattern even more annoying by doing something like:

  • Arguing about a specific belief
  • Going a level down and challenging a pillar of the opponent's belief that was not being considered as part of the discussion.
  • Doing so with some evidence that the other party is unaware or cannot understand

E.g.: After the previous climate change argument, back up your point about climate change not being real by citing various studies that would take hours to fact check and might be out of reach knowledge-wise for either of you.

***

I think there's other things that come into account.

For example, there are some specific fields which are considered more sacrosanct than others; trying to argue against a standard position in such a field as part of your argument seems to much more easily put you into the "smartass" camp.

For example, arguing against commonly held religious or medical knowledge, seems to be almost impossible, unless you are taking an already-approved side of the debate.

E.g. you can argue ibuprofen against paracetamol as the go-to for the common cold, since there are authoritative claims for each; you can't argue for a third, lesser-backed NSAID, or for using corticosteroids or no treatment instead of NSAIDs.

Other fields such as ethics or physics or computer science seem to be fair game and nobody really minds people trying to argue for an unsanctioned viewpoint.

***

There's obviously the idea of politics being overall bad, and the more politicized a certain subject is the less you can change people's minds about it.

But to some extent I don't feel like politics really comes into play.

It seems that people are fairly open to having their minds changed about economic policy but not about identity policy... no matter which side of the spectrum they are on. Which seems counterintuitive, since the issue of "should countries have open borders and free healthcare" seems much more deeply embedded in existing political agendas, and of much more import, than "what gender category should transgender people compete in at the Olympics".

***

One interesting thing that I've observed: I've personally been able to annoy a lot of people when talking with them online. However, IRL, in the last 4 years or so (since I actually began explicitly learning how to communicate), I can't think of a single person that I've offended.

Even though I'm more verbose when I talk. Even though the ideas I talk about over coffee are usually much more niche and questionable in their verity than the ones I write about online.

I wonder if there's some sort of "magic oratory skill" I've come closer to attaining IRL that either can't be attained on the internet or is very different... granted, it's more likely it's the inherent bias of the people I'm usually discussing with.

Comment by george3d6 on The questions one needs not address · 2020-03-23T11:22:21.712Z · score: 1 (1 votes) · LW · GW

Well, not really, since the way they get talked about is essentially searching for a "better" definition or trying to make all definitions coincide.

Even more so, some of the terms allow for definitions, but those definitions themselves run into the same problem. For example, could you try to come up with one or multiple definitions for the meaning of "free will"? In my experience it either leads to very boring ones (in which case the subject would be moot) or, more likely, to a definition that is just as problematic as "free will" itself.

Comment by george3d6 on The questions one needs not address · 2020-03-22T18:59:44.202Z · score: 3 (1 votes) · LW · GW
We can now say that trying to answer questions like "what is the true nature of god" isn't going to work

I mean, I don't think, and am not arguing, that we can do that. I just think that the question in itself is mistakenly formulated, the same way "How do we handle AI risk ?" is a mistaken formulation (see Jau Molstad's answer to the post, which seems to address this).

All that I am claiming is that certain ill-defined questions on which no progress can be made exist, and that they can, to some extent, be easily spotted because they would make no sense if de-constructed, or if an outside observer were to judge your progress on them.

Celebrating the people who dedicated their lives to building the first steam engine, while mocking people who tried to build perpetual motion machines before conservation of energy was understood, is just pure hindsight

Ahm, I mean, Epicurus and Thales would have had pretty strong intuitions against this, and conservation of energy has been postulated in physics since Isaac Newton and even before him, when the whole thing wasn't even called "physics".

Nor is there a way to "prove" conservation of energy other than purely philosophically, or in an empirical way by saying: "All our formulas make sense if this is a thing, so let's assume the world works this way, and if there is some part of the world that doesn't we'll get to it when we find it".

Also, building a perpetual motion machine (or trying to) is not working on an unanswerable problem/question of the sort I refer to.

As in, working on one will presumably lead you to build better and better engines, and/or see your failure and give up. There is a "failure state", and there's no obvious way of getting into "metaphysics" from trying to research perpetual motion.

Indeed, "Can we build a perpetual motion machine ?" is a question I see as entirely valid, not worth pursuing, but it's at worst harm-neutral and it has proven so in the last 2,000+ years of people trying to answer it.

Comment by george3d6 on George's Shortform · 2020-02-28T13:36:56.910Z · score: 5 (3 votes) · LW · GW

Walking into a new country where people speak very little English reminds me of the dangers of over communication.

Going into a restaurant and saying: "Could I get the Turkish coffee and an omelette with a.... croissant, oh, and a glass of water, no ice and, I know this is a bit weird, but I like cinnamon in my Turkish coffee, could you add a bit of cinnamon to it ? Oh, actually, could you scratch the omelette and do poached eggs instead"

Is a recipe for failure, at best the waiter looks at you confused and you can be ashamed of your poor communication skills and start over.

At worst you're getting an omelette, with a cinnamon bun instead of a croissant, two cups of Turkish coffee, with some additional poached eggs and a room-temperature bottle of water.

Maybe a far-fetched example, but the point is: the more instructions you give, and the more flourishes you put into your request, the higher the likelihood that the core of the request gets lost.

If you can point at the items on the menu and hold a number of fingers in the air to indicate the quantity, that's an ideal way to order.

But it's curious that this sort of over-communication never happens in, say, Japan. In places where people know very little to no English and where they don't mind telling you that what you just said made no sense (or at least they get very visibly embarrassed, more so than their standard over-the-top anxiety, and the fact that it made no sense is instantly obvious to anyone).

It happens in the countries where people kinda-know English and where they consider it rude to admit to not understanding you.

Japanese and Taiwanese clerks, random pedestrians I ask for directions, and servers know about as much English as I know Japanese or Chinese. But we can communicate just fine via grunts, smiles, pointing, shaking of heads and taking out a phone for Google Translate if the interaction is nearing the 30-second mark with no resolution in sight.

The same archetypes in India and Lebanon speak close to fluent English though; give them 6-12 months in the UK or US plus a penchant for learning and they'd be native speakers (I guess it could be argued that many people in India speak 100% perfect English, just their own dialect, but for the purposes of this post I'm referring to English as UK/US city English).

Yet it's always in the second kind of country that I find my over-communicative style fails me. Partially because I'm more inclined to use it, partially because people are less inclined to admit I'm not making any sense.

I'm pretty sure this phenomenon is a very good metaphor for, or instantiation of, a principle that applies in many other situations, especially in expert communication. Or rather, in how expert-layman vs expert-expert vs expert-{almost expert} communication works.

Comment by george3d6 on George's Shortform · 2020-02-28T13:15:47.880Z · score: 1 (1 votes) · LW · GW

This just boils down to "showing off" though. But this makes little sense considering:

a) both genders engage in bad practices. As in, I'd expect to see a lot of men doing CrossFit, but it doesn't make sense when you consider there's a pretty even gender split. "Showing off health" in a way that's harmful to health is not evolutionarily adaptive for women (for whom it arguably pays off to live a long time, evolutionarily speaking). This is backed up by other high-risk behaviors being mainly a men's thing

b) sports are a very bad way to show off, especially the sports that come with a high risk of injury and permanent degradation when practiced in their current extreme form (e.g. weight lifting, climbing, gymnastics, rugby, hockey). The highest-payoff sports I can think of (in terms of social signaling) are football, American football, basketball and baseball... since they are popular, and thus the competition is intense and achieving high rank is rewarding. Other than American football they are all pretty physically safe as far as sports go... when there are risks, they come from other players (e.g. getting a ball to the head), not from over-training or over-performing.

So basically, if it's genetic misfiring then I'd expect to see it misfire almost only in men, and this is untrue.

If it's "rational" behavior (as in, rational from the perspective of our primate ancestor) then I'd expect to see the more dangerous forms of showing off bring the most social gains rather than vice-versa.

Granted, I do think the handicap principle can be partially to blame for "starting" the thing, but I think it continues because of higher-level memes that have little to do with social signaling or genetics.

Comment by george3d6 on George's Shortform · 2020-02-22T22:37:51.836Z · score: 1 (1 votes) · LW · GW

Should discomfort be a requirement for important experiences ?

A while ago I was discussing with a friend, lamenting the fact that there doesn't exist some sort of sublingual DMT with an absorption profile similar to smoking DMT, but without the rancid taste.

(Side note, there are some ways to get sublingual DMT: https://www.dmt-nexus.me/forum/default.aspx?g=posts&t=10240 , but you probably won't find it for sale at your local drug dealer and effects will differ a lot from smoking. In most experiences I've read about I'm not even convinced that the people are experiencing sublingual absorption rather than just slowly swallowing DMT with MAOIs and seeing the effects that way)

My point was something along the lines of:

I wish there was a way to get high on DMT without going through the unpleasant experience of smoking it; I'm pretty sure that experience serves to "prime" your mind to some extent and leads to a worse trip.

My friend's point was:

We are talking about one of the most reality-shattering experiences ever possible to a human brain that doesn't involve death or permanent damage, surely having a small cost of entry for that in terms of the unpleasant taste is actually a desirable side-effect.

I kind of ended up agreeing with my friend, and I think most people would find that viewpoint appealing.

But

You could make the same argument for something like knee surgery (or any life-changing surgery, which is most of them).

You are electing to do something that will alter your life forever and will result in you experiencing severe side-effects for years to come... but the step between "decide to do it" and "endure major consequences" has zero discomfort associated with it.

That's not to say knee surgery is bad; much like with a DMT trip, I have a strong prior of it being good for people (well, in this case assuming a doctor recommends you do it).

But I do find it a bit strange that this is the case with most surgery, even if it's life altering, when I think of it in light of the DMT example.

But

If you've visited South Korea and seen the progressive nose mutilation going on in their society (I'm pretty sure this has a fancier name... some term they use in the study of super-stimuli, the seagulls-sitting-on-gigantic-painted-balls kind of thing), I'm pretty sure the surgery example can become blurrier.

As in, I think it's pretty easy to argue people are doing a lot of unnecessary plastic surgery, and I'm pretty sure some cost of entry (e.g. you must feel mild discomfort for 3 hours to get this done... equivalent to, say, getting a tattoo on your arm) would reduce that number a lot, and intuitively that seems like a good thing.

It's not like you could do that though; as in, in practice you can't really do "anesthesia with a controlled pain level", it's either zero or operating within a huge error range (see people's subjective reports of pain after dental anesthesia with similar quantities of lidocaine).

Comment by george3d6 on George's Shortform · 2020-02-22T21:09:10.688Z · score: 1 (1 votes) · LW · GW

Hmh, I actually did not think of that one all-important bit. Yeap, what I described as a "meta model for Dave's mind" is indeed a "meta model for human minds" or at least a "meta model for American minds" in which I plugged in some Dave-specific observations.

I'll have to re-work this at some point with this in mind, unless there's already something much better on the subject out there.

But again, I'll excuse this with having been so tired when I wrote this that I didn't even remember I did until your comment reminded me about it.

Comment by george3d6 on George's Shortform · 2020-02-21T02:13:54.516Z · score: 5 (3 votes) · LW · GW

90% certainty that this is bs because I'm waiting for a flight and I'm sleep deprived, but:

For most people there's not a very clear way or incentive to have a meta model of themselves in a certain situation.

By meta model, I mean one that is modeling "high level generators of action".

So, say that I know Dave:

  • Likes peanut-butter-jelly on thin cracker
  • Dislikes peanut-butter-jelly in sandwiches
  • Likes butter fingers candy

A completely non-meta model of Dave would be:

  • If I give Dave a butter fingers candy box as a gift, he will enjoy it

Another non-meta model of Dave would be:

  • If I give Dave a box of Reese's as a gift, he will enjoy it, since I think they are kind of a combination between peanut-butter-jelly and butter fingers

A meta model of Dave would be:

  • Based on the 3 items above, I can deduce Dave likes things which are sweet, fatty, smooth with a touch of bitterness (let's assume peanut butter has some bitterness to it) and crunchy, but he doesn't like them being too starchy (hence why he dislikes sandwiches).
  • So, if I give Dave a cup of sweet milk ice cream with bits of crunchy dark chocolate on top as a gift, he will love it.

Now, I'm not saying this meta-model is a good one (and Dave is imaginary, so we'll never know). But my point is, it seems highly useful for us to have very good meta-models of other people, since that's how we can predict their actions in extreme situations, surprise them, impress them, make them laugh... etc

On the other hand, we don't need to construct meta-models of ourselves, because we can just query our "high level generators of action" directly, we can think "Does a cup of milk ice cream with crunchy dark chocolate on top sound tasty ?" and our high level generators of action will strive to give us an estimate which will usually seem "good enough to us".

So in some way, it's easier for us to get meta models of other people, out of simple necessity, and we might have better meta models of other people than we have of ourselves... not because we couldn't construct a better one, but because there's no need for it. Or at least, based on the fallacy of knowing your own mind, there's no need for it.
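
To make the "meta model" idea above concrete, here's a minimal sketch in Python of the kind of inference I'm describing: deduce feature-level preferences (the "high level generators") from a few item-level observations, then use them to score a new item. The items, feature names and scoring rule are all made-up illustrations, not a claim about how minds actually do this.

```python
# Toy sketch of a "meta model": infer feature-level preferences from a few
# item-level observations, then score a previously unseen item.
# All items, features and weights are hypothetical illustrations.

ITEMS = {
    "pbj_on_cracker": {"sweet": 1, "fatty": 1, "crunchy": 1, "starchy": 0},
    "pbj_sandwich":   {"sweet": 1, "fatty": 1, "crunchy": 0, "starchy": 1},
    "butter_fingers": {"sweet": 1, "fatty": 1, "crunchy": 1, "starchy": 0},
}
LIKES = {"pbj_on_cracker": 1, "pbj_sandwich": -1, "butter_fingers": 1}  # +1 liked, -1 disliked


def infer_feature_weights(items, likes):
    """Sum each feature's presence, signed by whether the item was liked."""
    weights = {}
    for name, features in items.items():
        for feature, present in features.items():
            weights[feature] = weights.get(feature, 0) + likes[name] * present
    return weights


def score(item, weights):
    """Predict how much a new item would be liked under the inferred weights."""
    return sum(weights.get(feature, 0) * value for feature, value in item.items())


weights = infer_feature_weights(ITEMS, LIKES)
ice_cream = {"sweet": 1, "fatty": 1, "crunchy": 1, "starchy": 0}  # hypothetical new gift
print(weights)                    # {'sweet': 1, 'fatty': 1, 'crunchy': 2, 'starchy': -1}
print(score(ice_cream, weights))  # positive -> predict Dave would enjoy it
```

On this toy version, the inferred weights favor sweet, fatty, crunchy things and penalize starchy ones, so the hypothetical ice cream gets a positive score — the same prediction as in the example above.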

Comment by george3d6 on George's Shortform · 2020-02-20T02:19:19.057Z · score: 7 (4 votes) · LW · GW

Physical performance is one thing that isn't really "needed" in any sense of the word for most people.

For most people, the need for physical activity seems to boil down to the fact that you just feel better, live longer and overall get less health related issues if you do it.

But on the whole, I've seen very little proof that excelling in physical activity can help you with anything (other than being a professional athlete or trainer, that is). Indeed, the whole relation to mortality basically breaks down if you look at top performers. Going from things like strongman competitions and American football, where life expectancy is lower, to things like running and cycling, where some would argue it is but the evidence is lacking, to football and tennis, where it's a bit above average.

If the subject interests you, I've personally looked into it a lot, and I think this is the definitive review: https://yorkspace.library.yorku.ca/xmlui/bitstream/handle/10315/32723/Lemez_Srdjan_2016_PhD.pdf

But it's basically a bloody book, I personally haven't read all of it, but I often go back to it for references.

Also, there's the much more obvious problem with pushing yourself to the limits: injury. I think this is hard to quantify and there are few studies looking at it. In my experience I know a surprising number of "active" people that got injured in life-altering ways from things like skating, skiing, snowboarding and even football (not in the paraplegic sense, more in the "I have a bar of titanium going through my spine and I can't lift more than 15kg safely" sort of way). Conversely, 100% of my couch-dwelling buddies in average physical shape don't seem to suffer from any chronic pain.

To some extent, this annoys me, though I wonder if poor studies and anecdotal evidence is enough to warrant that annoyance.

For example, I frequent a climbing gym. Now, if you look at climbing, it's relatively safe; there are two things people complain about most: sciatica and "climber's back" (basically a very weird-looking but not that harmful form of kyphosis).

I honestly found the idea rather weird... since one of the main reasons I climb (besides the fact that it's fun) is that it helps and helped me correct my kyphosis and basically got rid of any back/neck discomfort I felt from sitting too much at a computer.

I think this boils down to how people climb, especially how they do bouldering.

A reference for what the extreme kind of bouldering looks like: https://www.youtube.com/watch?v=7brSdnHWBko

The two issues I see here are:

  • Hurling limbs at tremendous speeds to try and grab onto something tiny.
  • Falling on the mat, often and from large heights. Climbing goes two ways, up and down; most people doing bouldering only care about up.

Indeed, a typical bouldering run might look something like: "Climb carefully and skillfully as much as possible, hurl yourself with the last bit of effort you have hoping you reach the top, fall on the mat, rinse and repeat".

This is probably one of the stupidest things I've seen from a health perspective. You're essentially praying for joint damage, dislocating a shoulder/knee, tearing a muscle (doesn't look pretty, I assume doesn't feel nice, recovery times are long and sometimes fully recovering is a matter of years) and spine damage (orthopedists don't agree on much, but I think all would agree the worst thing you can do for your spine is fall from a considerable height... repeatedly, like, dozens of times every day).

But the thing is, you can pretty much do bouldering without this; as in, you can be "decent" at it without doing any of this. Personally I approach bouldering as slowly and steadily climbing... to the top, with enough energy to also climb down, plus climbing down whenever I feel that I'm too exhausted to continue. Somehow, this approach to the sport is the one that gets you strange looks. The people pushing themselves above their limits, risking injury and getting persistent spine damage from falling... are the standard.

Another thing I enjoy is weight lifting; I especially enjoy weighted squats. Weighted squats are fun, they wake you up in the morning, and they are a lazy person's exercise for when you've got nothing else going on during the day.

I've heard people claim you can get lower back pain and injury from weighted squats; again, this seems confusing to me. I actually used to have minor lower back pain on occasion (again, from sitting), and the one exercise that seems to have permanently fixed that is the squat. A squat is what I do when I feel that my back is a bit stiff and I need some help.

But I think, again, this is because I am "getting squats wrong", my approach to a squat is "Let me load a 5kg ergonomic bar with 25kg, do a squat like 8 times, check my posture on the last 2, if I'm able to hold it and don't feel tired, do 5-10 more, if I still feel nice and energetic after a 1 minute break, rinse and repeat".

But the correct squat, I believe, looks something like this: https://www.youtube.com/watch?v=nLVJTBZtiuw

Loading a bar with a few hundred kg, at least 2.5x your body weight, putting on a belt so that your intestines don't fall out and lowering it "ONCE", because fuck me you're not going to be able to do that twice in a day. You should at least get some nosebleed every 2 or 3 tries if you're doing this stuff correctly.

I've seen this in gyms, I've seen this in what people recommend, if I google "how much weight should I squat", the first thing I get is: https://www.livestrong.com/article/286849-normal-squat-weight/

If you weigh 165 pounds and have one of the following fitness levels, the standard for your squat one-rep max is:
  • Untrained: 110 pounds
  • Novice: 205 pounds
  • ... etc

To say this seems insane is an understatement; basically the advice around the internet seems to be "If you've never done this before, aim for 40-60kg, if you've been to the gym a few times, go for 100+".

Again, it's hard to find data on this, but as someone that's pretty bloody tall and has been training with weights for years, the idea of starting with 50kg for a squat as an average person seems insane. I do 45kg from time to time to change things up; I'd never squat anything over 70kg even if you paid me... I can feel my body during the move, I can feel the tentative pressure on my lower back if my posture slips for a bit... that's fine if you're lifting 30kg, it seems dangerous as heck if you're lifting more than your body weight, and it even feels dangerous at 60kg.

But again, I'm not doing squats correctly, I am in the wrong here as far as people doing weight training are concerned.

I'm also wrong when it comes to every sport. I'm a bad runner because I give up once my lungs have been burning for 5 minutes straight. I'm a horrible swimmer because I alternate styles and stick with low-speed ones that are overall better for toning all muscles and have less risk of injury... etc

Granted, I don't think that people are too pushy about going to extremes. The few times people tell me some version of "try harder", phrased as friendly encouragement, I finish what I'm doing, say thanks and lie to them that I have a slight injury and I'd rather not push it.

But deep inside I have a very strong suspicion that I'm not wrong on this thing. That somehow we've got ourselves into a very unhealthy memetic loop around sports, where pushing yourself is seen as the natural thing to do, as the thing you should be doing every day.

A very dangerous memetic loop, dangerous to some extent in that it causes injury, but much more dangerous because it might be discouraging people from sports. Both in that they try once, get an injury and quit. Or in that they see it, they think it's too hard (and, I think it is, the way most people do it) and they never really bother.

I'm honestly not sure why it might have started...

The obvious reason is that it physically feels good to do it; lifting a lot or running more than your body tells you that you should is "nice". But it's nice in the same way that smoking a tiny bit of heroin before going about your day is nice (as in, quite literally, it seems to me the feelings are related and I think there's some pharmacological evidence to back that up). It's nice to do it once to see how it is, maybe I'll do it every few months if I get the occasion and I feel I need a mental boost... but I wouldn't necessarily advise it or structure my life around it.

The other obvious reason is that it's a status thing, the whole "I can do this thing better than you, thus my rank in the hierarchy is higher". But then... why is it so common with both genders? I'd see some reason for men to do this, because historically we've been doing it, but women competing in sports is a recent thing, hardly "built into our nature", and most of the ones I know that practice things like climbing are among the most chilled-out dudes I've ever met.

The last reason might be that it's about breaking a psychological barrier, the "oh, I totally thought I couldn't do that, but apparently I can". But it seems to me like a very, very bad way of doing that. I can think of many other, safer, better ways, from solving a hard calculus problem to learning a foreign language in a month to forcing yourself to write an article every day... you know, things that have zero risk of paralysis and long-term damage involved.

But I think at this point imitation alone is enough to keep it going.

The "real" reason if I take the outside view is probably that that's how sports are supposed to be done and I just got stuck with a weird perspective because "I play things safe".

Comment by george3d6 on Does donating to EA make sense in light of the mere addition paradox ? · 2020-02-19T23:23:51.197Z · score: 1 (1 votes) · LW · GW
To the extent that you're pursuing topics that EA organizations are also pursuing, you should probably donate to their recommended charities rather than trying to do it yourself or going through less-measured charities.

Well yes, this is basically the crux of my question.

As in, I obviously agree with the E, and I tend to agree with the A, but my issue is with how A seems to be defined in EA (as in, mainly around improving the lives of people that you will never interact with or 'care' about on a personal level).

So I agree with: I should donate to some of my favorite writers/video-makers that are less popular and thus might be kept in business by $20 monthly on Patreon if another hundred people think like me (efficient as opposed to, say, donating to an org that helps all artists, or donating to well-off creators).

I also agree with: It's efficient to save a life halfway across the globe for x,000$ as opposed to one in the EU where it would cost x00,000$ to achieve a similar addition in healthy life years.

Where I don't understand how the intuition really works is "Why is it better to save the life of a person you will never know/meet than to help 20 artists that you love" (or some such equivalence).

As in, I get there's some intuition about it being "better", and I agree that might be strong enough in some people that it's just "obvious", but my thinking was that there might be some sort of better ethics-rooted argument for it.

Comment by george3d6 on Does donating to EA make sense in light of the mere addition paradox ? · 2020-02-19T18:58:20.530Z · score: 4 (2 votes) · LW · GW

No worries, I wasn't assuming you were a speaker for the EA community here, I just wanted to better understand possible motivations for donating to EA given my current perspective on ethics. I think the answer you gave outlines one such line of reasoning quite well.

Comment by george3d6 on Does donating to EA make sense in light of the mere addition paradox ? · 2020-02-19T17:00:44.172Z · score: 3 (1 votes) · LW · GW
Utilitarianism is not the only system that becomes problematic if you try to formalize it enough; the problem is that there is no comprehensive moral system that wouldn't either run into paradoxical answers, or be so vague that you'd need to fill in the missing gaps with intuition anyway.

Agree, I wasn't trying to imply otherwise

Any decision that you make, ultimately comes down to your intuition (that is: decision-weighting systems that make use of information in your consciousness but which are not themselves consciously accessible) favoring one decision or the other. You can try to formulate explicit principles (such as utilitarianism) which explain the principles behind those intuitions, but those explicit principles are always going to only capture a part of the story, because the full decision criteria are too complex to describe.

Also agree; as in, this is how I usually formulate my moral decisions, and it's basically a pragmatic view on ethics, which is one I generally agree with.

is just "the kinds where donating to EA charities makes more intuitive sense than not donating"; often people describe these kinds of moral intuitions as "utilitarian", but few people would actually endorse all of the conclusions of purely utilitarian reasoning.

So basically, the idea here is that it actually makes intuitive moral sense for most EA donors to donate to EA causes ? As in, it might be that they partially justify it with one moral system or another, but at the end of the day it seems "intuitively right" to them to do so.

Comment by george3d6 on How to actually switch to an artificial body – Gradual remapping · 2020-02-19T12:56:54.469Z · score: 3 (3 votes) · LW · GW
This fear of continuity breaks is also why I would probably stay clear of any teleporters and the like in the future.

In case you haven't read it: https://existentialcomics.com/comic/1

But overall I agree, this "feeling" is partially the reason why I'm a fan of the insert-slightly-invasive-mechanical-components + outsource-to-external-device strategy. As in, I do believe it's the most practical, since it seems to be roughly doable with non-singularity levels of technology, but it's also the one where no continuity errors can easily happen.

Comment by george3d6 on Why Science is slowing down, Universities and Maslow's hierarchy of needs · 2020-02-18T18:44:24.361Z · score: 3 (1 votes) · LW · GW

What exactly do you mean by "the factors I listed" though ?

As in, I think that my basic argument goes:

"There's reason to think most kids would feel unsafe in a college environment, desire a social circle and job security, not the kind of transcendent self-actualization style goals that fuel research". I think this generally holds for anyone at the age of 18-22 outside of outlier, hence why I cited the pyramid of needs, because the research behind that basically points to us needing different things in an age-correlated way (few teenagers feel like they need self actualization). I think this is somewhat exaggerated in the US because of debt&distance but should be noticeable everywhere.

Next, there's reason to believe research inside universities is slowing down in certain areas. I have no reason to believe the lack of people desiring self-actualization is the cause of this though, except for a gut feeling that self-actualization is a better motivation to research nature than, say, wanting your paycheck at the end of the day. Most famous researchers seem to have been slightly crazy and driven not by societal goals but rather by an inner wish to "set things right" in one way or another, or to leave a mark on the world.

So basically, the best I can do to "prove" any of this would be something like:

  • Take some sort of comparative research output metric; these are hard to find, and are going to be very confounded with country wealth (some examples: https://www.natureindex.com/country-outputs/generate/Nature%20&%20Science/global/All/score) ... "small socialist countries" produce a surprising amount of research per capita, but maybe that's something inherent to being a small rich country, not to having stronger communities and social support.
  • See if this correlates with % of the population working, quality of social security, some index measuring security, and some index measuring happiness. Assume more research will come out of countries that perform well on these.

This will generally be true in terms of research, publications, books... etc (see Switzerland, the Netherlands, Sweden, Norway, Iceland... which seem to have a disproportionate number of e.g. Nature publications relative to their population), but you will also get outliers (see Israel, which produced a lot of research even dozens of years back, when professors & students would be called up on a yearly basis to fight to the death against an outnumbering enemy that wanted to murder them).

However, you can't really draw conclusions from numbers of publications, and things such as a "security index", a "happiness index" and even "quality of social security" are very hard to measure. Plus, they are confounded by the wealth of the country.

On the other hand, there's good data on the idea that research is slowing down overall; that is much easier to place on "universities as a whole", since by all metrics it seems that research is heavily correlated with academia (see where most researchers work, where the people that get Nobel prizes work... etc).

So making the general assumption, of "research is slowing down" is much easier than doing the correlation on a per country basis.

If you can claim there is a valid way to measure basic needs that has a per-country statistic, and a valid way to measure "research output" on a per-country basis... then I'd be very curious to see that; I could even run an analysis based on various standard methods to see if there's a correlation.
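
For what it's worth, here's a minimal sketch of the kind of analysis I have in mind, assuming one had a per-country table with a research-output column and a "basic needs" column. The file name, column names, and the choice of GDP per capita as the confounder to control for are all hypothetical placeholders, not a real dataset.

```python
# Sketch of a per-country correlation check between research output and a
# "basic needs" proxy, controlling for wealth. Data file and column names
# are hypothetical placeholders.
import pandas as pd
import statsmodels.api as sm
from scipy.stats import spearmanr

df = pd.read_csv("country_metrics.csv")  # hypothetical: one row per country

# Raw rank correlation between research output per capita and the needs proxy.
rho, p = spearmanr(df["research_output_per_capita"], df["happiness_index"])
print(f"raw Spearman rho={rho:.2f}, p={p:.3f}")

# Wealth is an obvious confounder, so regress it out of both variables
# and correlate the residuals (a crude partial correlation).
wealth = sm.add_constant(df["gdp_per_capita"])
research_resid = sm.OLS(df["research_output_per_capita"], wealth).fit().resid
needs_resid = sm.OLS(df["happiness_index"], wealth).fit().resid
print(spearmanr(research_resid, needs_resid))
```

Even with a check like this, the caveats above stand: the proxies are leaky, and a handful of small rich countries could drive the whole correlation.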

So the generic claim "kids are not researchers and don't want to be researchers, universities can't do multiple things at once better than doing one thing, thus if universities have to take care of kids they will have less time to focus on actual research" is easy to look at holistically, but harder to look at on a per-country basis.

Impossible ? I don't think so

Worthwhile ? I don't know. As in, this whole article is closer to "here's an interesting perspective, say, one that might warrant thinking about, when doing research" rather than "here's a factual claim about how stuff works". To make it any better, it would have to be elevated to a factual claim, but then I would basically have to trust the kind of analysis mentioned above (which again, I think would be impossible to run and get significant results since all the metrics I can think of are very leaky).

Honestly, that might have been a better perspective from which to approach this topic; I might even try to see if there's relevant data on the subject and update the article if there is. Barring that, I literally don't see how this sort of "hunch + basic evidence about generic human psychology + observing a trend" opinion piece differs from anything here. Maybe I've been misjudging the epistemic strength of the claims made in articles around here... in which case, ahm... "sorry ?", but also, I don't really see your argument here.

Yes, assuming magical data fell out of the sky or our time to gather data was infinite every single piece of human thought could be improved, but I'm not sure why the stopping condition for this article would be "analysis comparing countries"... as opposed to any other random goalpost.

Comment by george3d6 on Why Science is slowing down, Universities and Maslow's hierarchy of needs · 2020-02-18T14:12:51.066Z · score: 3 (1 votes) · LW · GW
To the extend that you are interested in knowing whether your thesis is true, it would make sense to check.

How would I specifically go about checking this though ? As in, I do have data and knowledge on US and UK universities; I don't have data on German universities.

If you have data on German university research output, then I think it's worth looking at, if not, I feel like what you're basically doing is saying: "Hey, you don't have data on this specific thing, it might go either way, your hypothesis is null and void".

Provided data on German universities existed, why not ask for data about every single country with universities.

You could argue "Well, you should become an expert in the field and have all possible data handy before making any claims", but then that claim would invalidate literally every single original thought on LessWrong that uses facts and even most academic papers.

Also, German Universities constitute a pretty bad example in my opinion, as in:

a) Murdering, exiling or driving out your highest-IQ demographic and most public intellectuals

b) Having the rest taken away by the US, Russia and UK

c) Living for dozens of years in a country that's been morally, geographically and culturally divided and ravaged by WW2 (plus 1/3 of it living under a brutal~ish communist dictatorship)

Would make for a pretty weird outlier in all of this no matter what.

As in, if we were to compare other rich academic systems I'd rather do Japan, Italy, France, Spain or Switzerland

Comment by george3d6 on Why Science is slowing down, Universities and Maslow's hierarchy of needs · 2020-02-18T13:34:46.008Z · score: 3 (1 votes) · LW · GW
It seems that your comment tries to take it apart by looking at whether you like the way the system is designed and not by looking at effects of it. That means instead of trying to see whether what you are seeing is true, you expand on your ideas of how things should be.

What exactly should my reply contain ?

As in, my argument in the original post is basically:

a) Universities evolved to cater to and provide for primary needs (safety and a social circle) instead of the more niche need for self-actualization

b) Research is slowing down overall; it could partially be because universities no longer focus on self-actualization and instead focus on providing safety and a social circle.

What I was basically saying is that I'm not sure if (a) applies to German universities; as in, I agree that they are probably less incentivized to focus on providing safety and a social circle.

I have no idea if (b) applies or not; as in, I'm not sure how well German universities have been doing, and it's hard to measure their progress since the 30s and 40s obviously had a pretty huge negative effect on the whole higher-education system.

I do overall think the example of German universities specifically (and Austrian ones, to some extent) is a good counter to my ideas here, because there are so many of them and many are specifically vocation-focused, giving a place to go to people who just want security rather than a place in academia.

But also, my knowledge of the German education system is so poor overall, that I can't really make very specific claims here.