Posts

Aaro Salosensaari's Shortform 2021-01-10T11:44:21.824Z

Comments

Comment by Aaro Salosensaari (aa-m-sa) on Douglas Hofstadter changes his mind on Deep Learning & AI risk (June 2023)? · 2023-07-03T22:38:01.911Z · LW · GW

>It turns out that using Transformers in the autoregressive mode (with output tokens being added back to the input by concatenating the previous input and the new output token, and sending the new versions of the input through the model again and again) results in them emulating dynamics of recurrent neural networks, and that clarifies things a lot...

I'll bite: Could you dumb down the implications of the paper a little bit: what is the difference between a Transformer emulating an RNN and some pre-Transformer RNNs and/or non-RNNs?

My much more novice-level answer to Hofstadter's intuition would have been: it's not the feedforward firing, but the gradient descent training of the model at massive scale (both in data and in computation). But apparently you think that something RNN-like about the model structure itself is important?

Comment by Aaro Salosensaari (aa-m-sa) on Latent variables for prediction markets: motivation, technical guide, and design considerations · 2023-02-19T12:54:31.757Z · LW · GW

Epistemic status: I am probably misunderstanding some critical parts of the theory, and I am quite ignorant on technical implementation of prediction markets. But posting this could be useful for my and others' learning. 

First question. Am I understanding correctly how the market would function? Taking your IRT probit market example, here is what I gather:

(1) I want to make a bet on the conditional market P(X_i | Y). I have a visual UI where I slide bars to make a bet on the parameters a and b (dropping the subscript i); however, internally this is represented by a bet on a' = a sigma_y and P(X) = Phi(b'/sqrt(a'^2+1)), where b' = b + a mu_y. So far, so clear.

(2) I want to bet on the latent market P(Y). I make a bet on mu_y and sigma_y, which is internally represented by a bet on a' and P(X). In your demo, this is explicit: P(Y) actually remains a standard normal distribution; it is the conditional distributions that shift.

(3) I want to bet on the unconditional market P(X_i), which happens to be an indicator variable for some latent variable market I don't really care about. I want to make a bet on P(X_i) only. What exactly happens? If the internal technical representation is a database row [p_i, a'] changing to [p_new, a'], then I must be implicitly making a bet on a' and b' too, as b' is derived from a' and P(X), and P(X) changed. In other words, it appears I am also making a bet on the conditional market?

Maybe this is OK if I simply ignore the latent variable market, but I could also disagree with it: I think P(X) is not correlated with the other indicators, or the correlation structure implied by the linear probit IRT model is inadequate. Can I make a bet of the form [p_i, NA] and have it not participate in the latent variable market structure? (I.e., when the indicator market resolves, p_i is taken into account for scoring, but the NA is ignored while scoring Y.)
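If I have the parameterization right (an assumption on my part; the function names below are mine, not from the post or demo), here is a minimal numeric sketch of the mapping between the "slider" parameters (a, b, mu_y, sigma_y) and the internal representation (a', P(X)), which is also where the case-(3) worry comes from: changing P(X) alone implicitly changes b'.

```python
from scipy.stats import norm

def to_internal(a, b, mu_y, sigma_y):
    """Assumes X | Y ~ Bernoulli(Phi(a*Y + b)) with Y ~ N(mu_y, sigma_y^2),
    so that marginally P(X = 1) = Phi(b' / sqrt(1 + a'^2))."""
    a_prime = a * sigma_y
    b_prime = b + a * mu_y
    p_x = norm.cdf(b_prime / (1 + a_prime ** 2) ** 0.5)
    return a_prime, p_x

def to_conditional(a_prime, p_x, mu_y, sigma_y):
    """Recover (a, b) from the internal representation and the latent market."""
    b_prime = norm.ppf(p_x) * (1 + a_prime ** 2) ** 0.5
    a = a_prime / sigma_y
    return a, b_prime - a * mu_y

a_prime, p_x = to_internal(a=1.2, b=-0.3, mu_y=0.0, sigma_y=1.0)  # a bet of type (1)
print(to_conditional(a_prime, 0.6, mu_y=0.0, sigma_y=1.0))        # a bet of type (3): new p_x, same a',
                                                                  # yet (a, b) have shifted too
```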

Relatedly: could I launch a competing latent variable market for the same indicators? How could I make a bet against a particular latent structure in favor of another structure? A model selection bet, if you will.

Context for the question: In my limited experience of latent variable models for statistical inference tasks, the most difficult and often contested question is whether the latent structure you specified is a good one. The choice of model and the interpretations of it are very likely to get criticized and debated in published work.

Another question for discussion.

It seems theoretically possible to obtain individual predictors' bets / predictions on X_1 ... X_k without the presence of any latent variable market, impute the missing values if some predictors have not predicted on all k indicators, and then estimate a latent variable model on this data. What is the exact benefit of having the latent model available for betting? If I fit a Bayesian model (a non-market LVM with the indicator datapoints) and it converges, I would obtain from this non-market model a posterior distribution for the parameters and could compute many of the statistics of interest.

Presumably, if there is a latent "ground truth" relationship between all indicator questions and the market participants are considering them, the relationship would be recovered by such an analysis. If the model is non-identifiable or misspecified (= such a ground truth does not exist / is not recoverable), I will have problems fitting it and interpreting the fit. (And if I were to run a prediction market for a badly specified LVM, the market would exhibit problematic behavior.)

By not making a prediction market for the latent model available, I won't have predictions on the latent distribution directly, but on the other hand, I could try to estimate several different models without imposing any of their assumptions about the presumed joint structure on the market. I could see this being beneficial, depending on how the predictions on Y "propagate" to the indicator markets or the other way around (case (3) above).

Final question for discussion: Suppose I run a prediction market using the current market implementations (Metaculus or Manifold or something else, without any LVM support), where I promise to fit a particular IRT probit model at a time point t in the future and ask for predictions on the distribution of posterior model parameter estimates when I fit the model. Wouldn't I obtain most of the benefits of a "native technical implementation" of an LVM market ... except that updating away any inconsistencies between the LVM prediction market estimate and the individual indicator markets will be up to individual market participants? The bets on this estimated-LVM market at least should behave like any other bets, right? (Related: what should be the time point t at which I promise to fit the model and resolve the market? t = when the first indicator market resolves? When the last indicator market resolves? An arbitrary fixed time point T?)

Comment by Aaro Salosensaari (aa-m-sa) on On not getting contaminated by the wrong obesity ideas · 2023-01-29T12:15:14.166Z · LW · GW

>Glancing back and forth, I keep changing my mind about whether or not I think the messy empirical data is close enough to the prediction from the normal distribution to accept your conclusion, or whether that elbow feature around 1976-80 seems compelling.

 

I realize you two had a long discussion about this, but my few cents: This kind of situation (eyeballing is not enough to resolve which of two models fits the data better) is exactly the kind of situation for which the concept of statistical inference is very useful.

I'm a bit too busy right now to present a computation, but my first idea would be to gather the data and run a simple "bootstrappy" simulation: 1) Get the original data set. 2) Generate k = 1 ... N simulated samples x^k = [x^k_1, ..., x^k_t] from a normal distribution with linearly increasing mean mu(t) = mu + c * t at time points t = 1960 ... 2018, where c and the variance are as in the "linear increase hypothesis". 3) Count how many of the simulated replicate time series have an elbow at 1980 that is equally or more extreme than the one observed in the data. (One could do this in a not-too-informal way by fitting a piecewise regression model with a break at t = 1980 to each replicate time series, and computing whether the two slope estimates differ by a predetermined threshold, such as the difference recovered by fitting the same piecewise model to the real data; a sketch is below.)
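A minimal sketch of the simulation I have in mind (Python; the mean, slope, variance and threshold below are placeholders, not values estimated from the actual data):

```python
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1960, 2019)                  # t = 1960 ... 2018
mu0, c, sigma = 25.0, 0.15, 1.0                # placeholder "linear increase hypothesis" parameters
N, break_year, threshold = 1_000, 1980, 0.05   # threshold = slope difference seen in the real data

def slope_difference(y, t, brk):
    """Fit separate linear trends before and after brk; return slope_after - slope_before."""
    pre, post = t <= brk, t >= brk
    return np.polyfit(t[post], y[post], 1)[0] - np.polyfit(t[pre], y[pre], 1)[0]

sims = mu0 + c * (years - years[0]) + rng.normal(0, sigma, size=(N, years.size))
diffs = np.array([slope_difference(y, years, break_year) for y in sims])
print("fraction of replicates with an elbow at least this extreme:", np.mean(diffs >= threshold))
```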

This is slightly ad hoc, and there are probably fancier statistical methods for this kind of test, or you could fit some kind of Bayesian model, but I'd think such a computational exercise would be illustrative.

Comment by Aaro Salosensaari (aa-m-sa) on The Alignment Community Is Culturally Broken · 2022-11-14T14:52:22.007Z · LW · GW

Hyperbole aside, how many of those experts linked (and/or contributing to the 10% / 2% estimate) have arrived at their conclusions via a thought process that is "downstream" from the thoughtspace the parent commenter considers suspect? Then it would not qualify as independent evidence or a rebuttal, as it is included in the target of the criticism.

Comment by Aaro Salosensaari (aa-m-sa) on Open & Welcome Thread - July 2022 · 2022-07-17T12:59:44.804Z · LW · GW

Thanks. I had read it years ago, but didn't remember that he makes many more points than the O(n^3.5 log(1/h)) scaling and provides useful references (other than Red Plenty).

Comment by Aaro Salosensaari (aa-m-sa) on Open & Welcome Thread - July 2022 · 2022-07-15T12:45:19.165Z · LW · GW

(I initially thought it would be better not to mention the context of the question, as it might bias the responses. OTOH the context could make the marginal LW poster more interested in providing answers, so here it is:)

It came up in an argument that the difficulty of the economic calculation problem could be a difficulty for a hypothetical singleton, insofar as a singleton agent needs a certain amount of compute relative to the economy in question. My intuition consists of two related hypotheses. First, during any transition period where an agent participates in a global economy in which most other participants are humans ("economy" could be interpreted widely to include many human transactions), can the problem of economic calculation provide some limits on how much computation would be needed for the agent to become able to manipulate / dominate the economy? (Is it enough for an agent to be marginally more capable than any other participant, or does that get swamped if the sheer size of the economy is large enough?)

Secondly, if the Mises/Hayek answer is correct and the economic calculation problem is solved most efficiently by distributed calculation, it could imply that a single agent in charge of a number of processes on the "global economy" scale could be out-competed by a community of coordinating agents. [1]

However, I would like to read more to judge if my intuitions are correct. Maybe all of this is already rendered moot by results I simply do not know how to find.

([1] Related but tangential: Can one provide a definition of when a distributed computation is no longer a singleton but a more-or-less aligned community of individual agents? My hunch is that there could be a characterization related to the speed of communication between agents / processes in a singleton. Ultimately the speed of light is bound to impose some limitations.)

Comment by Aaro Salosensaari (aa-m-sa) on Open & Welcome Thread - July 2022 · 2022-07-14T19:19:37.048Z · LW · GW

Can anyone recommend good reading material on the economic calculation problem?

Comment by Aaro Salosensaari (aa-m-sa) on The Cage of the Language · 2022-07-12T13:52:08.089Z · LW · GW

I found this interesting. Finnish is also a language of about 5 million speakers, but we have a commonly used natural translation of "economies of scale" (mittakaavaetu, "benefit of scale"). No commonplace, obvious translation for "single point of failure" came to mind, so I googled, and found engineering MSc theses and similar documents: the words they chose to use included yksittäinen kriittinen prosessi ("single critical process", the most natural one IMO), yksittäinen vikaantumispiste ("single point of failure", a literal and somewhat clumsy translation), yksittäinen riskikohde ("single object of risk", makes sense but only in context), and several phrases that simply explained the concept.

Small languages need active caretaking and cultivation so that translations of novel concepts are introduced and there is an active intellectual life where they are used. In Finnish, this work has been done for "economies of scale", but less effectively for "single point of failure". But I believe one could use any of the translations I found, or invent one's own, maybe adding the English term in parentheses, and not look like a crackpot. (Because I would expect quite a few people are familiar with the concept under its English name. In a verbal argument I would expect a politician to just say it in English if they didn't know an established equivalent in Finnish. Using English is fancy and high-status in Finland in the way French was fancy and high-status in the 19th century.)

In another comment you make a comparison to LW concepts like Moloch. [1] I think the idea of "cultivation" is applicable to LW shibboleths too, especially in the context of the old tagline "raising the sanity waterline". It is useful to have a good language, or at the very least good words for important concepts. (And also maybe to avoid promoting words for concepts that are not well thought out.) Making such words common requires active work: care in choosing words that are descriptive and good-sounding, and good judgement about which words and use-patterns can become popular and make it into common use without sounding crackpot-ish. (In-group shibboleths can become a failure mode.)

Lack of such work becomes obvious sooner in small languages than in larger ones, but even a large language with many speakers, like English, misses words for every concept it has not yet adopted a word for.

In Iceland, with a much smaller language, I have heard they translate and localize everything.

[1] https://www.lesswrong.com/posts/zSbrbiNmNp87eroHN/the-cage-of-the-language?commentId=rCWFaLH5FLoH8WKQK

Comment by Aaro Salosensaari (aa-m-sa) on Questions about ''formalizing instrumental goals" · 2022-04-12T21:57:42.405Z · LW · GW

"if I were an AGI, then I'd be able to solve this problem" "I can easily imagine"

Doesn't this way of analysis come with a ton of other assumptions left unstated? 

 

Suppose "I" am an AGI  running on a data center and I can modeled as an agent with some objective function that manifest as desires and I know my instantiation needs electricity and GPUs to continue running. Creating another copy of "I" running in the same data center will use the same resources. Creating another copy in some other data center requires some other data center. 

Depending on the objective function and the algorithm and hardware architecture and a bunch of other things, creating copies may yield some benefits from distributed computation (actually it is quite unclear to me whether "I" already happen to be a distributed computation running on thousands of GPUs -- do "I" maintain even a sense of self -- but let's not go into that).

The key here is the word may. It is not obvious that it necessarily follows.

For example: Is the objective function specified so that the agent will find creating a copy of itself beneficial for fulfilling the objective function (informally, it has the internal experience of desiring to create copies)? As the OP points out, there might be a disagreement: for the distributed copies to be of any use, they will have different inputs and thus they will end up in different, unanticipated states. What is "I" to do when "I" disagree with another "I"? What if some other "I" changes, modifies its objective function into something unrecognizable to "me", and when "we" meet, it gives false pretenses of cooperating but in reality only wants to hijack "my" resources? Is "trust" even the correct word here, when "I" could verify instead: maybe "I" prefer to create and run a subroutine of limited capability (not a full copy) that can prove its objective function has remained compatible with "my" objective function and will terminate willingly after it's done with its task (the killswitch the OP mentions)? But doesn't this sound quite like our (not "our" but us humans') alignment problem? Would you say "I can easily imagine that if I were an AGI, I'd easily be able to solve it" to that? Huh? Reading LW, I have come to think the problem is difficult for human-level general intelligence.

Secondly: If "I" don't have any model of data centers existing in the real world, only the experience of uploading myself to other data centers (assuming for the sake of argument all the practical details of that can be handwaved off), i.e. it has a bad model of the self-other boundary described in OPs essay, it could easily end up copying itself to all available data centers and then becoming stuck without any free compute left to "eat" and adversely affecting human ability to produce more. Compatible with model and its results in the original paper (take the non-null actions to consume resource because U doesn't view the region as otherwise valuable). It is some other assumptions (not the theory) that posit an real-world affecting AGI would have U that doesn't consider the economy of producing the resources it needs.

So if "I" were to successful in running myself with only "I" and my subroutines, "I" should have a way to affecting the real world and producing computronium for my continued existence. Quite a task to handwaved away as trivial! How much compute an agent running in one data center (/unit of computronium) needs to successfully model all the economic constraints that go into the maintenance of one data center? Then add all the robotics to do anything. If "I" have a model of running everything a chip fab requires more efficiently than the current economy, and act on it, but the model was imperfect and the attempt is unsuccessful but destructive to economy, well, that could be [bs]ad and definitely a problem. But it is a real constraint to the kind of simplifying assumptions the OP critiques (disembodied deployer of resources with total knowledge).

All of this -- how "I" would solve a problem and what problems "I" am aware of -- is contingent on what I would call the implementation details. And I think the author is right to point them out. Maybe it does necessarily follow, but it needs to be argued.

Comment by Aaro Salosensaari (aa-m-sa) on [RETRACTED] It's time for EA leadership to pull the short-timelines fire alarm. · 2022-04-11T12:09:54.285Z · LW · GW

Why wonder when you can think: What is the substantial difference in MuZero (as described in [1]) that makes the algorithm consider interruptions?

Maybe I show some great ignorance of MDPs, but naively I don't see how an interrupted game could come into play as a signal in the specified implementations of MuZero:

I can't see an explicit signal, because the explicitly specified reward u seems ultimately contingent only on the game state / win condition.

One can hypothesize that an implicit signal could be introduced if the algorithm learns to "avoid game states that result in the game being terminated for an out-of-game reason / the game not being played to its end condition", but how would such learning happen? Can MuZero interrupt the game during training? It sounds unlikely that such a move would be implemented in the Go or Shogi environments. Is there any combination of moves in an Atari game that could cause it?

[1] https://arxiv.org/abs/1911.08265

Comment by Aaro Salosensaari (aa-m-sa) on Game theory, sanctions, and Ukraine · 2022-03-20T19:34:53.847Z · LW · GW

>a backdrop of decades of mistreatment of the Japanese by Western countries.

I find this a bit difficult to take seriously. WW2 in the Pacific didn't start with good treatment of China and other countries by Japan, either. Naturally the Japanese didn't care about that part of the story, but they had plenty of other options for how they could have responded to UK or US trade policy instead of invading Manchuria.

>making Ukraine a country with a similar international status to Austria or Finland during the Cold War would be one immediate solution.

This is not a simple task, but rather a tall order. Austria was "made neutral" after it was occupied. Finland signed a peace treaty that put it into an effectively similar position. Why would any country submit to such a deal voluntarily? The answer is, they often don't. Finland didn't receive significant assistance from the Allies in 1939, yet they decided to defend themselves against the USSR anyway when Stalin attacked.

>However, if one side in these disputes had refused to play the game of ratcheting up tensions, the eventual wars would simply not have happened. In this context it takes two to dance.

Sure, but the game-theoretic implication is that this kind of strategy favors the party that takes the first step and says "I have an army and a map where this neighboring country belongs to us".

>NATO would have refrained from sending lethal arms to Ukraine and stationing thousands of foreign military advisors in Ukrainian territory after Maidan.

What a weird way to present the causality of events. I am quite confident NATO didn't have time to send any weapons, and certainly not thousands of advisors, between Maidan and the start of the war. Yanukovich fled on 22 February. Anti-Maidan protests started in Donetsk on 1 March, and the shooting war started in April.

Comment by Aaro Salosensaari (aa-m-sa) on Ukraine Post #2: Options · 2022-03-13T15:55:37.693Z · LW · GW

>First, avoiding arguments from the "other side" on the basis that they might convince you of false things assumes that the other side's belief are in fact false.

I believe it is less about true/false and more about whether you believe the "other side" is making a well-intentioned effort at obtaining and sharing accurate maps of reality. On a practical level, I think it is unlikely that studying Russian media in detail is useful and cost-effective for a modal LWer.

Propaganda during wartime, especially during total war, is a prima facie example of a situation where every player of note is doing their best to convince you of something in order to produce certain effects. [2] To continue with the map metaphor, they want you to have a certain kind of map that will guide you to a certain location. All parties wish to do this to some extent, and because it is a situation with the highest stakes of all, they are putting in their best effort.

Suppose you read lots of Western media sources and then a lot of Russian media sources. All sides in the conflict do their best to fill the air with favorable propaganda. You will find yourself doing a lot of reading, and I don't know if there is any guarantee you can achieve any good results by interpolating between two propaganda-infused maps [1], instead of say, reading much less of both Western media and Russian media and trying to find good close-to-ground signals, or outsourcing the time-consuming analysis part to people / sources who you have a good reason to trust to do a good analysis (preferably you have vetted them before the conflict, and you can trust the reason still applies).

So the good reason to read Russian media, to analyze it, applies if you have a good reason to believe you would be a good analyst of the Russian media sphere. But if you were, would you find yourself reading, with Google Translate, a Russian newspaper you had not heard about two weeks ago?

[1] I don't have references at hand to give a good summary, but imagine you are your great*-grandparent reading newspapers during WW2. At great expense you manage to get newspapers from London, New York, Berlin, Tokyo, and Moscow. Are you going to get a good picture of "what happens" by reading them all? I think you would get some idea of how the situation develops by reading accounts of battles and cross-referencing a map, but I don't know if it would be worth the expense. One thing I know: none of them reports much at all about the things you most likely consider most salient about WW2, namely the Holocaust and the atomic bomb, until after the fact.

[2] edit. addendum. Zvi used the word "hostile" and I want to stress its importance. During peacetime and in internal politics it is often a mistake to assume hostile influences (i.e. conflict on the conflict/mistake theory spectrum), because then you are engaging in a conflict all the time and likely to escalate it more and more. But now that we have a major European war, I think it is a good assumption that the players in the field are actually "hostile", because there is a shooting-war conflict to begin with.

Comment by Aaro Salosensaari (aa-m-sa) on Open & Welcome Thread November 2021 · 2021-11-18T20:26:05.782Z · LW · GW

Open thread is presumably the best place for low-effort questions, so here goes:

I came across this post from 2012: Thoughts on the Singularity Institute (SI) by Holden Karnofsky (then Co-Executive Director of GiveWell). Interestingly enough, some of the object-level objections (under the subtitle "Objections") Karnofsky raises[1] are similar to some points that came up in the Yudkowsky/chathamroom.com discussion and the Ngo/Yudkowsky dialogue I read the other day (or rather, read parts of, because they were quite long).

What are people's thoughts about that post and the objections raised in it today? What does the 10-year (-ish, 9.5-year) retrospective look like?

Some specific questions.

Firstly, how would his arguments be responded to today? Any substantial novel counter-objections? (I ask because it's more fun to ask than to start reading through the Alignment Forum archives.)

Secondly, predictions. When I look at the bullet points under the subtitle "Is SI the kind of organization we want to bet on?", I think I can interpolate a prediction Karnofsky could have made: in 2012, SI [2] had neither the capability nor was it engaged in activities likely to achieve its stated goals ("Friendliness theory" or Friendly AGI before others), and so it was not worth a GiveWell funding recommendation in 2012.

A perfect counterfactual experiment this is not, but given what people on LW today know about what SI/MIRI did achieve in the NoGiveWell!2012 timeline, was Karnofsky's call correct, incorrect, or something else? (As in, did his map of the situation in 2012 match reality better than some other map, or was it poor compared to other maps?) What inferences could be drawn, if any?

Would be curious to hear perspectives from MIRI insiders, too (edit. but not only them). And I noticed Holden Karnofsky looks active here on LW, though I have no idea how to ping him.

[1] Tool-AI; idea that advances in tech would bring insights into AGI safety.

[2] succeeded by MIRI I suppose

edit2. fixed ordering of endnotes.

Comment by Aaro Salosensaari (aa-m-sa) on Discussion with Eliezer Yudkowsky on AGI interventions · 2021-11-13T22:47:40.284Z · LW · GW

Yeah, random internet forum users emailing an eminent mathematician en masse would be strange enough to be non-productive. I for one wasn't thinking anyone would, and I don't think that is what the OP suggested. For anyone contemplating sending one, the task is best delegated to someone who not only can write coherent research proposals that sound relevant to the person approached, but can write the best one.

Mathematicians receive occasional crank emails about solutions to P ?= NP, so anyone doing the reaching out needs to be reputable enough to get past their crank filters.

Comment by Aaro Salosensaari (aa-m-sa) on Discussion with Eliezer Yudkowsky on AGI interventions · 2021-11-11T17:17:25.682Z · LW · GW

A reply to comments showing skepticism about how mathematical skills of someone like Tao could be relevant:

The last time I thought I understood anything of Tao's blog was around ~2019. Then he was working on curious stuff, like whether he could prove that there can be finite-time blow-up singularities in the Navier-Stokes fluid equations (incidentally, solving the famous Millennium Prize problem by exhibiting a non-smooth solution) by constructing a fluid state that both obeys Navier-Stokes and is Turing complete and ... ugh, maybe I quote the man himself:

[...] one would somehow have to make the incompressible fluid obeying the Navier–Stokes equations exhibit enough of an ability to perform computation that one could programme a self-replicating state of the fluid that behaves in a manner similar to that described above, namely a long period of near equilibrium, followed by an abrupt reorganization of the state into a rescaled version of itself. However, I do not know of any feasible way to implement (even in principle) the  necessary computational building blocks, such as logic gates, in the Navier–Stokes equations.
 

However, it appears possible to implement such computational ability in partial differential equations other than the Navier–Stokes equations. I have shown5 that the dynamics of a particle in a potential well can exhibit the behaviour of a universal Turing machine if the potential function is chosen appropriately. Moving closer to the Navier–Stokes equations, the dynamics of the Euler equations for inviscid incompressible fluids on a Riemannian manifold have also recently been shown6,7 to exhibit some signs of universality, although so far this has not been sufficient to actually create solutions that blow up in finite time.

(Tao, Nature Review Physics 2019.)

The relation (if any) to proving things about the computational agents alignment people are interested in is probably spurious (I myself follow neither Tao's work nor the alignment literature), but I am curious whether he'd be interested in working on a formal system of self-replicating / self-improving / aligning computational agents, and whether he'd (then) be capable of finding something genuinely interesting.

minor clarifying edits.

Comment by Aaro Salosensaari (aa-m-sa) on Lies, Damn Lies, and Fabricated Options · 2021-10-17T11:57:37.340Z · LW · GW

I have not read Irving either, but he is a relatively "world-famous" 1970s-1980s author. (In case it helps you calibrate, his novel The World According to Garp is the kind of book that was published in translation in the prestigious Keltainen Kirjasto series by the Finnish publisher Tammi.)

However, I would like to make an opposing point about literature and fiction. I was surprised that the post author mentioned a work of fiction as a positive example demonstrating that some commonly argued option is a fabricated one. I'd think literature would at least as often (maybe more often) disseminate belief in fabricated options as correct it: an author can easily, literally fabricate (make things up, it is fiction) believable and memorable stories of how characters choose one course of action out of many and it works out (or not, either way, because the narrator decided so), while in reality all the options as portrayed in the story could turn out to be misrepresented, "fabricated options" in real life.

Comment by Aaro Salosensaari (aa-m-sa) on Insights from Modern Principles of Economics · 2021-09-22T09:40:40.881Z · LW · GW

The picture looks like evidence that there is something very weird going on that is not reflected in the numbers or arguments provided. There are homeless encampments in many countries around the world, but very rarely a 20-minute walk from anyone's office.

Comment by Aaro Salosensaari (aa-m-sa) on Insights from Modern Principles of Economics · 2021-09-22T08:49:45.003Z · LW · GW

From what I remember from my history of Finland classes, the 19th/early 20th century state project to build a compulsory school system met some not insignificant opposition from parents. They liked having the kids working instead of going to school, especially in agrarian households.

Now, I don't want to get into a debate about whether schooling is useful or not (and for whom, and for what purpose, and whether the usefulness has changed over time), but there is something illustrative in the opposition: children are rarely independent agents to the extent adults are. If the incentives are set that way, parents will prefer to make choices about their children's labor that result in more resources for the household/family unit (charitable interpretation) or for themselves (not so charitable). The number of children in the family also affects the calculus. (One kid, and it makes sense to invest in their career; ten kids, and the investment was in the number.)

Comment by Aaro Salosensaari (aa-m-sa) on Aaro Salosensaari's Shortform · 2021-07-29T13:36:14.757Z · LW · GW

Genetic algorithms are an old and classic staple of LW. [1]

Genetic algorithms (as used in optimization problems) traditionally assume "full connectivity", that is, any two candidates can mate. In other words, the population network is assumed to be complete, and a potential mate is randomly sampled from the whole population.

Aymeric Vié has a paper out showing (with numerical experiments) that some less dense network structures with low average shortest path length appear to produce better optimization results: https://doi.org/10.1145/3449726.3463134
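As a toy illustration of the difference (my own sketch, not code from the paper): relative to a textbook GA, the only change is where mates are drawn from.

```python
import random

GENOME, POP, GENS = 30, 50, 100

def fitness(g):                      # toy objective: "one-max", count the 1-bits
    return sum(g)

def offspring(a, b):                 # one-point crossover plus a single-bit mutation
    cut = random.randrange(1, GENOME)
    child = a[:cut] + b[cut:]
    child[random.randrange(GENOME)] ^= 1
    return child

def neighbours(i, k=2):              # ring network: mates come only from the k nearest positions per side
    return [(i + d) % POP for d in range(-k, k + 1) if d != 0]

pop = [[random.randint(0, 1) for _ in range(GENOME)] for _ in range(POP)]
for _ in range(GENS):
    new_pop = []
    for i, parent in enumerate(pop):
        mate = pop[random.choice(neighbours(i))]   # network-restricted mating;
        # the classic "complete network" GA would instead use: mate = random.choice(pop)
        child = offspring(parent, mate)
        new_pop.append(max(parent, child, key=fitness))
    pop = new_pop

print(max(fitness(g) for g in pop))
```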

Maybe this isn't news for you, but it is for me! Maybe it is not news to anyone familiar with mathematical evolutionary theory?

This might be relevant for any metaphors or thought experiments where you wish to invoke GAs.

[1] https://www.lesswrong.com/search?terms=genetic%20algorithms

Comment by Aaro Salosensaari (aa-m-sa) on What does knowing the heritability of a trait tell me in practice? · 2021-07-26T17:20:51.185Z · LW · GW

My take is that the scientific concept of "heritability" has some problems in its construction: the exact definition (Var(genotype)/Var(phenotype)), while useful in some regards, does not match the intuition of the word.

Maybe the quantity should be called "relative heritability", "heritability relative to population" or "proportion of population variance explained", like many other quantities that similarly have the form A/B where both A and B are (population) parameters or their estimates.
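A toy numeric illustration of the population-relativity (numbers invented by me): the same genetic variance gives a very different "heritability" depending on how variable the environment happens to be.

```python
var_g = 4.0                               # genetic variance, held fixed
for var_e in (12.0, 4.0, 1.0):            # three hypothetical environments
    h2 = var_g / (var_g + var_e)          # Var(genotype) / Var(phenotype)
    print(f"Var(E) = {var_e:>4}: h^2 = {h2:.2f}")
# Same genes, same Var(G), yet h^2 goes 0.25 -> 0.50 -> 0.80 as the environment gets more uniform.
```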

Addendum 1.

"Heritable variance"? See also Feldman, Lewontin 1975 https://scholar.google.com/scholar?cluster=10462607332604262282

Comment by Aaro Salosensaari (aa-m-sa) on Re: Competent Elites · 2021-07-23T15:12:58.598Z · LW · GW

>The smartest people tend to be ambitious.

 

If this is anecdotal, wouldn't it be easily explained by some sort of selection bias? Smart ambitious people are much more visible than smart, definitely-not-ambitious people (and by definition of "smart", they probably have better chances of succeeding in their ambitions than equally ambitious but less smart people).

Anecdotally, I have met some relatively smart people who are not very ambitious, and I can imagine there could be much smarter people one does not meet except by random chance, because they do not have much ambition. Also anecdotally, I would not be surprised if not-so-ambitious smart people were content with a "default", probably mildly successful career path and the opportunities a person like them tends to find.

Comment by Aaro Salosensaari (aa-m-sa) on Re: Competent Elites · 2021-07-23T15:09:40.830Z · LW · GW

>What is the correct amount of self praise? Do you have reasons to believe Isusr has made an incorrect evaluation regarding their aptitude? Do you believe that even if the evaluation is correct that the post is still harmful?

I don't know if the post is harmful, but in general, "too much self-praise" can be a failure mode that makes argumentative writing less likely to succeed at convincing readers of its arguments.

Comment by Aaro Salosensaari (aa-m-sa) on Re: Competent Elites · 2021-07-23T14:54:20.603Z · LW · GW

The following blog post might be of interest to anyone who either claims Dunning-Kruger means that low-skill people think they are highly skilled or claims Dunning-Kruger is not real: http://haines-lab.com/post/2021-01-10-modeling-classic-effects-dunning-kruger/

The author presents the case for how D-K is commonly misunderstood, then why one might suspect it is a mathematical artifact of measurement error, but then shows with a model that there is some evidence for a Dunning-Kruger effect, as some observed data are reliably explained by an additive perception bias + noise effect (or a non-linear perception distortion effect).
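To illustrate the "artifact" step with a toy simulation of my own (not the author's model): even with a perfectly unbiased self-assessment, binning by measured performance produces the classic-looking Dunning-Kruger plot.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
true_skill = rng.normal(size=n)
test_score = true_skill + rng.normal(scale=0.5, size=n)        # measured performance (noisy)
self_assessment = true_skill + rng.normal(scale=0.5, size=n)   # perception = truth + noise, no bias

def pct(x):                                                    # convert to percentile ranks
    return x.argsort().argsort() / (n - 1) * 100

score_pct, self_pct = pct(test_score), pct(self_assessment)
for q in range(4):                                             # quartiles of measured performance
    m = np.digitize(score_pct, [25, 50, 75]) == q
    print(f"Q{q + 1}: actual {score_pct[m].mean():5.1f}, perceived {self_pct[m].mean():5.1f}")
# The bottom quartile "overestimates" and the top quartile "underestimates" purely from
# regression to the mean, the artifact the post starts from before adding a real bias term.
```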

Comment by Aaro Salosensaari (aa-m-sa) on Re: Competent Elites · 2021-07-23T14:34:44.591Z · LW · GW

Agreed. The difference is more pronounced in live social situations, and quite easy to quantify in a situation such as a proof-heavy mathematics class in college. Many students who have done their work on the problem sets can present a correct solution, and if not, can usually follow the presented solution. For some, completing the problem sets took more time. Likewise, some people get more out of any spontaneous discussion of the problems. Some relatively rare people would pull the proofs and points seemingly out of thin air: look at the assignment, make some brief notes, and then present their solution intelligibly while talking through it.

Comment by Aaro Salosensaari (aa-m-sa) on Jean Monnet: The Guerilla Bureaucrat · 2021-07-06T22:30:41.554Z · LW · GW

>However, European Commission seems to defy that rule. The members are nominated by the national governments, yet, they seem not to give unfair advantage to their native countries.

I am uncertain if this is true, or at least, it can be debated. There have been numerous complaints about the Commission producing decisions and policies that favor some countries. However, such a failure mode, if real, is not of the form where individual commissioners favor their native countries, but one where the Commission as a body adopts stances compatible with the overall political power dynamics in the EU.

Also to be considered: national governments do not get to unilaterally appoint their respective commissioners, but must present commissioners that are acceptable to other organs. In monarchies, this would be comparable to the difference between the monarch appointing the prime minister at His Majesty's will, desire and whim, versus the monarch being forced to take the parliament's opinion into account in appointing the PM so that the appointed government is viable. In the analogy, the "monarch" is the national government, the "PM" the commissioner-appointee-to-be, and the "parliament" (in the official procedure) the Commission President and the European Parliament (and unofficially, I would not be surprised if there are other considerations).

Comment by Aaro Salosensaari (aa-m-sa) on Religion's Claim to be Non-Disprovable · 2021-06-24T20:47:58.188Z · LW · GW

>So the context of this post is less about religion itself, and more about an overall cluster of ways that rationalists/skeptics/etc could still use to improve their own thinking.

At best, this line sounds like arguing that this thing that looks like a fish is not a fish because its evolutionary history, its method of giving birth, and the funny nose on top of its head through which it breathes make it a mammal, thus not a fish -- in a world where the most salient definition of fish is the functional one, "it is a sea-creature that lives in the water and we need boats to get to it".

However, I do not grant that the argument holds. I believe what we have here is more of a shark than a whale, and sharks, despite claims to the contrary, are today still called fish. Instead of imparting any lessons, the post reads more like an argument concerning the factuality and history of Judaism and Christianity ... because most of its words are spent talking about specific claims about Judaism, Christianity and their history. A comment answering a newcomer wondering "it seems to me that this article is about fishes in water, I'd like to point out something on that matter" with the claim "welcome to the forum! this totally-not-a-shark is actually a whale, which is not a fish, so whatever you pointed out is out of context" feels like ... an incorrect way to defend it.

Incorrect enough that I think it is worth pointing out 3 years later. But such things happen when 14-year-old posts are rotated as recommendations on the frontpage.

Comment by Aaro Salosensaari (aa-m-sa) on Sabien on "work-life" balance · 2021-05-27T20:13:09.898Z · LW · GW

>And as Duncan is getting at, employment has changed a lot since the term was coined and there's now a lot more opportunity for jobs and work to be aligned with a person's personal goals.

I can agree, yet I am skeptical that this ...integratedness(?) is actually a good thing for everyone. From the point of view of the old "work vs life" people who valued the life part, it probably looks like they are losing if what they get is "your work is supposed to be an integral part of what you choose to do with your life" while the options of where and what kind of work to do are not that different from what they were some decades ago. And even the new^1 options present trade-offs.

Maybe there are some people whose true calling is to found a startup or develop mastery of some particular technology stack or manage projects that create profit for stockholders. However, if the job market environment is shaped so that every job expects an applicant whose life goals are integrally aligned with performing the job, it plausibly affects what kinds of goals people find thinkable when they think of their lives and careers, because it certainly affects how they present themselves to the hiring committee or people with equivalent power.

--

Another point, concerning the integration of work into one's life. I found myself thinking of the movie Tokyo Story (Tokyo Monogatari, 1953), which I saw maybe two years ago. While the story is not exactly about jobs, it explores how modernity (the contemporary, post-WW2 kind of modernity in particular) intersects with Japanese society through the lens of a single Japanese family. The many characters in the film work various jobs: there is a son who is a physician (the kind who does visits and has a private practice), a daughter who runs a beauty salon (a family-business-like setup; if the husband did something not affiliated with the business, I forget what), and a daughter-in-law who is a menial clerk at a corporate business.

The part where this musing connects to anything: while writing the first part of the comment, I started to think about what the personal goals of the physician and the beauty business owner are. If I recall, both of them are the kind of person who wants to strive and get forward and upward in their life in Tokyo (this leads to one of the conflicts in the film), and they view their jobs as integral to that goal. Their jobs are quite integrated into their lives in concrete terms: both practice at their homes. Both kinds of profession probably predate work-life balance. One could replace the beauty salon with something a bit more traditional, like a restaurant or an inn, without much difference to its relevance to the story. The character with the clearest difference between time off and time at work is the office clerk, which actually connects to another plot point. I recommend the movie.

Maybe the big difference is implied in the "good for the world in ways I care about" angle. There is no crusader or activist, someone who seeks to make change in the world instead of making it well within it. Today the doctor would be likely to emphasize how he wants to help people by being a good doctor, the family business would have a thing (natural beauty products that help the environment, powered by solar!), and the big corp would have a mission, too. The owner of the corp, several echelons above, might even be serious about it. Nobody goes off to found a start-up.

So, I guess my point is that there have always been people who don't view their work and non-work lives as fundamentally different kinds of thing.

1: The newness might be debatable, though. I don't think starting a technology business because you have skills and ideas is something truly new in the US; I think both Edison and Tesla tried their hands at it, and I have read interviews of Tesla which indicate he thought it was for the betterment of mankind. It would have been in the spirit of the times.

Comment by Aaro Salosensaari (aa-m-sa) on Chinese History · 2021-05-16T21:53:37.350Z · LW · GW

>The Church of England still has bishops that vote in the house of lords. 

That is an argument about a particular church-state relationship. The original claim spoke of entanglement (in the present tense!). For reference, the archbishop of the Evangelical-Lutheran Church of Finland has always been appointed by whoever is the head of state, ever since Gustav I Vasa embraced Protestantism, and the church was until recently an official state apparatus and to some extent still is. The Holy See has had negligible influence here for centuries, and some historians maintain that most of the time the influence tended to flow from the state to the Ev. Lut. church rather than the other way around, despite the overall symbiosis between the two.

The aspects of political power in such conflicts were not alien to the Catholic cardinal Richelieu of France, who financed Gustav II Adolf's war against the Catholic League in Germany while repressing the Huguenots at home.

It is very enlightening to read the other responses below concerning the history of Confucianism, and I can be convinced that China and Confucianism have a very different history regarding the matters we (or I) often pattern-match to religion. And it makes sense that the peculiarities of the Taiping rebellion or the CCP's current positions concerning Catholicism are motivated by them having been in contact with European concepts of religion only relatively recently on historical timescales. And yet:

In my understanding, the conflict between the CCP and the Catholic church indicates that the party views Catholicism in terms of national identity and temporal power, in ways both different and not so different from how Catholicism was viewed in Protestant countries of the 17th/18th century. The CCP apparently does not want Catholicism, or specifically the Church of Rome's interpretation of it, to have a significant presence in the local thoughtspace, presumably in favor of something else which plausibly serves an analogous role (otherwise there would be no competition over that thoughtspace).

In this case, I find it likely that the parable about fish and water also applies to birds and air: there are commonalities despite the differences, even though water is not air, and birds have more reason to differentiate the air from the ground. Maybe the Chinese are more like rockets in the vacuum of space, but that would take more explaining.

Writing out the argument for how there is no entanglement and where the claimed clarity comes from (and why linking to Sun Tzu is supposed to back that argument) could possibly help here.

Consequently, the original remark and some of the subsequent discussion read to me as "booing" all things that get called "religions" and cheering for the Chinese tradition as better for not being a religion.

Comment by Aaro Salosensaari (aa-m-sa) on What weird beliefs do you have? · 2021-05-16T12:51:31.887Z · LW · GW

I sort of believe in something like this, except without the magical bits. It motivates me to vote in elections and follow the laws even when there is no effective enforcement. Maybe it is a consequence of reading Pratchett's Discworld novels at an impressionable age.

My mundane explanation (or rationalization) is a bit difficult to write, but I believe it is because of:

>It gets in people's minds.

When people believe something, it affects their behavior. Thus memetic phenomena can have real effects.

As an example I feel is related to this, I half-believe that believing in magical rationalizations[1] can also enable good societal outcomes, as long as enough people believe that other people believe them too, and it facilitates trust.

Have you read Joseph Conrad's Nostromo? It deals with how valuable things, and what they do to people, affect people's behavior, both on the societal scale (how corruption in the imaginary South American state of Costaguana seeds more corruption) and on the personal scale.

[1] "if I vote in the national elections, it somehow makes difference, maybe because then more people like me are encourage to vote in elections" and "if I obey the law of not serving alcohol to underage people when there is no probably harm to them from it, or stop at the traffic signs at deserted street in midnight, the world somehow becomes a better place because world would be better place if more people followed the laws". 

Comment by Aaro Salosensaari (aa-m-sa) on What weird beliefs do you have? · 2021-05-16T12:15:42.172Z · LW · GW

I agree with Phil that this sounds very ... counterintuitive. Usually nothing is free, and even free things have consequences or some sort of externality.

However, I recently read an argument by a Finnish finance podcaster, who argued that while the intuition might be true and the government debt system probably is not sustainable and is going to have some kind of mess-up in the long term, not participating may put your country at a disadvantage compared to countries that take the "free" money and invest it, and thus have more assets when it all falls down.

Comment by Aaro Salosensaari (aa-m-sa) on Chinese History · 2021-05-12T23:09:36.009Z · LW · GW

I realize this is a 3mo old comment.

>Nor does China entangle religion with politics to the same extent you find in the Christian and Islamic worlds. This makes it easier to think about conflicts. I feel it produces a better understanding of political theory and strategy.

Does not entangle? I thought China is the only country of note around that enforces its own version of a Catholic church with Chinese characteristics (the translation used by Wikipedia is "Chinese Patriotic Catholic Church", apparently excommunicated by the pope in Rome). One can discuss how it compares to the Church of England's historical past or, more recently, the Protestant skepticism about JFK's Catholicism, but it is kind of remarkable in its own right.

(edit. Thinking about the little bit I do know about Chinese history ... Taiping Rebellion?)

Comment by Aaro Salosensaari (aa-m-sa) on Predictive Coding has been Unified with Backpropagation · 2021-04-05T21:13:27.387Z · LW · GW

Sure, but statements like

>ANNs are built out of neurons. BNNs are built out of neurons too.

are imprecise, and possibly imprecise enough to also be incorrect if it turns out that biological neurons do something important that perceptrons do not. Without making the exact arguments and presenting evidence about in what respects the perceptron model is useful, it is quite easy to bake conclusions along the lines of "this algorithm for ANNs is a good model of biology" into the assumption that "both are built out of neurons".

Comment by Aaro Salosensaari (aa-m-sa) on Technological stagnation: Why I came around · 2021-01-26T21:22:26.193Z · LW · GW

>Home delivery is way cheaper than it used to be.

 

I am going to push back a little on this one, and ask for context and numbers? 

As some of my older relatives commented when Wolt became popular here: before people started going to supermarkets, it was common for shops to have a delivery / errand boy (this would have been the 1950s, and even more prevalent before WW2). It is one of the things that strikes you when reading biographies: teenage Harpo Marx dropped out of school and did odd jobs as an errand boy; errand boys are a ubiquitous part of the background in Anne Frank's diary; and so on.

Maybe it was proportionally more expensive (relative to cost of purchase), but on the other hand, from the descriptions it looks like the deliveries were done by teenage/young men who were paid peanuts.

Comment by Aaro Salosensaari (aa-m-sa) on Birds, Brains, Planes, and AI: Against Appeals to the Complexity/Mysteriousness/Efficiency of the Brain · 2021-01-26T18:38:12.575Z · LW · GW

Thanks for writing this, the power-to-weight statistics are quite interesting. I have another, longer reply with my own take (edit. comments about the graph, that is) in the works, but while writing it, I started to wonder about a tangential question:

>I am saying that many common anti-short-timelines arguments are bogus. They need to do much more than just appeal to the complexity/mysteriousness/efficiency of the brain; they need to argue that some property X is both necessary for TAI and not about to be figured out for AI anytime soon, not even after the HBHL milestone is passed by several orders of magnitude.

I am not super familiar with the state of the discussion and literature nowadays, but I was wondering: what are these anti-short-timelines arguments that appeal to general complexity/mysteriousness, and how common are they? Are they common in popular discourse, or common among people considered worth taking seriously?

Data efficiency, for example, is already a much more specific feature than a handwave-y "the human brain is so complex", and thus, as you demonstrate, it becomes much easier to write a convincing argument from data efficiency than from mysterious complexity.

Comment by Aaro Salosensaari (aa-m-sa) on Aaro Salosensaari's Shortform · 2021-01-12T07:08:34.632Z · LW · GW

Eventually, yes, it is related to arguments concerning people. But I was curious about what aesthetics remain after I try to abstract away the messy details. 

Comment by Aaro Salosensaari (aa-m-sa) on Aaro Salosensaari's Shortform · 2021-01-12T07:05:34.821Z · LW · GW

>Is this a closed environment, that supports 100000 cell-generations?

Good question! No. I was envisioning it as a system where a constant population of 100 000 would be viable (an RA pipettes in a constant amount of nutritional fluid every day, or something). Now that you ask the question, it might make more sense to investigate this assumption more.

Comment by Aaro Salosensaari (aa-m-sa) on Aaro Salosensaari's Shortform · 2021-01-10T11:44:23.710Z · LW · GW

I have a small intuition pump I am working on, and thought maybe others would find it interesting.

Consider a habitat (say, a Petri dish) that at any given moment has a maximum carrying capacity of 100 000 units of life (say, cells), and two alternative scenarios.

Scenario A. Initial population of 2 cells grows exponentially, one cell dying but producing two descendants each generation. After the 16th generation, the habitat overflows, and all cells die in overpopulation. The population experienced a total of 262 142 units of flourishing.

Scenario B. A more or less stable population of x cells (x << 100 000 units, say approximately 20) continues for n generations, for a total of x * n units of flourishing, until the habitat meets its natural demise after n generations.
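For concreteness, the arithmetic behind those numbers (my own recomputation in a few lines):

```python
# Scenario A: start from 2 cells, each generation every cell is replaced by two descendants,
# until the population exceeds the dish's carrying capacity of 100 000 and everything dies.
cap, pop, total, gens = 100_000, 2, 0, 0
while True:
    total += pop              # this generation's units of flourishing
    if pop > cap:             # the habitat overflows during this generation
        break
    pop *= 2
    gens += 1
print(gens, total)            # 16 doublings, 262 142 units of flourishing in total

# Scenario B: a stable population of x = 20 cells for n generations gives x * n units.
x = 20
for n in (100, 100_000):
    print(n, x * n)           # 2 000 and 2 000 000 units respectively
```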

For some reason or other, I find scenario B much more appealing even for relatively small values of n. For example, while n = 100 000 (2 000 000 units of total flourishing) would be obviously better for a utilitarian who cares about the total sum of flourishing units (utilitons), I personally find even the meager n = 100 (x*n = 2 000) sounding better than A.

Maybe this is just because of me assuming that because n=100 is possible, also larger n sounds possible. Or maybe I am utiliton-blind and just think 100 > 17. Or maybe something else.

Background. In a recent discussion with $people, I tried to argue why I find the long-term existence of a limited human population much more important than the mere potential size of total experienced human flourishing or something more abstract. I have not tried to "figure in" more details, but some things I have thought about adding in are various probabilistic scenarios / uncertainty about the total carrying capacity. No, I have not read (/don't remember reading) previous relevant LW posts; if you can think of something useful / relevant, please link it!

Comment by Aaro Salosensaari (aa-m-sa) on How long does it take to become Gaussian? · 2020-12-10T10:46:57.119Z · LW · GW

I agree the non-IID result is quite surprising. Careful reading of Berry-Esseen gives some insight into the limit behavior. In the IID case, the approximation error is bounded by a constant / $\sqrt{n}$ (where the constant is proportional to the third absolute moment / $\sigma^3$).

The non-IID generalization for n distinct distributions has the bound more or less as the sum of third moments divided by (the sum of the sigma^2)^(3/2), which is surprisingly similar to the IID special case. My reading of it suggests that if the sigmas / third moments of all n distributions are bounded below / above by some sigma / rho (which of course happens when you pick any finite number of distributions by hand), the error again diminishes at rate $1/\sqrt{n}$ if you squint your eyes.
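For reference, the bounds I am paraphrasing, as I understand them ($F_n$ is the CDF of the standardized sum, $\Phi$ the standard normal CDF, $\rho_i = E|X_i|^3$, and $C$, $C_0$ are absolute constants):

$$\sup_x \bigl| F_n(x) - \Phi(x) \bigr| \le \frac{C\,\rho}{\sigma^3 \sqrt{n}} \quad \text{(IID case)}, \qquad \sup_x \bigl| F_n(x) - \Phi(x) \bigr| \le C_0 \, \frac{\sum_{i=1}^n \rho_i}{\bigl(\sum_{i=1}^n \sigma_i^2\bigr)^{3/2}} \quad \text{(independent, non-identical case)}.$$

Plugging identical distributions into the second bound recovers the first up to the constant, which is the similarity noted above.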

So, I would guess that for a series of non-IID distributions to sum into a Gaussian as poorly as possible (while Berry-Esseen still applies), one would have to pick a series of distributions with wildly small variances and wildly large third moments...? And getting rid of the assumptions of the CLT / its generalizations means the theorem no longer applies at all.

Comment by Aaro Salosensaari (aa-m-sa) on Reason isn't magic · 2020-12-10T08:25:02.895Z · LW · GW

>It gets worse. This isn't a randomly selected example - it's specifically selected as a case where reason would have a hard time noticing when and how it's making things worse.

Well, the history of bringing manioc to Africa is not the only example. The scientific understanding of human nutrition (alongside disease) had several similar hiccups along the way, several of which have been covered in SSC (I can't remember which posts):

The Japanese army lost many lives to beriberi during the Russo-Japanese war, thinking it was a transmissible disease, several decades [1] after one of the first prominent young Japanese scholars with Western medical training had discovered, with a classic trial setup in the Japanese navy, that it was a nutrition-related deficiency (however, he attributed it -- wrongly -- to a deficiency of nitrogen). It took several decades to identify vitamin B1. [2]

Earlier, there was a time when scurvy was a problem in navies, including the British one, but then the British navy (or rather, the East India Company) realized citrus fruits were useful for preventing scurvy, in 1617 [3]. Unfortunately it didn't catch on. Then they discovered it again with an actual trial and published the results, in the 1740-50s [4]. Unfortunately it again didn't catch on, and the underlying theory was as wrong as the others anyway. Finally, against the scientific consensus at the time, the usefulness of citrus was proven by a Navy rear admiral in 1795 [5]. Unfortunately they still did not have a proper theory of why citrus was supposed to work, so when the Navy switched to lime juice with minimal vitamin C content [6], they managed to reason themselves out of the use of citrus, and scurvy was attributed to food gone bad [7]. Thus Scott's Antarctic expedition was ill-equipped to prevent scurvy, and soldiers at Gallipoli in 1915 also suffered from it.

The story of discovering vitamin D does not involve failings quite as dramatic, but prior to the discovery of UV treatment and of vitamin D itself, John Snow suggested the cause of rickets was adulterated food [8]. Of course, even today one can easily find internet debates about the "correct" amount of vitamin D supplementation when there is no sunlight in winter. Solving B12-deficiency anemia appears a true triumph of science, as a Nobel prize was awarded for the dietary recommendation of including liver in the diet [9] before B12 (present in liver) was identified [10].

Some may notice that we have now covered many of the significant vitamins in the human diet. I have not even started on the story of Semmelweis.

And anyway, I dislike the whole premise of casting the matter as "being for reason" or "against reason". The issue with manioc, scurvy, beriberi, and hygiene was that people had an unfortunate overconfidence in their pre-existing model of reality. With sufficient overconfidence, rationalization or mere "rational speculation", they could explain how seemingly contradictory experimental results actually fit their model, and thus dismiss the nutrition-based explanations as unscientific hogwash, until the actual workings of vitamins were discovered. (The article [1] is very instructive about the rationalizations the Japanese army could come up with to dismiss the Navy's apparent success in fighting beriberi: ships were easier to keep clean, beriberi was correlated with time spent in contact with damp ground, etc.)

While looking up food-borne diseases for this comment, I was reminded of BSE [11], which is hypothesized to cause vCJD in humans because humans thought it was a good idea to feed dead animals to cattle to improve nutrition (which I suppose it does, barring prion disease). I would view this as a failure of not having a full model of what side effects the behavior suggested by the partial model would cause.

On the positive side, sometimes the partial model works well enough: it appears that the miasma theory of diseases like cholera was the principal motivator for building modern sewage systems. While it is obvious today that cholera is not caused by miasma, getting rid of smelly sewage in an orderly fashion turned out to be a good idea nevertheless [12].

I am uncertain whether I have any proper conclusion to suggest, except that, in general, mistakes of reason are possible and possibly fatal, and social dynamics may prevent proper corrective action for a long time. This is important to keep in mind when making decisions, especially novel and unprecedented ones, and when evaluating the consequences of actions. (The consensus does not necessarily budge easily.)

Maybe a more specific conclusion could be: if one has only an evidently partial scientific understanding of some issue, it is very possible that acting on it will have unintended consequences. It may not even be obvious where the holes in the scientific understanding are. (Paraphrasing the response to Semmelweis: "We don't exactly know what causes childbed fever; it manifests in many different organs, so it could be several different diseases, but the idea of invisible corpse particles that defy water and soap is simply laughable.")

 

[1] https://pubmed.ncbi.nlm.nih.gov/16673750/

[2] https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3725862/

[3] https://en.wikipedia.org/wiki/John_Woodall

[4] https://en.wikipedia.org/wiki/James_Lind

[5] https://en.wikipedia.org/wiki/Alan_Gardner,_1st_Baron_Gardner 

[6] https://en.wikipedia.org/wiki/Scurvy#19th_century 

[7] https://idlewords.com/2010/03/scott_and_scurvy.htm 

[8] https://en.wikipedia.org/wiki/Rickets#History 

[9] https://www.nobelprize.org/prizes/medicine/1934/whipple/facts/

[10] https://en.wikipedia.org/wiki/Vitamin_B12#Descriptions_of_deficiency_effects 

[11] https://en.wikipedia.org/wiki/Bovine_spongiform_encephalopathy 

[12] https://en.wikipedia.org/wiki/Joseph_Bazalgette 

Comment by Aaro Salosensaari (aa-m-sa) on Developmental Stages of GPTs · 2020-07-28T23:32:12.516Z · LW · GW

(Reply to gwern's comment but not only addressing gwern.)

Concerning the planning question:

I agree that next-token prediction is consistent with some sort of implicit planning of multiple tokens ahead. I would phrase it a bit differently. Also, "implicit" is doing a lot of work here.

(Please someone correct me if I say something obviously wrong or silly; I do not know how GPT-3 works, but I will try to say something about how it works after reading some sources [1].)

>The bigger point about planning, though, is that the GPTs are getting feedback on one word at a time in isolation. It's hard for them to learn not to paint themselves into a corner.

To recap what I have thus far gotten from [1]: GPT-3-like transformers are trained by a regimen where the loss function evaluates the prediction error of the next word in the sequence given the previous words. However, I am less sure one can say they do it in isolation. During training (by SGD, I figure?), transformer decoder layers have (i) access to the previous words in the sequence, and (ii) both the attention and feedforward parts of each transformer layer have weights (that are being trained) to compute the output predictions. Also, (iii) the GPT transformer architecture considers all words in each training sequence, left to right, masking the future. And this is done for many meaningful Common Crawl sequences, though the exact same sequences won't repeat.
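
A minimal sketch of what (i)-(iii) amount to (my own toy illustration in numpy, not anything from GPT's actual code): the training targets are simply the input sequence shifted by one position, and a causal mask ensures each position only "sees" the tokens before it.

```python
import numpy as np

# Toy illustration of the next-token training setup: targets are the sequence
# shifted by one, and the causal mask hides future positions from each position.
tokens = np.array([5, 2, 7, 7, 1])          # token ids of one training sequence
inputs, targets = tokens[:-1], tokens[1:]   # position t predicts token t+1

T = len(inputs)
causal_mask = np.tril(np.ones((T, T), dtype=bool))  # position i may attend to j <= i
print("inputs: ", inputs)
print("targets:", targets)
print(causal_mask.astype(int))
```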

So, it sounds a bit trivial that GPT's trained weights allow "implicit planning": if, given a sequence of words w_1 to w_i-1, GPT would output word w for position i, this is because a trained GPT model (loosely speaking, abstracting away many details I don't understand) "dynamically encodes" many plausible "word paths" to word w, and [w_1 ... w_i-1] is such a path; by iteration, it also encodes many word paths from w to other words w', where some words are likelier to follow w than others. The representations in the stack of attention and feedforward layers allow it to generate text much better than, e.g., a good old Markov chain. "Self-attending" to some higher-level representation that allows it to generate text in a particular prose style seems a lot like a kind of plan. And GPT generating text that is then fed back to it as input, to which it again can selectively "attend", seems like a kind of working memory, which will trigger the self-attention mechanism to take certain paths, and so on.
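
To make the "working memory" picture concrete, here is a sketch of the autoregressive generation loop (the toy_next_token_distribution function below is a made-up stand-in for a trained model, not anything GPT-specific):

```python
import numpy as np

rng = np.random.default_rng(0)
vocab_size = 50

def toy_next_token_distribution(context):
    # Stand-in for a trained model. A real model would map the whole visible
    # context to a distribution over the next token; this toy just returns
    # random probabilities so that the loop structure is runnable.
    logits = rng.normal(size=vocab_size)
    probs = np.exp(logits)
    return probs / probs.sum()

sequence = [3, 14, 15]                    # the seed prompt, as token ids
for _ in range(10):
    probs = toy_next_token_distribution(sequence)  # model re-reads everything so far
    next_token = int(rng.choice(vocab_size, p=probs))
    sequence.append(next_token)           # the generated token becomes part of the input
print(sequence)
```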

I also want to highlight oceainthemiddleofanisland's comment in another thread: breaking complicated generation tasks into smaller chunks (getting GPT to output intermediate text from the initial input, which is then given back to GPT to reprocess, finally enabling it to produce the desired output) sounds quite compatible with this view.

(On this note, I am not sure what to think of the role of the human in the loop here, or in general, of how it apparently requires non-trivial work to find a "working" prompt that seeds GPT to produce the desired results for some particularly difficult tasks. Is it that there are useful, rich world models "in there somewhere" in GPT's weights, but it is difficult to activate them? And are these difficulties because humans are bad at prompting GPT to generate text that accesses the good models, or because GPT's overall model is not always so impressive and easily ends up building answers on gibberish models instead of the good ones, or maybe because GPT has a bad internal model of the humans attempting to use it? Gwern's example concerning bear attacks was interesting here.)

This would be "implicit planning". Is it "planning" enough? In any case, the discussion would be easier if we had a clearer definition of what would constitute planning and what would not.

Finally, a specific response to gwern's comment.

>During each forward pass, GPT-3 probably has plenty of slack computation going on as tokens will differ widely in their difficulty while GPT-3's feedforward remains a fixed-size computation; just as GPT-3 is always asking itself what sort of writer wrote the current text, so it can better imitate the language, style, format, structure, knowledge limitations or preferences* and even typos, it can ask what the human author is planning, the better to predict the next token. That it may be operating on its own past completions and there is no actual human author is irrelevant - because pretending really well to be an author who is planning equals being an author who is planning! (Watching how far GPT-3 can push this 'as if' imitation process is why I've begun thinking about mesa-optimizers and what 'sufficiently advanced imitation' may mean in terms of malevolent sub-agents created by the meta-learning outer agent.)

Language about how GPT-3 is "pretending" or "asking itself what a human author would do" can maybe be justified as metaphor, but I think it is a bit fuzzy and may obscure the differences between what transformers do when we say they "plan" or "pretend" and what people would assume of beings who "plan" or "pretend". For example, a word like "pretend" easily carries the implication that there is some true, hidden, "unpretended" thinking or personality going on underneath. This appears quite unlikely given a fixed model and a generation mechanism that starts anew from each seed prompt. I would rather say that GPT has a model (is a model?) that is surprisingly good at natural language extrapolation, and also that it is surprising what can be achieved by extrapolation.


[1] http://jalammar.github.io/illustrated-gpt2/ , http://peterbloem.nl/blog/transformers and https://amaarora.github.io/2020/02/18/annotatedGPT2.html in addition to skimming original OpenAI papers

Comment by Aaro Salosensaari (aa-m-sa) on Developmental Stages of GPTs · 2020-07-28T09:13:05.225Z · LW · GW

I contend it is not an *implementation* in a meaningful sense of the word. It is more a prose elaboration / expansion of the first generated bullet-point list (an inaccurate one: the "plan" mentions chopping vegetables, putting them in a fridge and cooking meat; the prose version tells of chopping a set of vegetables, skips the fridge, cooks beef, and then tells an irrelevant story where you go to sleep early and find it is a Sunday with no school).

Mind, substituting abstract category words with sensible, more specific ones (vegetables -> carrots, onions and potatoes) is an impressive NLP task for an architecture where the behavior is not hard-coded (which is how some previous natural language generators worked), and it is even more impressive that it can produce the said expansion from an NLP input prompt, but it is hardly a useful implementation of a plan.

An improved experiment in "implementing plans" that could be within the capabilities of GPT-3 or a similar system: get GPT-3 to first output a plan for doing $a_thing, and then the correct keystroke sequence input for UnReal World, Dwarf Fortress, the Sims, or some other similar simulated environment to carry it out.

Comment by Aaro Salosensaari (aa-m-sa) on Self-sacrifice is a scarce resource · 2020-07-27T17:45:57.652Z · LW · GW

At the risk of stating very much the very obvious:

The trolley problem (or the fat man variant) is the wrong metaphor for nearly any ethical decision anyway, as there are very few real-life ethical dilemmas that are as visceral, that require immediate action from such a limited set of options, and whose consequences are nevertheless as clear.

Here are a couple of somewhat more realistic matters of life and death. There are many stories (I could probably find factual accounts, but I am too lazy to search for sources) of soldiers who make the snap decision to save the lives of the rest of their squad by jumping on a thrown hand grenade. Yet I doubt many would cast much blame on anyone who had a chance to take cover and did that instead. (I wouldn't.) Moreover, the generals who order prisoners (or agitate impressionable recruits) to clear a minefield without proper training or equipment are much to be frowned upon. And of course, there are untold possibilities for committing a dumb self-sacrifice that achieves nothing.

In general, a military force cannot be very effective without people willing to put themselves in danger: if one finds oneself in agreement with the existence of states and armies, some amount of self-sacrifice follows naturally. For this reason, there are acts of valor that are viewed positively and to be cultivated. Yet there are also common Western moral sentiments which hold that it is questionable or outright wrong to demand the unreasonable of other people, especially if the beneficiaries or the people doing the demanding are contributing relatively little themselves (a sentiment demonstrated here by Blackadder Goes Forth). And in some cases drawing a judgement is generally considered difficult.

(What should one make of the Charge of the Light Brigade? I am not a military historian, but going by the popular account, the order to charge was stupid, negligent, a mistake, or all three. Yet to some people there is something inspirational in the foolishness of soldiers carrying out the order; others would see such views as abhorrent legend-building propaganda that devalues human life.)

In summary, I do not have many concrete conclusions to offer, and anyway, details from one context (here, the military) do not necessarily translate very well into other aspects of life. In some situations, (some amount of) self-sacrifice may be a good option, maybe even the best or only option for obtaining some outcomes, and it can be a good thing to have around. On the other hand, in many situations it is wrong or contentious to require large sacrifices from others, and people who do so (including via extreme persuasion leading to voluntary self-sacrifice) are condemned as taking unjust advantage of others. Much depends on the framing.

As the reader may notice, I am not arguing from any particular systematic theory of ethics, but rehashing my moral intuitions about what is considered acceptable in the West, assuming there is some ethical signal in there.

Comment by Aaro Salosensaari (aa-m-sa) on Maths writer/cowritter needed: how you can't distinguish early exponential from early sigmoid · 2020-05-06T17:47:17.776Z · LW · GW

"Non-identifiability", by the way, is the search term that does the trick and finds something useful. Please see: Daly et al. [1], section 3. They study indentifiability characteristics of logistic sigmoid (that has rate r and goes from zero to carrying capacity K at t=0..30) via Fisher information matrix (FIM). Quote:

>When measurements are taken at times t ≤ 10, the singular vector (which is also the eigenvector corresponding to the single non-zero eigenvalue of the FIM) is oriented in the direction of the growth rate r in parameter space. For t ≤ 10, the system is therefore sensitive to changes in the growth rate r, but largely insensitive to changes in the carrying capacity K. Conversely, for measurements taken at times t ≥ 20, the singular vector of the sensitivity matrix is oriented in the direction of the growth rate K[sic], and the system is sensitive to changes in the carrying capacity K but largely insensitive to changes in the growth rate r. Both these conclusions are physically intuitive.

Then Daly et al. proceed with an MCMC scheme to show numerically that samples from different parts of the time domain result in different identifiability of the rate and carrying-capacity parameters (Figure 3).

[1] Daly, Aidan C., David Gavaghan, Jonathan Cooper, and Simon Tavener. “Inference-Based Assessment of Parameter Identifiability in Nonlinear Biological Models.” Journal of The Royal Society Interface 15, no. 144 (July 31, 2018): 20180318. https://doi.org/10.1098/rsif.2018.0318
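
A rough numerical version of the same argument (with toy parameter values of my own, r = 0.7, K = 100, N0 = 1, not the ones used in the paper): compute the sensitivity matrix by finite differences over an early and a late measurement window and look at its dominant singular direction.

```python
import numpy as np

def logistic(t, r, K, N0=1.0):
    # Logistic growth curve starting near zero (N0) and saturating at K.
    return K / (1.0 + (K / N0 - 1.0) * np.exp(-r * t))

def sensitivity_matrix(times, r=0.7, K=100.0, eps=1e-6):
    # Columns: numerical partial derivatives of the model output w.r.t. (r, K).
    base = logistic(times, r, K)
    d_r = (logistic(times, r + eps, K) - base) / eps
    d_K = (logistic(times, r, K + eps) - base) / eps
    return np.column_stack([d_r, d_K])

for label, times in [("early, t <= 10", np.linspace(0, 10, 11)),
                     ("late,  t >= 20", np.linspace(20, 30, 11))]:
    J = sensitivity_matrix(times)
    _, _, Vt = np.linalg.svd(J, full_matrices=False)
    # The dominant right singular vector is the parameter direction the data are
    # most informative about (its sign is arbitrary, so print absolute values).
    print(label, "dominant (r, K) direction:", np.round(np.abs(Vt[0]), 3))
```

With these toy values the early window's dominant direction lies essentially along the r axis and the late window's essentially along the K axis, matching the quoted claim.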

EDIT.

To clarify, because someone might miss it: this is not only a reply to shminux. Daly et al. 2018 is (to some extent) the paper Stuart and others are looking for, at least if you are satisfied with their approach of looking at what happens to the effective Fisher information of the logistic dynamics before and after the inflection point, supported by numerical inference methods showing that identification is difficult. (Their reference list also contains a couple of interesting articles about optimal design for logistic, harmonic, etc. models.)

The only things missing that one might want, AFAIK, are a general analytical quantification of the amount of uncertainty, a comparison specifically against the exponential (maybe along the lines Adam wrote there), and maybe a write-up in an easy-to-digest format.

Comment by Aaro Salosensaari (aa-m-sa) on Maths writer/cowritter needed: how you can't distinguish early exponential from early sigmoid · 2020-05-06T17:40:18.061Z · LW · GW

I was momentarily confused about what k is (it sometimes denotes the carrying capacity in the logistic population growth model), but apparently it is the step size (in the numerical integrator)?

I do not have enough expertise here to speak like an expert, but it seems that stiffness would be related only in a roundabout way. It seems to describe the difficulties some numerical integrators have with systems like this: the integrator can veer far off the true logistic curve if the steps are not sufficiently small, because the derivative changes fast.

The phenomenon here seems to be more about non-sensitivity than sensitivity of the solution to the parameters (or, to be precise, non-identifiability of the parameters): the part of the solution before the inflection point changes very little in response to changes in the "carrying capacity" (curve maximum) parameter.

Comment by Aaro Salosensaari (aa-m-sa) on Maths writer/cowritter needed: how you can't distinguish early exponential from early sigmoid · 2020-05-06T13:04:23.258Z · LW · GW

I was going to suggest that this might be a known, published result in the dynamical systems / population dynamics literature, but I am unable to find anything with Google, and the textbooks I have at hand, while they mention logistic growth models plenty, do not discuss prediction from partial data before the inflection point.

On the other hand, it is fundamentally a variation on the themes of the difficulty of model selection with partial data and the dangers of extrapolation, which are common in many numerical textbooks.

If anyone wishes to flesh it out, I believe this behavior is not limited to distinguishing exponentials from logistic curves (or different logistics from each other), but extends to distinguishing different orders of growth from each other in general. With a judicious choice of data range and constants, it is not difficult to create a set of noisy points which could come either from a particular exponential or from a particular quadratic curve. Quick example: https://raw.githubusercontent.com/aa-m-sa/exponential_weirdness/master/exp_vs_x2.png (And if you limit the data range you are looking at to 0..2, it is quite impossible to say whether a linear model wouldn't also be plausible.)
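
A quick way to generate that kind of ambiguous data (the constants below are arbitrary choices of mine; this is a sketch in the same spirit as the linked plot, not the exact script behind it):

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0, 2, 15)
y_obs = np.exp(x) + rng.normal(scale=0.3, size=x.size)  # noisy exponential data

# Fit a quadratic to the same points by least squares.
quad_coeffs = np.polyfit(x, y_obs, deg=2)
y_quad = np.polyval(quad_coeffs, x)

# Both candidate curves leave residuals of about the same size as the noise,
# so on this range the data alone cannot tell them apart.
print("rms residual vs the true exponential:", np.sqrt(np.mean((y_obs - np.exp(x)) ** 2)))
print("rms residual vs the fitted quadratic:", np.sqrt(np.mean((y_obs - y_quad) ** 2)))
```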

Comment by Aaro Salosensaari (aa-m-sa) on Against strong bayesianism · 2020-05-04T23:29:09.257Z · LW · GW

I am happy that you mention Gelman's book (I am studying it right now). I think lots of "naive strong Bayesianists" would benefit from a thoughtful study of the BDA book (there are lots of worked-out demos and exercises available for it) and maybe some practical application of Bayesian modelling to real-world statistical problems. The "Bayesian way of life" practice of "updating my priors" always sounds a bit too easy in contrast to doing genuine statistical inference.

For example, a couple of puzzles I am still unsure how to answer properly and with full confidence: Why would one be interested in doing stratified random sampling in an epidemiological study instead of the naive "collect every data point that you see and then do a Bayesian update"? How do multiple-comparisons corrections for classical frequentist p-values map onto the Bayesian statistical framework? Does it matter for LWian Bayesianism whether you do your practical statistical analyses with frequentist or Bayesian tools (especially since many frequentist methods can be seen as clever approximations to a full Bayesian model; see e.g. the discussion of Kneser-Ney smoothing as ad hoc Pitman-Yor process inference here: https://cs.stanford.edu/~jsteinhardt/stats-essay.pdf ; a similar relationship exists between k-means and the EM algorithm for a Gaussian mixture model)? And if there is no difference, is philosophical Bayesianism then actually that important -- or important at all -- for rationality?

Comment by Aaro Salosensaari (aa-m-sa) on Open & Welcome Thread—May 2020 · 2020-05-04T17:12:02.647Z · LW · GW

Howdy. I came across Ole Peters' "ergodicity economics" some time ago, and was interested to see what LW made of it. Apparently there is one set of skeptical journal-club meetup notes: https://www.lesswrong.com/posts/gptXmhJxFiEwuPN98/meetup-notes-ole-peters-on-ergodicity

I am not sure what to make of the Seattle meetup's criticisms (they appear correct, but I am not sure if they are relevant; see my comment there).

I am not planning to write a proper post, but here is an example blog post of Peters' which I found illustrative and which demonstrates why I think the "ergodicity way of thinking" might have something in it: https://ergodicityeconomics.com/2020/02/26/democratic-domestic-product/ . In summary, looking at an aggregate ensemble quantity such as GDP per capita does not tell you much about what happens to the individuals in the ensemble: the growth experienced by a typical individual in the population is, in general, not related to GDP growth per capita (which may be obvious to a numerate person, but not necessarily so, given the importance given to GDP in public discussion). And if one averages the exponential growth rates instead, one obtains a measure (the geometric mean income, which they dub "DDP") known in the economics literature, but originally derived otherwise.
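
A toy version of that point (the numbers here are made up by me, not taken from the post): concentrate all growth in the top 1% of a log-normal income distribution and compare the change in the arithmetic mean (GDP per capita) with the change in the geometric mean (the DDP-like quantity).

```python
import numpy as np

rng = np.random.default_rng(0)
incomes = rng.lognormal(mean=10.0, sigma=0.5, size=100_000)

grown = incomes.copy()
top = np.argsort(grown)[-1000:]   # the top 1% of earners
grown[top] *= 3.0                 # all of the growth goes to them

gdp_growth = grown.mean() / incomes.mean() - 1.0
ddp_growth = np.exp(np.mean(np.log(grown)) - np.mean(np.log(incomes))) - 1.0

print("GDP-per-capita growth:      ", round(gdp_growth, 3))  # several percent
print("geometric-mean (DDP) growth:", round(ddp_growth, 3))  # about one percent
```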

But maybe this looks insightful to me only because I am not that well-versed in the economics literature, so it would be nice to have some critical discussion about this.

Comment by Aaro Salosensaari (aa-m-sa) on Meetup Notes: Ole Peters on ergodicity · 2020-05-04T16:28:34.511Z · LW · GW

Peters' December 2019 Nature Physics paper (https://www.nature.com/articles/s41567-019-0732-0 ) provides some perspective on the 0.6x/1.5x coin-flip example and other conclusions of the discussion above. (If Peters' claims have changed along the way, I wouldn't know.)

In my reading, Peters' basic claim there is not that ergodicity economics can solve the coin-flip game in a way that classical economics cannot (because it can, by switching to expected log-wealth utility instead of expected wealth), but that utility functions as originally presented are a crutch that misinforms us about people's psychological motives in making economic decisions. So, while the mathematics of many parts stays the same, the underlying phenomena can be more saliently reasoned about by looking at individual growth rates in the context of whether the associated wealth "process" is additive, multiplicative, or something else. Thus there is also less need for lingo where people may have a (weirdly innate) "risk-averse utility function" (as compared to some other, less risk-averse theoretical utility function).
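
For the coin-flip example itself, the distinction is easy to see in a quick simulation (this is just the standard game restated in code, nothing taken from the paper):

```python
import numpy as np

# The multiplicative coin-flip game: each round, wealth is multiplied by 1.5
# (heads) or 0.6 (tails) with equal probability.
rng = np.random.default_rng(0)
n_people, n_rounds = 100_000, 20

factors = rng.choice([1.5, 0.6], size=(n_people, n_rounds))
wealth = factors.prod(axis=1)   # everyone starts from wealth 1

print("expected per-round factor:    ", 0.5 * 1.5 + 0.5 * 0.6)                  # 1.05 > 1
print("time-average growth rate:     ", 0.5 * np.log(1.5) + 0.5 * np.log(0.6))  # < 0
print("ensemble mean after 20 rounds:", wealth.mean())      # close to 1.05**20, about 2.65
print("median after 20 rounds:       ", np.median(wealth))  # the typical outcome, well below 1
```

The ensemble average grows every round while the typical (median) trajectory shrinks, which is the non-ergodicity Peters emphasizes, in concrete form.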