What is estimational programming? Squiggle in context

post by Quinn (quinn-dougherty) · 2022-08-12T18:39:57.230Z · LW · GW · 7 comments

Contents

  A whiggish history of estimational programming 
  Why do you want your estimational programming product to be uncertain
  Why do you want your estimational programming product to be functional
  Estimational programming is not probabilistic programming.
  Estimational programming is not Squiggle and vice versa.
7 comments

This post does not contain the canonical QURI opinion, but I am a contractor there. I want to thank early commenters Vivian Belenky and Ozzie Gooen. The section concerning probabilistic programming is readily skippable to make things easier for nonprogrammers.

In this post, I aim to explain and describe what Squiggle [EA · GW] is. In future posts, I will clarify its EA value proposition and highlight its applications and theories of change. 

TLDR: 

  1. Estimational programming is any practice of writing quantitative belief specifications. 
  2. Estimational programming has a rich history.
  3. Squiggle is a great tool for estimational programming, but other languages can and should be great tools for estimational programming as well.

A whiggish history of estimational programming 

Belief specifications are descriptive units of belief. Estimational programming (EP) is the practice of writing belief specifications. We want EP products to be functional and uncertain. When I say an EP product is uncertain, I mean that it allows uncertain beliefs to be specified. By functional, I mean that when users write estimates in an EP product, the estimates should be reproducible and composable. By composable, I also mean that they should be transparent and interpretable. If an estimate is both reproducible and interpretable, I will call it auditable.

Technologies for writing belief specifications exist but lack the above desirable properties. Users can accomplish estimational programming in each of them to varying degrees, but Squiggle is unusual because it is a platform for which EP is the first-class intended use case. 

Python:
# Hat tip to Jonas Moss for the code
import numpy as np
import scipy.stats as st
import matplotlib.pyplot as plt
rng = np.random.default_rng(313)
n = 10000
# Translate quantiles
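# If X ~ LogNormal(mean, sigma) then log(X) ~ Normal(mean, sigma). We want the
# 5th and 95th percentiles of X at 0.75 and 0.9, i.e. mean + sigma*k1 = log(0.75)
# and mean + sigma*k2 = log(0.9); the lines below solve that 2x2 linear system.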
a = np.log(0.75)
b = np.log(0.9)
k1 = st.norm.ppf(0.05)
k2 = st.norm.ppf(0.95)
sigma = (b - a) / (k2 - k1)
mean = b - sigma * k2
transfer_efficiency = np.random.lognormal(
 mean=mean, 
 sigma=sigma, 
 size=n)
x = np.linspace(0.7, 1, 100)
plt.plot(x, st.lognorm.pdf(x, sigma, scale=np.exp(mean))) # Scipy's parameterization of the log-normal is stupid (shape s=sigma, scale=exp(mean)). Cost me another 5 minutes to figure out how to do this one.

### It's prudent to check if I've done the calculations correctly too.
np.quantile(transfer_efficiency, [0.05, 0.95]) # array([0.75052923, 0.90200089])

Squiggle:
transfer_efficiency = 0.75 to 0.95

In addition to these gains, in a general-purpose programming language any nontrivial computation forces users to do Monte Carlo longhand. The programming perspective in notebooks is very free: we have trivial/apparent Turing completeness, and we can ingest data and do intensive numeric work on it, but that level of generality imposes boilerplate costs whenever users want to write belief specs. 
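
To make the boilerplate point concrete, here is a minimal sketch of that Monte Carlo longhand, continuing the Python block above; donation_size is a hypothetical second uncertain quantity, made up purely for illustration.

# Continuing the Python block above (reusing np, rng, n, and transfer_efficiency).
donation_size = rng.normal(loc=100, scale=12, size=n)  # hypothetical belief about dollars donated
money_delivered = donation_size * transfer_efficiency  # composing two beliefs = multiplying sample arrays by hand
np.quantile(money_delivered, [0.05, 0.5, 0.95])        # ...and summarizing the composed belief by hand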

It is not enough to identify what estimational programming is; I must also identify what EP ought to be, which is uncertain and functional. Indeed, the term of art may even be “functional estimational programming” or “programming compositional estimation functions” or variants. Still, I don’t want to be clunky, and I don’t want to separate the aspirations of EP from its history. 

Why do you want your estimational programming product to be uncertain

I can mostly punt this to Superforecasting and How To Measure Anything for the details, but TLDR: you’re uncertain about everything. Belief specs aren’t much use unless they carry confidence intervals. 

Why do you want your estimational programming product to be functional

Functionality, or composability, is a desirable property for an estimational programming language.

MicroCOVID dashboards are estimates of the risk involved in doing various activities. However, they can’t export to Guesstimate sheets. It would be helpful to feed our covid risk tolerance into an estimate of how much fun we’ll have, value we’ll create, or resources we’ll consume. The reason this might not have seemed thinkable to the developers of either project is that they’re not working from a shared notion of belief spec.
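
As a hedged sketch of the composition that a shared notion of belief spec would enable (every number and name below is a hypothetical placeholder, not an output or API of microCOVID or Guesstimate):

import numpy as np
rng = np.random.default_rng(1)
n = 10000
# Hypothetical upstream belief (the microCOVID-style piece): risk of one gathering, in microCOVIDs.
risk_per_event = rng.lognormal(mean=np.log(30), sigma=0.5, size=n)
# Hypothetical downstream belief (the Guesstimate-style piece): fun per gathering, arbitrary units.
fun_per_event = rng.normal(loc=10, scale=3, size=n)
# Composition: given a monthly risk tolerance, how much fun do we expect to buy with it?
risk_tolerance = 200
fun_per_month = (risk_tolerance / risk_per_event) * fun_per_event
np.quantile(fun_per_month, [0.05, 0.5, 0.95])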

I think a lot about this article by Fabrizio Genovese, which defines compositional systems as fluid de- and re-composition under the abolition of emergent properties, contrasting them with modular systems that, like a house’s electrical wiring, might blow up in your face if you don’t understand how you’ve put them together, even if you understand the individual parts. Both are described as “breaking things apart and putting them back together,” which is what we’d like to do with forecasts and cost-effectiveness analyses. Still, the compositional framing emphasizes ease of audit: the parts you break things down into provide understanding and show you how to check somebody’s work, or your own, and the piecing back together is so simple that the whole is precisely – no more than, no less than – the sum of the parts. I hope this illustrates how I associate interpretability/auditability with compositionality. 

This aspiration of compositionality (rather than modularity or black-box stories) is a game-changing aspect of forecasting and cost-effectiveness analysis.

Estimational programming is not probabilistic programming.

This section, be warned, is more technical than the others. 

Some of you have heard of a programming language where the terms are distributions. There are two kinds of literature going by the name probabilistic programming (PP). In one of them, terms are random variables, and we ask questions like “probability of halting” or “expected length of redux chain” (see citation pdf chapters 1 & 2). In the other, the terms are distributions, and we ask questions like “what sampling process approximates the posterior of a dataset and a supplied prior” or “given a sampling process, can we extrapolate integrals and derivatives of the implied density function” (see SR, BDA). I’m pretty sure the former case involving random variables is obscure and too academic, whereas the latter case involving distributions can be used by scientists from diverse departments. This latter case is also what people ask the Squiggle team about all the time [EA(p) · GW(p)]. An academic paper properly comparing and contrasting formal properties of estimational and probabilistic programming is somewhere in the “not a priority” or “eventually” region of the QURI roadmap (nevertheless, email quinn@quantifieduncertainty.org if you have ideas about what that would look like), but I’d like to take a quick pass here. 

Probabilistic programming (PP, e.g. stan or pyro) vs. estimational programming (EP):

  • Data: PP emphasizes ingesting data; EP is not primarily about ingesting data.
  • Inference: PP is for updating your beliefs; EP is for writing down your beliefs.
  • Purposes of simulation: PP needs to work very precisely with very particular sampling processes to keep track of unique derivatives and integrals; EP only needs sampling processes for operations not defined analytically (or spoofable from properties of chart coordinates, in Squiggle’s case).
  • The term language: in PP, distributions are terms only in a sampling context (e.g., with precise state management over random seeds), direct control of which is exposed to the user (e.g., pymc3’s `with` model-context syntax); for EP to be a first-class priority of a language, distributions are still terms, but the idea of a sampling context should be global, implicit, and ambient.
  • Turing completeness: a PPL is often a domain-specific language (DSL) embedded into a general-purpose (i.e., Turing-complete) language, which can either allow or outlaw features of Turing completeness; in principle, an EPL project could be very similar in this regard, but we have not observed such a product/platform yet (Squiggle doesn’t have `while` or streams, and it’s not an embedded DSL).

Now, quickly showing contrast doesn’t on its own justify the existence of Squiggle when existing ecosystems are “nearby” in the usable space. Still, it should help explain the relative value prop of using one or the other for your particular project or use case. 
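
To make the inference-versus-specification contrast above concrete, here is a hedged sketch in plain NumPy/SciPy rather than an actual PPL; the dataset and parameters are made up for illustration, and the EP half reuses the mean and sigma computed in the Python block earlier in the post.

import numpy as np
import scipy.stats as st

# Probabilistic-programming flavor: update a belief from data.
data = np.array([0.81, 0.86, 0.79, 0.84])    # hypothetical observations
prior_mu, prior_sd, obs_sd = 0.8, 0.1, 0.05  # prior over the unknown mean; assumed-known noise
post_var = 1 / (1 / prior_sd**2 + len(data) / obs_sd**2)               # conjugate normal-normal update
post_mu = post_var * (prior_mu / prior_sd**2 + data.sum() / obs_sd**2)
posterior = st.norm(loc=post_mu, scale=np.sqrt(post_var))

# Estimational-programming flavor: just write the belief down, no dataset required.
# (In Squiggle this is the one-line `to` syntax; here we freeze a lognormal directly.)
belief = st.lognorm(s=sigma, scale=np.exp(mean))

posterior.interval(0.9)    # 90% interval after seeing data
belief.ppf([0.05, 0.95])   # the percentiles we specified up front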

Estimational programming is not Squiggle and vice versa.

Squiggle is simply the first open-source language that emphasizes EP as a first-class citizen. It is not the sole keeper of what EP is all about or where it’s going. Conversely, by putting Squiggle in this box I’ve created, I may be constraining the reader’s imagination about where Squiggle could end up. 

7 comments


comment by gjm · 2022-08-12T20:09:56.300Z · LW(p) · GW(p)

I think the transfer_efficiency example is kinda dishonest (also kinda wrong: the right-hand column uses an upper figure of 0.95 but the left-hand column uses 0.9). Squiggle apparently has special-case support for the special case of a lognormal variable where you know the lower and upper fifth-percentile values. That's nice, I guess, but it really is quite a special case.

Unless I am going to be spending all my putative time in Squiggle specifying fifth-percentile values of lognormal variables, that example is of no use to me without knowing what happens if I want to specify the tenth-percentile values instead, what happens if I want a normal distribution, etc.

I had a look in the Squiggle documentation, and it looks to me as if the "to" syntax is only for 5th-to-95th percentiles, and only for normal distributions where those quantities have different signs and lognormal distributions where they have the same sign. (Which means, e.g., that if you are calculating those quantities then you cannot safely use "to" unless you know for sure what the signs will end up being.) If you have Squiggle code that uses "to" and then you realise that actually you wanted to use the 10th and 90th percentiles, or a different distribution, or a normal distribution even though both those quantiles are positive, you need to throw away that one line and write something completely different that does the same untidy thing as those "15 lines" of Python code. Is the corresponding Squiggle code in that case actually any shorter or clearer?

Also, those "15 lines" of Python code corresponding to the "1 line" of Squiggle code include

  • three lines of imports that will happen once at the start of whatever thing you're doing and don't need repeating for each uncertain quantity you work with
  • one line initializing a variable that is never actually used in the code
  • one line double-checking that the earlier lines produced something like the right result

and if you were in fact writing something in Python that makes substantial use of distributions specified by giving a couple of quantiles, you would write a function containing the boilerplate once and never have to worry about the fiddly details again.
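
(For concreteness, here is a minimal sketch of the kind of helper described above, using SciPy; the function name and implementation are illustrative, not an existing API.)

import numpy as np
import scipy.stats as st

def lognormal_from_quantiles(p_lo, x_lo, p_hi, x_hi):
    """Frozen scipy lognormal whose p_lo and p_hi quantiles are x_lo and x_hi (both positive)."""
    z_lo, z_hi = st.norm.ppf([p_lo, p_hi])
    sigma = (np.log(x_hi) - np.log(x_lo)) / (z_hi - z_lo)
    mu = np.log(x_hi) - sigma * z_hi
    return st.lognorm(s=sigma, scale=np.exp(mu))

transfer_efficiency = lognormal_from_quantiles(0.05, 0.75, 0.95, 0.9)
# ...and the 10th/90th-percentile variant is just another call:
alternative = lognormal_from_quantiles(0.10, 0.75, 0.90, 0.9)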

(On the other hand, if you're only doing it once then indeed the boilerplate is a substantial fraction of your code. But if you're only doing it once then I bet the broader advantages of a general-purpose programming language will outweigh the local advantages of a domain-specific language specializing in doing neatly something you only need to do once.)

For the avoidance of doubt, this is not intended to imply that Squiggle is bad. Just that that example seems super-unfair.

[EDITED to add:] I'm also really confused, because the actual Squiggle notebook linked to from the discussion this example supposedly comes from (1) doesn't in fact define transfer_efficiency this way, (2) does consider at least one lognormally distributed quantity determined from estimates of its quantiles, namely consumption_prior, and (3) handles that quantity ... by giving explicit numerical values for the mean and stddev of the underlying normal distribution. There is a quantity in the spreadsheet called transfer_efficiency, it is beta-distributed not lognormal, and its distribution is also specified by giving magic numbers for the distribution parameters rather than saying where two of its quantiles are.

Replies from: alexlyzhov, quinn-dougherty
comment by alexlyzhov · 2022-08-14T06:29:01.082Z · LW(p) · GW(p)

I'd be really interested in a head-to-head comparison with R on a bunch of real-world examples of writing down beliefs that were not selected to favor either R or Squiggle. R because, at least in part, specifying and manipulating distributions there seems to require less boilerplate than in Python.

comment by Quinn (quinn-dougherty) · 2022-08-13T00:50:04.259Z · LW(p) · GW(p)

Thanks!

  • 0.75 to 0.95 vs. 0.75 to 0.9 is strictly my transcription bug, not being careful enough.
  • In general I wasn't auditing the code from the Jonas Moss comment; I just stepped through looking at the functionality. I should've been more careful if I was going to make a claim about the conversion factor.
  • You're kinda right about the question "if it's a constant number of lines written exactly once, does it really count as boilerplate?" I can see how it feels a little dishonest of me to imply that the ratio is really 15:1. The example I was thinking of was the Biological Anchors Report ("Ajeya's Timelines"); those notebooks have lots of LOC in hidden cells, but the relative cost of those goes down as the length of the report goes up. All that considered, I could be updated to the idea that the boilerplate point is moot for power users (who are probably able and willing to provide that boilerplate once per file), but I would still be excited about what is opened up for more casual users.
  • You're right that (or your comment is suggesting to me indirectly that) Squiggle, having not yet provided a way to give non-default quantiles with the `to` syntax, hasn't done anything to show that it'd really beat hand-crafted Python functions at this.
  • Re the underlying Squiggle notebook concerning GiveDirectly and so on, I've flagged your comment to Sam (it's something else I haven't taken a close look at).
Replies from: sam-nolan, Mo Nastri
comment by Sam Nolan (sam-nolan) · 2022-08-13T09:33:36.984Z · LW(p) · GW(p)

Thanks for the flag!
I might not be understanding correctly, but I don't think there's a problem here with the actual underlying code, just my explanation of it (we all hate magic numbers). Which is fair enough; the notebook is much too dense for my liking. It's still a work in progress!

I agree! The Squiggle team is looking to support specifying distributions with different quantiles; I've needed that on several occasions. You can check out the discussion on GitHub here. It's on my todo list.

comment by Mo Putera (Mo Nastri) · 2022-08-13T03:01:10.584Z · LW(p) · GW(p)

Just letting you know that you seem to have double-pasted the 3rd bullet point.

Replies from: quinn-dougherty
comment by Quinn (quinn-dougherty) · 2022-08-13T03:39:40.957Z · LW(p) · GW(p)

oof, good catch, fixed.

comment by Quinn (quinn-dougherty) · 2023-11-26T20:19:06.913Z · LW(p) · GW(p)

the author re-reading one year+ out:

  • My views on the value of infographics that actually look nice have changed; perhaps I should have had nicer-looking figures.
  • "Unlike spreadsheets, users can describe in notebooks distributions, capturing uncertainty in their beliefs." seems overconfident. Monte Carlo has been an Excel feature for years. My (implicit) dismissal of this makes sense as a thing to say because "which usecases are easy?" is a more important question than "which usecases can you get away with if you squint?", but I could've done a way better job at actually reading and critiquing Excel programs that feature Monte Carlo.
  • Viv told me to fix this and I ignored them because I liked the aesthetics of the sentence at the time: "which defines compositional systems as fluid de- and re-composition under the abolition of emergent properties" -- Viv was right. I've changed my views on the importance of writing being boring / non-idiosyncratic; also, even my prose aesthetic preferences change over time (I no longer enjoy this sentence).
  • "(this use case isn't totally unlike what Ergo accomplishes)" I keep thinking about Ergo; perhaps a paragraph in the "whiggish history" section would've been appropriate, since API access to some zoo of scoring rules / crowd wisdom is a pretty obvious feature that many platforms would foreseeably prioritize.
  • I underrated how strong the claims about compositionality (being precisely the sum of parts) were, since statistical noise is such a fundamental piece of the puzzle.

I haven't been working on this stuff except a little on the side for most of the last year, but still get excited here and there. I returned to this post because I might have another post about module systems in software design, package management, and estimational programming written up in the not too distant future.

Overall, this remains a super underrated area.