Posts

Sufficiently Advanced Language Models Can Do Reinforcement Learning 2020-08-02T15:32:47.894Z · score: 22 (14 votes)
Structured Tasks for Language Models 2020-07-29T14:17:59.478Z · score: 5 (2 votes)
You Can Probably Amplify GPT3 Directly 2020-07-26T21:58:53.962Z · score: 35 (15 votes)
An Old Way to Visualize Biases 2020-07-24T00:10:17.970Z · score: 4 (5 votes)
Idea: Imitation/Value Learning AIXI 2020-07-03T17:10:16.775Z · score: 3 (1 votes)
Replication Dynamics Bridge to RL in Thermodynamic Limit 2020-05-18T01:02:53.417Z · score: 6 (3 votes)
Zachary Robertson's Shortform 2020-05-06T00:42:10.113Z · score: 2 (1 votes)
What Resources on Journal Analysis are Available? 2019-12-28T20:00:11.512Z · score: 13 (4 votes)
The Planning Problem 2019-08-04T18:58:55.186Z · score: 16 (8 votes)
Is there a user's manual to using the internet more efficiently? 2019-08-04T18:51:38.818Z · score: 19 (9 votes)

Comments

Comment by zachary-robertson on is gpt-3 few-shot ready for real applications? · 2020-08-03T22:45:36.587Z · score: 1 (1 votes) · LW · GW

So storage no longer scales badly with the number of operations you define. However, latency still does, and latency per call is now much larger, so this might end up being as much of a constraint. The exact numbers – not well understood at this time – are crucial: in real life the difference between 0.001 seconds, 0.1 seconds, 1 second, and 10 seconds will make or break your project.

This does seem to be a big issue for practical applications. Yet, I'm very much of the opinion that the API is more about exploring areas where fine-tuning would be useful. As you note, OpenAI does both. I'd assume a common use pattern will end up being something like: use few-shot, release, collect data, fine-tune, rinse-repeat.

(-3) Unlike supervised learning, there’s no built-in mechanism where you continually improve as your application passively gathers data during usage.

I think it's worth remembering that OpenAI is getting incredibly valuable data right now from the API. Adding more data about how people interact with the model seems completely doable with the centralization setup OpenAI has.

Comment by zachary-robertson on Sufficiently Advanced Language Models Can Do Reinforcement Learning · 2020-08-03T13:32:47.244Z · score: 5 (2 votes) · LW · GW

This paper looks interesting. My understanding is that it implemented a form of fine-tuning. However, learning the reward function does not seem to be few-shot, whereas GPT3 does few-shot pretty well. That's the main difference here as I see it.

It seems like there's slow adaptation (this paper), which is useful for more complicated tasks, and fast adaptation (the method here), which is useful for disposable tasks. I'd think a combination of both approaches is needed. For example, a module that tracks repeatedly occurring tasks could start a larger buffer to perform slow adaptation.

Perhaps, on a meta-level, fine-tuning GPT3 to do few-shot inverse reinforcement learning would be an example of what could be possible by combining both approaches?

Comment by zachary-robertson on Sufficiently Advanced Language Models Can Do Reinforcement Learning · 2020-08-02T23:56:24.439Z · score: 2 (2 votes) · LW · GW

Correct. That's why the sections on classification and RL are separate. Classification tasks are a subclass of RL. A recurrent task need not be a classification task. In fact, I'd go further and say there's still a huge difference between having an agent that can do RL and having an AGI. That's why I put such speculation at the end.

Having said all that, it seems plausible to me that a language model might be able to reason about what modules it needs and then design them. I implicitly believe this to be the case, but perhaps I could’ve been more explicit. This is more of an empirical question, but if that were possible the difference between that model and AGI would become much smaller in my opinion.

Comment by zachary-robertson on Sufficiently Advanced Language Models Can Do Reinforcement Learning · 2020-08-02T18:54:19.726Z · score: 1 (1 votes) · LW · GW

Hopefully that's fixed! I wrote this as quickly as possible so there may be many tiny errors. Apologies. Let me know if anything else is wrong.

Comment by zachary-robertson on Power as Easily Exploitable Opportunities · 2020-08-02T12:16:41.157Z · score: 1 (1 votes) · LW · GW

Oh, if you read "Understand" from Stories of Your Life and Others by Ted Chiang, you get a scenario where a human finds a way to hack biological feedback loops in other people. At least, that's what I immediately thought of when I read this.

Comment by zachary-robertson on Would AGIs parent young AGIs? · 2020-08-02T02:20:24.085Z · score: 1 (1 votes) · LW · GW

You might be interested in reading The Lifecycle of Software Objects by Ted Chiang.

Comment by zachary-robertson on How will internet forums like LW be able to defend against GPT-style spam? · 2020-07-29T13:55:23.915Z · score: 0 (2 votes) · LW · GW

It's stereotyping to assume that X will copy-paste a lot of posts per hour for little money when X is defined by class/race status. Also, it's not central to your point, so it seems easy to just remove.

Comment by zachary-robertson on How will internet forums like LW be able to defend against GPT-style spam? · 2020-07-29T12:21:05.784Z · score: -1 (4 votes) · LW · GW

I think the stereotyping (‘poor Indian’) is unnecessary to your point.

Comment by zachary-robertson on You Can Probably Amplify GPT3 Directly · 2020-07-27T13:41:43.080Z · score: 2 (2 votes) · LW · GW

Thanks! I forgot to do this. Luckily I can go back through the run and put this in. There is ambiguity whenever it auto-completes, but I hope I did a decent job of noting where this is happening.

Comment by zachary-robertson on You Can Probably Amplify GPT3 Directly · 2020-07-27T13:09:49.102Z · score: 5 (4 votes) · LW · GW

You could prompt with “Q:” + (content) and then “A:”

I use the default settings on the temperature, but I do cut it off after it finishes an answer. However, you likely won't get my exact results unless you literally copy the instances. Moreover, if you gave up after the first response, I think you might've given up too quickly. You can respond to it and communicate more information, as I did. The above really was what I got on the first try. It's not perfect, but that's the point. You can teach it. It's not "it works" or "it doesn't work".
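For concreteness, here's a minimal sketch of the Q:/A: prompting loop I'm describing, assuming the 2020-era OpenAI Python client; the engine name, sampling parameters, and stop sequence are illustrative placeholders rather than a record of my exact settings:

```python
import openai  # assumes the 2020-era OpenAI Python client with an API key configured

# Keep a running transcript so the model sees earlier Q/A pairs as context
# and can be corrected or given more information in later questions.
transcript = ""

def ask(question, temperature=0.7):
    """Append a question to the transcript, sample an answer, and keep it as context."""
    global transcript
    transcript += f"Q: {question}\nA:"
    response = openai.Completion.create(
        engine="davinci",        # illustrative engine name
        prompt=transcript,
        max_tokens=128,
        temperature=temperature, # roughly the default; I cut completions off by hand
        stop=["\nQ:"],           # stop before the model invents the next question itself
    )
    answer = response["choices"][0]["text"].strip()
    transcript += " " + answer + "\n"
    return answer
```

The point of keeping the transcript is that a weak first answer isn't the end: you respond with another question, and the model's later answers condition on the correction.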

I don’t think there are tutorials, but perhaps in due time someone (maybe me) will get to that. I also feel like ‘trying’ to get it to do something might be a sub-optimal approach. This is a subtle difference, but my intent here was to get it to confirm it understood what I was asking by answering questions.

Comment by zachary-robertson on You Can Probably Amplify GPT3 Directly · 2020-07-27T01:33:09.293Z · score: 6 (4 votes) · LW · GW

I agree. Coming up with the right prompts was not trivial. I almost quit several times. Yet, there is a science to this, and I think it'll become more important to turn our focus away from the spectacle aspects of GPT and more towards reproducibility. More so if the way forward is via interrelated instances of GPT.

As an aside, critique seems much easier than generation. I’m cautiously optimistic about prompting GPT instances to “check” output.

Comment by zachary-robertson on Alignment As A Bottleneck To Usefulness Of GPT-3 · 2020-07-23T01:23:33.970Z · score: 3 (2 votes) · LW · GW

My problem is that this doesn't seem to scale. I like the idea of visual search, but I also realize you're essentially bit-rate limited in what you can communicate. For example, I'd about give up if I had to write my reply to you using a topic model. Other places in this thread mention semi-supervised learning. I do agree with the idea of taking a prompt and auto-generating the larger prompt that, at the moment, is being written in manually.

Comment by zachary-robertson on Alignment As A Bottleneck To Usefulness Of GPT-3 · 2020-07-22T12:00:43.985Z · score: 3 (2 votes) · LW · GW

Thanks for the link! I'll partially accept the variations example. That seems to qualify as "show me what you learned". But I'm not sure if that counts as an interface, simply because of the lack of interactivity/programmability.

Comment by zachary-robertson on Alignment As A Bottleneck To Usefulness Of GPT-3 · 2020-07-22T02:19:57.463Z · score: 2 (2 votes) · LW · GW

Have we ever figured out a way to interface with what something has learned that doesn't involve language prompts? I'm serious. What other options are you trying to hint at? I think manipulating hidden layers is a terrible approach, but I won't expound on that here.

Comment by zachary-robertson on Idea: Imitation/Value Learning AIXI · 2020-07-04T21:58:34.540Z · score: 1 (1 votes) · LW · GW

I agree with what you're saying. Perhaps I'm being a bit strong. I'm mostly talking about ambitious value learning in an open-ended environment. The game of Go doesn't have inherent computing capability, so anything the agent does is rather constrained to begin with. I'd hope (guess) that alignment in similarly closed environments is achievable. I'd also like to point out that in such scenarios I'd expect it to normally be possible to give exact goal descriptions, rendering value learning superfluous.

In theory, I’m actually onboard with a weakly superhuman AI. I’m mostly skeptical of the general case. I suppose that makes me sympathetic to approaches that iterate/collectivize things already known to work.

Comment by zachary-robertson on Idea: Imitation/Value Learning AIXI · 2020-07-04T02:19:16.342Z · score: 1 (1 votes) · LW · GW

However, why should you expect [the imitation policy] to be a "better" policy than [a value-learned policy] according to human values?

I feel like this is sneaking in the assumption that we're going to partition the policy into an optimization step and a value learning step. Say we train using data sampled from the human policy; then my point is that the imitation-learned policy generalizes to the human policy optimally. Value learning doesn't do this. In the context of algorithmic complexity, value learning inserts a prior about how a policy ought to be structured.

On a philosophical front, I'm of the opinion that any error in defining "human values" will blow up arbitrarily if given to an optimizer with arbitrary capability. Thus, the only way to safely work with this inductive bias is to constrain the capability of the optimizer. If this is done correctly, I'd assume the agent will only be barely superhuman according to "human values". These are extra steps and regularizations that effectively remove the inductive bias with the promise that we can then control how "superhuman" the agent will be. My conclusion (tentatively) is that there is no way to arbitrarily extrapolate human values and doing so, even a little, introduces risk.
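To make the "blow up arbitrarily" worry a bit more concrete, here's a sketch in my own notation; treat it as an assumption about how to formalize the point rather than anything from the original post:

```latex
% Learned reward = true "human values" + specification error:
\[ \hat{R} = R + \epsilon \]
% An unconstrained optimizer maximizes the learned reward,
\[ \pi^{*} = \arg\max_{\pi} \mathbb{E}_{\pi}\!\left[\hat{R}\right]
           = \arg\max_{\pi} \left( \mathbb{E}_{\pi}[R] + \mathbb{E}_{\pi}[\epsilon] \right), \]
% so nothing stops it from trading true value for error: the larger the feasible
% policy class (the more capable the optimizer), the more room the \epsilon term
% has to dominate. Constraining the optimizer's capability shrinks that room.
```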

Comment by zachary-robertson on Idea: Imitation/Value Learning AIXI · 2020-07-04T00:37:45.464Z · score: 1 (1 votes) · LW · GW

I guess I'm confused by your third point. It seems clear that AIXI optimized on any learned reward function will have superhuman performance. However, AIXI is completely unaligned via wireheading. My point with the Kolmogorov argument is that AIXIL is much more likely to behave reasonably than AIXVL. Almost by definition, AIXIL will generalize most similarly to a human. Moreover, any value learning attempt will have worse generalization capability. I'm hesitant, but it seems I should conclude value alignment is a non-starter.

Comment by zachary-robertson on Superexponential Historic Growth, by David Roodman · 2020-06-19T22:43:57.320Z · score: 3 (2 votes) · LW · GW

It'd be nice if they had plotted the projected singularity date using partial slices (filtrations) of the data. In other words, if we did the analysis in year X, what is the projected singularity year Y?
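A rough sketch of the kind of analysis I mean, using a generic finite-time-singularity fit in place of Roodman's actual (more careful) model; the hyperbolic functional form and the data-loading step are my assumptions:

```python
import numpy as np
from scipy.optimize import curve_fit

def hyperbolic(t, a, b, t_s):
    """Finite-time-singularity growth: output blows up as t approaches t_s."""
    return a / np.power(t_s - t, b)

def projected_singularity(years, gwp, cutoff):
    """Fit only the data available up to `cutoff` and return the implied singularity year."""
    mask = years <= cutoff
    p0 = [gwp[mask][-1], 1.0, cutoff + 100.0]  # crude initial guess for (a, b, t_s)
    (a, b, t_s), _ = curve_fit(hyperbolic, years[mask], gwp[mask], p0=p0, maxfev=20000)
    return t_s

# years, gwp = ...  # hypothetical long-run gross-world-product series
# for cutoff in range(1700, 2020, 20):
#     print(cutoff, round(projected_singularity(years, gwp, cutoff)))
```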

Comment by zachary-robertson on Inaccessible information · 2020-06-04T04:50:57.739Z · score: 1 (1 votes) · LW · GW

As an aside, it really seems like the core issue rests with unease about using machine generated concepts to predict things about the world. Yet, the truth is that humans normally operate in the very way you don't want. We explain reasoning by heavily sanitizing our thought process. For example, I came up with several bad intuitions for why ML is a reasonable solution and that informed me that humans are also bad at the thing you think we're good at. See how convoluted that last sentence was? The point is that when I formally type up my thoughts I'm also sanitizing them so you don't see reasoning structure I don't want you to see.

The basic motivation to avoid sharing sensitive information leads me to maintain somewhat high differential privacy in communication. I'd like the result to be rather disentangled from the process. Well, we certainly agree that the process largely consists of inaccessible information. Trivially, you're only judging my writing based on what you can read...even if you make a latent model about me. Putting these two facts together, disagreement/debate only really exists if we have differing internal models.

Using your post as a start, I'd say the strategic advantage of inaccessible information incentivizes agents (instrumentally) to have internal models of the kind I just described. This means that differential privacy would have instrumental value as a mechanism to prevent agents from peering inside one another without explicit permission.

Comment by zachary-robertson on Inaccessible information · 2020-06-04T04:26:11.526Z · score: 1 (1 votes) · LW · GW

I actually enjoyed the post, but I'm not convinced of the relevance of this topic. You seem to be concerned that certain queries to an intelligent oracle might return false but plausible answers. On the one hand, this seems relevant. On the other hand, I struggle to actually come up with an example where this can happen. In principle, everything a model does with data is observable. You write,

At that point “picking the model that matches the data best” starts to look a lot like doing ML, and it’s more plausible that we’re going to start getting hypotheses that we don’t understand or which behave badly.

This confuses me. Modern ML is designed to engage in automatic feature generation. It turns out that engineered features introduce more bias than they're worth. For example, it's fair to point out racial bias in facial recognition technology. However, it's also sensible to argue that creating a method to automatically engineer concepts to discriminate with is a major advance in removing human bias (inaccessible information) from the process.

But then you have groups that intentionally choose to use controversial concepts, such as facial features, to infer things such as income, propensity to violence, etc. This is where the bait-and-switch seems to come in. It's not the machine's fault for being used to make spurious judgments. The real culprit is poor human reasoning. So then,

...or we need to figure out some way to access the inaccessible information that “A* leads to lots of human flourishing.”

sets off an alarm bell in my head. How is this any different from trying to use ML to 'catch' terrorists via facial recognition? While I'll readily admit ML models can learn good/bad concepts to use for a downstream task, the idea that these concepts also map onto human-readable concepts seems rather tenuous.

So I conclude that the idea of making all concepts used by a machine human-readable seems dubious. You really want good, high-dimensional data that makes no assumptions about the concepts it's going to be used to model with. ML concepts come with PAC guarantees; people's don't.

Comment by zachary-robertson on Reexamining The Dark Arts · 2020-06-02T05:16:39.401Z · score: 0 (3 votes) · LW · GW

This is out of context, perhaps even nitpicking. It’s clear they’re talking about universality. In that context, sustainable implies allowable.

Comment by zachary-robertson on GPT-3: a disappointing paper · 2020-05-29T19:32:06.851Z · score: 22 (12 votes) · LW · GW

Reading this, I get the impression you had mismanaged expectations of what GPT-3 would do (i.e. that it should only be reserved for essentially pseudo-AGI)... but scaling GPT to the point of diminishing returns is going to take several more years. As everyone is stressing, they don't even fit the training data at the moment.

Comment by zachary-robertson on OpenAI announces GPT-3 · 2020-05-29T19:03:42.118Z · score: 5 (3 votes) · LW · GW

GPT-2 was a hype fest, while this gets silently released on arXiv. I'm starting to think there's something real here. I think before I'd have laughed at anyone who suggested GPT-2 could reason. I still think that's true with GPT-3, but I wouldn't laugh anymore. It seems possible massive scaling could legitimately produce a different kind of AI than anything we've seen yet.

Comment by zachary-robertson on OpenAI announces GPT-3 · 2020-05-29T19:00:29.387Z · score: 5 (3 votes) · LW · GW

While I’m not sure how easy plugging into a DRL algorithm will be, this seems to be the obvious next step. On the other hand, I suspect DRL isn’t really mature enough to work as an integrating paradigm.

Comment by zachary-robertson on What is your internet search methodology ? · 2020-05-23T21:07:38.714Z · score: 4 (2 votes) · LW · GW

I asked a related question and got some answers about finding things on the internet. Didn’t completely satisfy me, but my question was significantly more vague so it might help you!

Comment by zachary-robertson on Orthogonality · 2020-05-21T03:51:25.210Z · score: 5 (3 votes) · LW · GW

I think that a hidden assumption here is that improving a weak skill always has a positive spillover effect on other skills. There might be a hidden truth within this. Namely, sometimes unlearning things will be the best way to make progress.

Comment by zachary-robertson on Orthogonality · 2020-05-21T03:41:12.386Z · score: 3 (2 votes) · LW · GW

Perhaps this can be connected with another recent post. It was pointed out in Subspace Optima that when we optimize, we do so under constraints, external or internal. It seems like you had an internal constraint stopping you from optimizing over the whole space. Instead you focused on what you thought was the most correlated trait. This almost reads like an insight following the realization that you've been optimizing a skill along an artificial sub-space.

Comment by zachary-robertson on What are your greatest one-shot life improvements? · 2020-05-17T21:24:53.288Z · score: 1 (1 votes) · LW · GW

Do this at the end of the day as a way to review progress?

Comment by zachary-robertson on What are your greatest one-shot life improvements? · 2020-05-17T21:22:59.475Z · score: 11 (7 votes) · LW · GW

Can I get clarification on what sort of emotions were problematic and/or what reactions were problematic? I’m wondering if this was rumination or in the moment reactions.

Comment by zachary-robertson on What are your greatest one-shot life improvements? · 2020-05-17T13:55:48.660Z · score: 2 (2 votes) · LW · GW

This also helped me for getting up on time!

Comment by zachary-robertson on What newsletters are you subscribed to, and why? · 2020-05-14T18:56:58.487Z · score: 2 (2 votes) · LW · GW

Just a meta-comment: if you don't give a description of the feed, I find myself very unlikely to look at the URL.

Comment by zachary-robertson on Zachary Robertson's Shortform · 2020-05-11T02:57:03.630Z · score: 1 (1 votes) · LW · GW

I think it's worth taking a look at what's out there:

  • SpanBERT
    • Uses random spans to do masked pre-training
    • Seems to indicate that using longer spans is essentially difficult
  • Distillation of BERT Models
    • BERT embeddings are hierarchical

Comment by zachary-robertson on Zachary Robertson's Shortform · 2020-05-11T02:29:48.606Z · score: 1 (1 votes) · LW · GW

I'm aware of this. I'm slowly piecing together what I'm looking for if you decide to follow this.

Comment by zachary-robertson on Zachary Robertson's Shortform · 2020-05-11T02:13:00.658Z · score: 1 (1 votes) · LW · GW

Markov and general next-token generators work well when conditioned with text. While some models, such as BERT, are able to predict masked tokens, I'm not aware of models that are able to generate the most likely sentence that would sit between a given start/end prompt.

It's worth working in the Markov setting to get a grounding for what we're looking for. The core of a Markov model is the transition matrix $M$, where $M_{ij}$ tells us the conditional likelihood of the token $j$ following immediately after the token $i$. The rules of conditional probability allow us to write,

$$P(x_1 = k \mid x_0 = s, x_2 = e) = \frac{M_{sk}\,M_{ke}}{(M^2)_{se}}$$

This gives us the probability of a token occurring immediately between the start/end prompts. In general we're interested in what happens if we 'travel' from the starting token $s$ to the ending token $e$ over $T$ time steps. Say we want to see the distribution of tokens at time step $t$. Then we'd write,

$$P(x_t = k \mid x_0 = s, x_T = e) = \frac{(M^t)_{sk}\,(M^{T-t})_{ke}}{(M^T)_{se}}$$

This shows us that we can break up the conditional generation process into a calculation over transition probabilities. We could write this out for an arbitrary sequence of separated words. From this perspective we'd be training a model to perform a regression over the words being generated. This is the sense in which we already use outlines to effectively create regression data-sets to model arguments.

What would be ideal is to find a way to generalize this to a non-Markovian, preferably deep-learning, setting. This is where I'm stuck at the moment. I'd want to understand where the SOTA is on this. The only options that immediately come to mind seem to be tree-search over tokens or RL. From the regression point of view, it seems like you'd want to try fitting the 'training data' such that the likelihood for the result is as high as possible.
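A small numpy sketch of the Markov case, following my reconstruction of the formulas above; the toy transition matrix and token indices are made up for illustration:

```python
import numpy as np

def bridge_distribution(M, s, e, t, T):
    """Distribution over the token at step t, conditioned on starting at token s
    (step 0) and ending at token e (step T), for transition matrix M
    (rows = current token, columns = next token)."""
    M_t = np.linalg.matrix_power(M, t)         # paths s -> k in t steps
    M_rest = np.linalg.matrix_power(M, T - t)  # paths k -> e in the remaining steps
    M_total = np.linalg.matrix_power(M, T)     # all paths s -> e in T steps (normalizer)
    return M_t[s, :] * M_rest[:, e] / M_total[s, e]  # sums to 1 over the middle token k

# Toy three-token example:
# M = np.array([[0.1, 0.6, 0.3],
#               [0.2, 0.2, 0.6],
#               [0.5, 0.3, 0.2]])
# print(bridge_distribution(M, s=0, e=2, t=1, T=3))
```

A non-Markovian version would replace the matrix powers with forward samples (or a learned backward model), which is roughly where the tree-search or RL options come in.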

Comment by zachary-robertson on The Mind: Board Game Review · 2020-05-10T03:54:36.291Z · score: 3 (2 votes) · LW · GW

I'll independently support that this is an amazing game. It's a really good icebreaker, or a way to get a sense of a person, because it's short, collaborative, and intense.

Comment by zachary-robertson on Zachary Robertson's Shortform · 2020-05-10T03:45:39.863Z · score: 1 (1 votes) · LW · GW

If we're taking the idea that arguments are paths in topological space seriously, I feel like conditioned language models are going to be really important. We already use outlines to effectively create regression data-sets to model arguments. It seems like modifying GPT-2 so that you can condition on start/end prompts would be incredibly helpful here. More speculatively, I think that GPT-2 is near the best we'll ever get at next-word prediction. Humans use outline-like thinking much more often than is commonly supposed.

Comment by zachary-robertson on Zachary Robertson's Shortform · 2020-05-10T03:36:07.405Z · score: 1 (1 votes) · LW · GW

Not really sure; if I was really going for it, I could do about 15-25 posts. I'm going back and forth on which metrics to use. This seems highly tied to what I actually want feedback on. What do you mean by Q&A?

Comment by zachary-robertson on Maths writer/cowritter needed: how you can't distinguish early exponential from early sigmoid · 2020-05-07T14:01:53.276Z · score: 7 (3 votes) · LW · GW

Assuming this turns out to be interesting, I'd be interested in co-writing a paper like this. It does seem true that the literature mostly deals with a non-sequential prediction task. It seems like you want an online predictor where data is received in time-sequential order. I'm not finding anything for that immediately either. I think the problem is interesting and will see if I can do some literature review or just solve the problem over the weekend.
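As a starting point, here's the kind of online comparison I have in mind: at each time step, fit both an exponential and a logistic curve to the data seen so far and compare residuals. The functional forms and initial guesses are my assumptions, just to illustrate that the two fits are nearly indistinguishable early on:

```python
import numpy as np
from scipy.optimize import curve_fit

def exponential(t, a, r):
    return a * np.exp(r * t)

def logistic(t, k, r, t0):
    return k / (1.0 + np.exp(-r * (t - t0)))

def online_fit_comparison(t, y, start=5):
    """For each prefix of the series, fit both curves and report residual sums of squares."""
    results = []
    for n in range(start, len(t) + 1):
        tt, yy = t[:n], y[:n]
        try:
            p_exp, _ = curve_fit(exponential, tt, yy, p0=[max(yy[0], 1e-3), 0.1], maxfev=10000)
            p_log, _ = curve_fit(logistic, tt, yy, p0=[2 * yy.max(), 0.1, tt[-1]], maxfev=10000)
            rss_exp = np.sum((yy - exponential(tt, *p_exp)) ** 2)
            rss_log = np.sum((yy - logistic(tt, *p_log)) ** 2)
            results.append((n, rss_exp, rss_log))
        except RuntimeError:
            results.append((n, np.nan, np.nan))  # one of the fits failed to converge
    return results

# t = np.arange(40.0)
# y = logistic(t, k=100, r=0.3, t0=30) + np.random.normal(0, 1, t.size)  # secretly a sigmoid
# for n, e, s in online_fit_comparison(t, y):
#     print(n, round(e, 2), round(s, 2))
```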

Comment by zachary-robertson on Zachary Robertson's Shortform · 2020-05-06T00:42:10.410Z · score: 3 (2 votes) · LW · GW

I'm interested in converting notes I have about a few topics into posts here. I was really trying to figure out why this would be a good use of my time. The notes are already quite readable to me. I thought about this for a while, and it seems as though I'm explicitly interested in getting feedback on some of my thought processes. I'm aware of Goodhart's law, so I know better than to have an empty plan to simply maximize my karma. However, on the other end, I don't want to simply polish notes. If I were to constrain myself to only write about things I have notes on, then it seems I could once again explicitly try to maximize karma. In fact, if I felt totally safe doing this, it'd be a fun game to try out, possibly even comment on. Of course, eventually, the feedback I'd receive would start to warp what kinds of future contributions I'd make to the site, but this seems safe. Given all of this, I'd conclude I can explicitly maximize different engagement metrics, at least at first.

Comment by zachary-robertson on Stop saying wrong things · 2020-05-05T18:08:58.608Z · score: 2 (2 votes) · LW · GW

The first couple of sentences seem reasonable, something I was thinking but didn't comment. However, the rest of this seems needlessly aggressive. I'd almost recommend paring down to the limited critique and fleshing that out in more detail.

Comment by zachary-robertson on Mati_Roy's Shortform · 2020-05-05T03:15:50.238Z · score: 1 (1 votes) · LW · GW

Tangential, but I'd venture to guess there's significant correlation between title-choice (word-vec) and upvotes on this site. I wonder if there'd be a significant difference here as compared with say Reddit?

Comment by zachary-robertson on Against strong bayesianism · 2020-04-30T18:50:57.488Z · score: 3 (2 votes) · LW · GW

This feels off. The quote is basically right. You need much more than limiting behavior to say anything about whether or not the processes are ‘similar’ in a useful way before that. Estimating a useful domain of applicability is important and you can’t just assume it as a default whenever two things happen to end up looking similar through a limiting process.

That is mostly a reason for saying "Problem X can be solved in big-O time via method Y" as opposed to saying "Given infinite compute, X can be solved via method Y." It's occasionally cool to see that the latter can work, because we usually work with the assumption that we're really talking about the former case. This is not default behavior, it's a prior.

Comment by zachary-robertson on Would 2009 H1N1 (Swine Flu) ring the alarm bell? · 2020-04-07T19:00:04.759Z · score: 3 (3 votes) · LW · GW

The original question said "or". There was no vaccine or treatment. You could've said neither/nor, but you phrased it this way for a reason, I thought. Also, not having a vaccine is bad, so I'd be annoyed if you changed the criteria so both had to be met.

Also, Mexico is the 15th-largest economy by nominal GDP, so I'm not sure what you mean by "major" either. I'm trying to point out that your list gives false alarms. You can claim that this alarm is meant to spark debate; I'm just saying its original purpose is questionably achieved. In fact, saying there's room for debate threatens to ruin the entire premise of the alarm, no? At that point we're just discussing whether I think your classifier is better than my classifier.

Comment by zachary-robertson on Would 2009 H1N1 (Swine Flu) ring the alarm bell? · 2020-04-07T14:11:36.935Z · score: 5 (4 votes) · LW · GW

2.4 Should be yes. Clearly there wasn’t a vaccine.

4.2 Should be yes. Mexico was clearly on lockdown. You shouldn't get to cherry-pick. It's worth mentioning that the CDC was also recommending shutting down schools for 14 days.

2.1 Only in retrospect do we know what the mortality rate truly was. A rate of 1.9% was reported in Mexico at one point.

It seems like H1N1 would've triggered this simply because, if you'd been following the checklist as news came in, you would've noted there wasn't a vaccine ('check') and then seen that Mexico went on lockdown ('check'). There wouldn't have been the benefit of hindsight. Going through this list, I conclude this would trigger false alarms.

I'm assuming the checklist was designed to score 16/16 for COVID-19 given the cutoffs in the questions, so it seems worth pointing out it's also possible to have the opposite occur, where a disease doesn't ring the bell simply because it's different from COVID-19 in a few key ways. In the grand scheme, this alarm bell seems designed to overfit to our current pandemic.

Obviously there should exist an alarm bell, but it seems too easy to subjectively wiggle the question answers. This is a prototype; the idea is nice, though!

Source: https://www.wired.com/2009/06/apocalypse-not-behind-swine-flu-hysteria/

Comment by zachary-robertson on An alarm bell for the next pandemic · 2020-04-07T03:11:59.381Z · score: 2 (2 votes) · LW · GW

From context, it's presumably the 2009 outbreak, in which roughly 60 million people were infected in the US.

Comment by zachary-robertson on [deleted post] 2020-03-30T17:34:00.549Z

What is GameB?

Comment by zachary-robertson on Blog Post Day II · 2020-03-23T16:40:50.271Z · score: 7 (4 votes) · LW · GW

Is it against the spirit to start going now and use Saturday as the deadline? I have some journal entries I’d like to spin-up and a deadline would help!

Comment by zachary-robertson on Donald Hobson's Shortform · 2020-03-21T03:58:26.990Z · score: 1 (1 votes) · LW · GW

Just a thought, maybe it's a useful perspective. It seems kind of like a game. You choose whether or not to insert your beliefs and they choose their preferences. In this case it just turns out that you prefer life in both cases. What would you do if you didn't know whether or not you had an Alice/Bob and had to choose your move ahead of time?

Comment by zachary-robertson on March 18th: Daily Coronavirus Links · 2020-03-20T16:20:21.911Z · score: 1 (1 votes) · LW · GW

Do you guys have an RSS feed I could subscribe to?

Comment by zachary-robertson on What Resources on Journal Analysis are Available? · 2019-12-28T23:15:29.062Z · score: 1 (1 votes) · LW · GW

Well, the ordering refers to where the entry is. It’s possible to make edits after the fact. For instance, I correct typos whenever I see them. However, I don’t ‘delete’ entries.