Posts

How important are accurate AI timelines for the optimal spending schedule on AI risk interventions? 2022-12-16T16:05:39.093Z
The optimal timing of spending on AGI safety work; why we should probably be spending more now 2022-10-24T17:42:05.865Z
Tristan Cook's Shortform 2022-07-17T12:38:30.653Z
Replicating and extending the grabby aliens model 2022-04-23T00:38:07.059Z

Comments

Comment by Tristan Cook on [deleted post] 2023-11-03T13:41:05.782Z

If we can find a problem where EDT clearly and irrevocably gives the wrong answer, we should not give it any credence

I think this is potentially an overly strong criterion for decision theories - we should probably restrict the problems to something like a fair problem class, else we end up with no decision theory receiving any credence.

I also think "wrong answer" is doing a lot of work here. Caspar Oesterheld writes

However, there is no agreed-upon metric to compare decision theories, no way to assess even for a particular problem whether one decision theory (or its recommendation) does better than another. (This is why the CDT-versus-EDT-versus-other debate is at least partly a philosophical one.) In fact, it seems plausible that finding such a metric is “decision theory-complete” (to butcher another term with a specific meaning in computer science). By that I mean that settling on a metric is probably just as hard as settling on a decision theory and that mapping between plausible metrics and plausible decision theories is fairly easy.

Comment by Tristan Cook on Zach Stein-Perlman's Shortform · 2023-03-31T18:08:26.970Z · LW · GW

I deeply sympathize with the presumptuous philosopher but 1a feels weird.

Yep! I have the same intuition

Actually putting numbers on 2a (I have a post on this coming soon), the anthropic update seems to say (conditional on non-simulation) there's almost certainly lots of aliens all of which are quiet, which feels really surprising.

Nice! I look forward to seeing this. I did a similar analysis - considering both SIA + no simulations and SIA + simulations - in my work on grabby aliens.

Comment by Tristan Cook on Zach Stein-Perlman's Shortform · 2023-03-30T08:40:28.236Z · LW · GW

Which of them feel wrong to you? I agree with all of them other than 3b, which I'm unsure about - I think this comment does a good job of unpacking things.

2a is Katja Grace's Doomsday argument. I think 2aii and 2aiii depend on whether we're allowing simulations; if a faster expansion speed (either the cosmic speed limit or the engineering limit on expansion) meant more ancestor simulations, then this could cancel out the fact that faster-expanding civilizations prevent more alien civilizations from coming into existence.

Comment by Tristan Cook on Alignment-related jobs outside of London/SF · 2023-03-23T16:41:10.392Z · LW · GW

At the Center on Long-Term Risk we're open to remote work. Currently we're only hiring summer research fellows, and the application page states (as with other previous positions, iirc)

Location: We prefer summer research fellows to work from our London offices, but will also consider applications from people who are unable to relocate.

Last year we had one fully remote fellow.

Comment by Tristan Cook on Tristan Cook's Shortform · 2023-02-15T20:39:49.884Z · LW · GW

The lifecycle of 'agents'

Epistemic status: mostly speculation and simplification, but I stand by the rough outline of 'self-unaware learners -> self-aware consequentialists struggling with multipolarity -> static, rule-following, not-thinking-too-hard non-learners'. The two most important transitions are "learning" and then, once you've learned enough, "committing/self-modifying (away from learning)".

Setup

I briefly sketch three phases that I guess ‘agents’ go through, and consider how two different metrics change during this progression. This is a highly speculative just-so story that currently sounds intuitively correct to me, though I’m not very confident in much of what I’ve written and leaned too much into the ‘fun’ heuristic at times.

The transition from the first stage to the second stage is learning to become more consequentialist. The transition from the second stage to the third is self-modifying away from consequentialism.

In each of the three stages I consider the predictability of both (a) the agent’s decisions and (b) the agent’s environment, when one has either (I) full empirical facts about the agent and environment or (II) partial empirical facts. I don’t think these two properties are the most important or relevant ones to track, but they helped guide my intuitions in writing this life-cycle.

Phase 1: the transition from self-unaware and dumb to self-aware and smart

Agents in this stage are characterised by learning, but not yet self-modifying - they have not learned enough to do this yet! They have started in motion (possibly by selection pressure), and are on the right track towards becoming more consequentialist / VNM rational / maximise-y.

They’re generally relatively self-centred and don’t model other agents in much detail if at all. They begin to have some self-awareness. There’s not too much sense that they consider different actions: the process to decide between actions is relatively ‘unconscious’ and the ability to consider the value of modifying oneself is beyond the agent for a while. They stumble into the next stage by gaining this ability.

These agents are updating on everything and thus ‘winning’ more in their world. The ability to move into stage two requires some minimum amount of ‘winning’ (due to selection pressures).

  

Full empirical facts
  • Agent’s decisions: High. Computationally the agent is not doing anything advanced, so one can easily simulate them.
  • Agent’s environment: Low-medium. The environment is relatively unaffected by the agent since they are not very good at achieving their goals. One might expect to see some change towards the satisfaction of their preferences. This is more true in ‘easy mode’, i.e. worlds where there is little to no competition.

Partial empirical facts
  • Agent’s decisions: Medium. The agent’s behaviour, since it is poorly optimised, could fit any number of internal states. Further, there may be significant randomness involved in the decision making. This could be deliberate, e.g. for exploration, or due to low error correction in their decision-making module, in which case physical features of the world can influence their decisions.
  • Agent’s environment: Slightly lower than above. Their goals and preferences are not necessarily obvious from their environment. (Again, less competition in the environment, an easier-to-achieve goal, or a cruder goal all make this prediction easier.)

Phase 2: self-aware, maximise-y and beginning to model other agents

Agents in this stage are consequentialists. Between stages one and two, they now reason about their own decision process and are able to consider actions that modify their action-choosing process. They also remain updateful and have the capacity to reason about other agents (not limited to their future selves, who may be very different). These three features make stage two agents unstable: they quickly self-modify away. 

At the end of this stage, the agents are thinking in great detail about other agents. They can ‘win’ in some interactions by outthinking other agents. The interactions are not necessarily restricted to nearby agents. The acausal landscape is massively multipolar and the stakes (depending on preferences) may be much higher than in the local spacetime environment. 

Full empirical facts
  • Agent’s decisions: Low. Agents are doing many logical steps to work out what other agents are thinking.
  • Agent’s environment: High. They are beginning to build computronium and converge on optimal designs for their environment (e.g. Dyson-sphere-like technology).

Partial empirical facts
  • Agent’s decisions: Low-medium. The environment still gives lots of clues about the agent’s preferences and beliefs, and the agent is following a relatively simple-to-write-down algorithm. Further, the agent is already optimising for error correction, preserving its existing values and improving its cognitive abilities, so its mind is relatively orderly. However, the process by which they move from stage 2 to 3 (which happens straight upon reaching stage 2) may be highly noisy. This commitment race may be a function of the agent’s prior beliefs about facts they have little evidence for, and this prior may be relatively arbitrary.
  • Agent’s environment: High. The exact contents of some of the computronium may be hard to predict (which is pretty much predicting their decisions) but some will be easy (e.g. their utilitronium).

Phase 3: galaxy-brained, set in their ways and 'at one' with many other agents

Agents in this stage are in it for the long haul (trillions of years). Between phases 2 and 3 the agent makes irreversible commitments, making themselves more predictable to other agents and settling into game-theoretic equilibria. Phase 3 agents act in ways very correlated with other agents (potentially in a coalition of many agents all running the same algorithm).

Phase 3 agents have maxed out their lightcone with physical stuff and reached the end of their tech tree. They have nothing left to learn and are most likely updateless (or similar, e.g. a patchwork of many commitments constraining their actions). There’s not much thinking left for the agent to do; everything was decided a long time ago (though maybe this thinking - the transition from phase 2 to 3 - took a while). The agent mostly sticks around just to maintain their optimised utility (potentially using something like a compromise utility function following acausal trade).

The universe expands into many causally disconnected regions and the agent is ‘split’ into multiple copies.  Whether these are still meaningfully agents is not clear: I would guess they are well imagined as a non-human animal but with overpowered instincts and abilities to protect themselves and their stuff - like a sleeping dragon guarding its gold. 

Full empirical facts
  • Agent’s decisions: High. There are not many decisions left to make. They are pretty much lobotomised versions of their “must think about the consequences of everything”-former selves. They follow simple rules and live in a relatively static world.
  • Agent’s environment: High. Massive stability (after all the stars rearranged into the most efficient arrangement). The world is relatively static.

Partial empirical facts
  • Agent’s decisions: High. They have very robust error-correcting mechanisms, and also mechanisms to prevent the emergence of any consequentialist (sub-)agents with any (bargaining) power within their causal control.
  • Agent’s environment: High. There’s a lot of redundancy in the environment in order to figure out what’s going on. Not much changes.
Comment by Tristan Cook on Taboo P(doom) · 2023-02-03T17:59:08.834Z · LW · GW

I agree. I think we should break "doom" into at least these four outcomes: {human extinction, humans remain on Earth} x {lots of utility achieved, little to no utility}.

Comment by Tristan Cook on How important are accurate AI timelines for the optimal spending schedule on AI risk interventions? · 2022-12-20T11:15:23.867Z · LW · GW

Mmm. I'm a bit confused about the short timelines: 50% by 2030 and 75% by 2030 seem pretty short to me.

I think the medium timelines I use have a pretty long tail, but the 75% by 2060 is pretty much exactly the Metaculus community's 75% by 2059.

Comment by Tristan Cook on Using Obsidian if you're used to using Roam · 2022-12-11T09:59:13.352Z · LW · GW

Thanks for sharing! I've definitely had productivity gains from using a similar setup (Logseq, which is pretty much an open-source clone of Roam/Obsidian and stores stuff locally as .md files).

Comment by Tristan Cook on Tristan Cook's Shortform · 2022-11-29T10:57:22.802Z · LW · GW

[Crossposted to the EA Forum]

This is a short follow-up to my post on the optimal timing of spending on AGI safety work, which, given exact values for the future real interest rate, diminishing returns and other factors, calculated the optimal spending schedule for AI risk interventions.

This has also been added to the post’s appendix and assumes some familiarity with the post.

Here I consider the most robust spending policies, supposing uncertainty over nearly all parameters in the model[1] rather than finding the optimal solutions based on point estimates, and again find that the community's current spending rate on AI risk interventions is too low.

My distributions over the model parameters imply that

  • Of all fixed spending schedules (i.e. spending X% of your capital per year[2]), the best strategy is to spend 4-6% per year.
  • Of all simple spending schedules with two regimes (now until 2030, and 2030 onwards), the best strategy is to spend ~8% per year until 2030, and ~6% afterwards.

I recommend entering your own distributions for the parameters in the Python notebook here.[3] Further, these preliminary results use few samples: more reliable results would be obtained with more samples (and more computing time).

I allow for post-fire-alarm spending (i.e., we are certain AGI is soon and so can spend some fraction of our capital). Without this feature, the optimal schedules would likely recommend a greater spending rate.
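
For readers who want the gist of the computation without opening the notebook, here is a minimal Monte Carlo sketch of the kind of comparison described above: sample the uncertain parameters, simulate capital under a candidate spending schedule, and average a crude utility proxy over the samples. The distributions and the utility proxy below are illustrative placeholders of my own, far simpler than the model in the notebook (no research/influence split, no post-fire-alarm spending), so don't expect the numbers to match the results above.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 4_000  # number of Monte Carlo samples

# Sample the uncertain parameters once, so every schedule is scored on the
# same draws. These distributions are illustrative placeholders, not the
# ones used in the notebook.
interest = rng.normal(0.04, 0.02, N)    # annual real interest rate
alpha = rng.uniform(0.2, 0.6, N)        # diminishing-returns exponent
agi_year = rng.integers(2025, 2080, N)  # year AGI arrives

def utility(spend_rate, interest_i, alpha_i, agi_year_i,
            start_year=2023, capital=1.0):
    """Crude utility proxy: concave returns on each year's spending before AGI."""
    total = 0.0
    for year in range(start_year, agi_year_i):
        spend = spend_rate(year) * capital
        capital = (capital - spend) * (1 + interest_i)
        total += spend ** alpha_i
    return total

def mean_utility(spend_rate):
    return np.mean([utility(spend_rate, i, a, int(y))
                    for i, a, y in zip(interest, alpha, agi_year)])

# Fixed schedules: spend a constant fraction of capital every year.
for rate in (0.02, 0.04, 0.06, 0.08, 0.10):
    print(f"fixed {rate:.0%}: {mean_utility(lambda year, r=rate: r):.3f}")

# A two-regime schedule: ~8%/year until 2030, ~6%/year afterwards.
print(f"two-regime: {mean_utility(lambda year: 0.08 if year < 2030 else 0.06):.3f}")
```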


Caption: Fixed spending rate. See here for the distributions of utility for each spending rate.

Caption: Simple - two-regime - spending rate


Caption: The results from a simple optimiser[4], when allowing for four spending regimes: 2022-2027, 2027-2032, 2032-2037 and 2037 onwards. This result should not be taken too seriously: more samples should be used, the optimiser should be run for a greater number of steps, and more intervals should be used. As with other results, this is contingent on the distributions of the parameters.

 

Some notes

  • The system of equations - describing how a funder’s spending on AI risk interventions changes the probability of AGI going well - is unchanged from the main model in the post.
  • This version of the model randomly generates the real interest rate, based on user inputs. So, for example, one’s capital can go down.

Caption: An example real interest rate function, cherry-picked to show how our capital can go down significantly. See here for 100 unbiased samples.

Caption: Example probability-of-success functions. The filled circle indicates the current preparedness and probability of success.

 

Caption: Example competition functions. They all pass through (2022, 1) since the competition function is the relative cost of one unit of influence compared to the current cost. 

 

This short extension started due to a conversation with David Field and comment from Vasco Grilo; I’m grateful to both for the suggestion.

  1. ^

     Inputs that are not varied include historic spending on research and influence and the rate at which the real interest rate changes; the post-fire-alarm returns are taken to be the same as the pre-fire-alarm returns.

  2. ^

    And supposing a 50:50 split between spending on research and influence

  3. ^

     This notebook is less user-friendly than the notebook used for the main optimal spending result (though not user-unfriendly) - let me know if improvements to the notebook would be useful for you.

  4. ^

    The intermediate steps of the optimiser are here.

Comment by Tristan Cook on Decision theory does not imply that we get to have nice things · 2022-10-18T11:09:34.728Z · LW · GW

Adjacent to interstice's comment about trade with neighbouring branches: if the AI is sufficiently updateless (i.e. it is reasoning from a prior under which it thinks it could have had human values), then it may still do nice things for us with a small fraction of the universe.

Johannes Treutlein has written about this here.

Comment by Tristan Cook on Replicating and extending the grabby aliens model · 2022-08-22T15:26:17.153Z · LW · GW

Thanks for your response Robin! I've written a reply to you on the EA Forum here

Comment by Tristan Cook on Replicating and extending the grabby aliens model · 2022-07-21T23:55:15.339Z · LW · GW

Mean 0.7

Comment by Tristan Cook on Replicating and extending the grabby aliens model · 2022-07-21T17:33:48.688Z · LW · GW

Sorry, this is very unclear notation. It's meant to be a random variable, exponentially distributed with parameter 0.7.

Comment by Tristan Cook on Tristan Cook's Shortform · 2022-07-18T17:30:28.350Z · LW · GW

Using DuckDuckGo as my address bar search..
... but  rarely actually searching DuckDuckGo. DuckDuckGo allows for 'bangs' in the search.

For example "London !gmaps" redirects your search to Google Maps. At least half of my searches involve "!g" to search Google since the DuckDuckGo search isn't very good.  

The wildcard "!" takes you to the first result of DuckDuckGo's search. For example, "Interstellar !imdb" is slower than "Interstellar imdb !", since the latter takes you straight to the first result of the DuckDuckGo search whereas the former takes you to the IMDb search results page.

When using DuckDuckGo with Bangs, I highly recommend the extension "DuckDuckGo !bangs but Faster" (Chrome, Firefox)  which processes the bangs client side.

There is a LessWrong bang (!lw) and an EA Forum bang (!eaf) - both are currently broken, but I've submitted requests to fix them.

Comment by Tristan Cook on Tristan Cook's Shortform · 2022-07-17T12:42:22.041Z · LW · GW

Not using a web browser on my phone

I've gone nearly a year without using a web browser on my phone. I minimise the number of apps that are used for websites (e.g. I don't use the Reddit or Facebook apps but heavily rely on the Google Maps app).

This habit makes me more attached to my laptop (and I feel more helpless without it), which seems mixed. I've only rarely needed to re-enable the browser, and occasionally I ask other people to do something for me (e.g. at restaurants that only have a web-based menu or ordering system).

My Android phone has Chrome installed as a system app so can only be disabled in the settings and not uninstalled. 

Comment by Tristan Cook on Tristan Cook's Shortform · 2022-07-17T12:41:35.229Z · LW · GW

Using an adblocker to block distracting or unnecessary elements of web pages

In the uBlock Origin extension (Chrome | Firefox) one can right-click, choose "Block element", and pick an element of a webpage to hide. I find this useful for removing distractions or ugly elements (but I don't think it speeds up page loading at all).

Some examples

- the Facebook news feed (for which dedicated addons also exist) as well as the footers and left and right sidebars
- the YouTube comments, suggested video sidebar, search bar, footer
- the footer on Amazon

Comment by Tristan Cook on Tristan Cook's Shortform · 2022-07-17T12:39:45.671Z · LW · GW

Watching videos at >1x speed

I've listened to audiobooks and podcasts at >1x speed for a while and began applying this to any video (TV or film) I watch too.

For the past few months I've been watching film and TV at 1.5x to 2.5x speed quite comfortably. I made the mistake of starting a rewatch of Breaking Bad, but powered through at 3x speed without much loss of moment-to-moment enjoyment. At faster speeds I find it very hard to follow without using subtitles.

I recommend Video Speed Controller (free & open source extension for Chrome & Firefox) for any online videos and most local video players (e.g. VLC) have speed controls built in.

Comment by Tristan Cook on Tristan Cook's Shortform · 2022-07-17T12:38:31.072Z · LW · GW

A thread for miscellaneous things I find useful

Comment by Tristan Cook on The table of different sampling assumptions in anthropics · 2022-06-30T13:05:25.978Z · LW · GW

Thanks for putting this together! Lots of ideas I hadn't seen before.

As for the meta-level problem, I agree with MSRayne that one should do the thing that maximises EU, which leads me to the ADT/UDT approach. This assumes we can have some non-anthropic prior, which seems reasonable to me.

Comment by Tristan Cook on How do I use caffeine optimally? · 2022-06-23T12:38:56.316Z · LW · GW

Anecdata: I aim to never take caffeine on two consecutive days, and when I do, it's normally <50mg. This has worked well for me.

Comment by Tristan Cook on Replicating and extending the grabby aliens model · 2022-05-03T12:59:39.258Z · LW · GW

Wouldn't the respective type of utilitarian already have the corresponding expectations on future GCs? If not, then they aren't the type of utilitarian that they thought they were.

I'm not sure what you're saying here. Are you saying that in general, a [total][average] utilitarian wagers for [large][small] populations?

So there's a lower bound on the chance of meeting a GC 44e25 meters away.

Yep! (only if we become grabby though)
 

Lastly, the most interesting aspect is the symmetry between abiogenesis time and the remaining habitability time (only 500 million years left, not a billion like you mentioned). 

What's your reference for the 500 million year lifespan remaining? I followed Hanson et al. in using the end of the oxygenated atmosphere as the end of the lifespan.

Just because you can extend the habitability window doesn't mean you should when doing anthropic calculations due to reference class restrictions.

Yep, I agree. I don't do the SSA update with reference class of observers-on-planets-of-total-habitability-X-Gy but agree that if I did, this 500 My difference would make a difference.

Comment by Tristan Cook on Replicating and extending the grabby aliens model · 2022-05-03T12:47:43.968Z · LW · GW

The habitability of planets around longer lived stars is a crux for those using SSA, but not SIA or decision theoretic approaches with total utilitarianism.

I show in this section  that if one is certain that there are planets habitable for at least  , then SSA with the reference class of observers in pre-grabby intelligent civilizations gives ~30% on us being alone in the observable universe. For  this gives ~10% on being alone.

Comment by Tristan Cook on Replicating and extending the grabby aliens model · 2022-04-25T20:58:53.887Z · LW · GW

Great report. I found the high decision-worthiness vignette especially interesting.

Thanks! Glad to hear it

Maybe this is discussed in the anthropic decision theory sequence and I should just catch up on that?

Yep, this is kinda what anthropic decision theory  (ADT) is designed to be :-D ADT + total utilitarianism often gives similar answers to SIA.
 

I wonder how uncertainty about the cosmological future would affect grabby aliens conclusions. In particular, I think not very long ago it was thought plausible that the affectable universe is unbounded, in which case there could be worlds where aliens were almost arbitrarily rare that still had high decision-worthiness. (Faster than light travel seems like it would have similar implications.)

Yeah, this is a great point. Toby Ord mentions here the potential for dark energy to be harnessed, which would lead to a similar conclusion. Things like this may be Pascal's muggings (i.e., we wager our decisions on being in a world where our decisions matter infinitely). Since our decisions might already matter 'infinitely' (evidential-like decision theory plus an infinite world) I'm not sure how this pans out.
 

SIA doomsday is a very different thing than the regular doomsday argument, despite the name, right? The former is about being unlikely to colonize the universe, the latter is about being unlikely to have a high number of observers?

Exactly. SSA (with a sufficiently large reference class) always predicts Doom as a consequence of its structure, but SIA doomsday is contingent on the case we happen to be in (colonisers, as you mention).

Comment by Tristan Cook on Replicating and extending the grabby aliens model · 2022-04-25T17:04:54.344Z · LW · GW

Could your model also include a possibility of the SETI-attack: grabby aliens sending malicious radio signals with AI description ahead of their arrival?


I briefly discuss this in Chapter 4. My tentative conclusion is that we have little to worry about in the next hundred or thousand years, especially (which I do not mention there) if we expect malicious grabby aliens to try particularly hard to have their signals discovered.

Comment by Tristan Cook on Replicating and extending the grabby aliens model · 2022-04-25T17:02:32.094Z · LW · GW

I agree it seems plausible SIA favours panspermia, though my rough guess is that doesn't change the model too much.

Conditioning on panspermia happening (and so the majority of GCs arriving through panspermia) then the number of hard steps  in the model can just be seen as the number of post-panspermia steps.

I then think this doesn't change the distribution of ICs or GCs spatially if (1) the post-panspermia steps are sufficiently hard and (2) a GC can quickly expand to contain the volume over which its panspermia of origin occurred. The hardness assumption implies that GC origin times will be sufficiently spread out for a single GC to prevent any planets with only partial step completions of life from becoming GCs.
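
For reference, the hard-steps power law this reasoning leans on (stated roughly, and as used in the grabby aliens model): if each of $n$ steps is hard relative to the time available, then

$$\Pr(\text{all } n \text{ steps completed by time } t) \;\propto\; t^{\,n},$$

so conditioning on panspermia effectively replaces $n$ with the number of post-panspermia steps.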

Comment by Tristan Cook on Is Grabby Aliens built on good anthropic reasoning? · 2022-03-17T19:08:36.360Z · LW · GW

Ah, I don't think I was very clear either.

I interpreted this comment as you saying “We could restrict our SSA reference class to only include observers for whom computers were invented 80 years ago”. (Is that right?)

What I wanted to say was: keep the reference class the same, but restrict the types of observers we are saying we are contained in (the numerator in the SSA ratio) to be only those who (amongst other things) observe the invention of the computer 80 years ago.

And then I was trying to respond to that by saying “Well if we can do that, why can’t we equally well restrict our SSA reference class to only include observers for whom the universe is 13.8 billion years old? And then “humanity is early” stops being true.”

Yep, one can do this. We might still be atypical if we think longer-lived planets are habitable (since life has more time to appear there), but we could also restrict the reference class further. Eventually we end up at minimal-reference-class SSA.
 

Comment by Tristan Cook on Is Grabby Aliens built on good anthropic reasoning? · 2022-03-17T17:43:33.898Z · LW · GW

Doesn't sound snarky at all :-)

Hanson et al. are conditioning on the observation that the universe is 13.8 billion years old.  On page 18 they write

Note that by assuming a uniform distribution over our origin rank r (i.e., that we are equally likely to be any percentile rank in the GC origin time distribution), we can convert distributions over model times τ (e.g., an F(τ ) over GC model origin times) into distributions over clock times t. This in effect uses our current date of 13.8Gyr to estimate a distribution over the model timescale constant k. If instead of the distribution F(τ ) we use the distribution F0(τ ), which considers only those GCs who do not see any aliens at their origin date, we can also apply the information that we humans do not now see aliens.

Formally (and I think spelling it out helps), with SSA and the above reference class, our likelihood is the ratio of [number of observers in pre-grabby civilizations that observe Y] to [number of observers in pre-grabby civilizations], where Y is our observation that the universe is 13.8 billion years old, we are on a planet that has been habitable for ~4.5 Gy and has a total habitability of ~5.5 Gy, we don't observe any grabby civilizations, etc.
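
In symbols (my notation, just restating the ratio above, reading # as "number of"):

$$P_{\text{SSA}}(Y \mid \text{model}) \;=\; \frac{\#\{\text{observers in pre-grabby civilizations that observe } Y\}}{\#\{\text{observers in pre-grabby civilizations}\}}$$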

Comment by Tristan Cook on Is Grabby Aliens built on good anthropic reasoning? · 2022-03-17T17:22:35.834Z · LW · GW

Yep, you're exactly right. 

We could further condition on something like "observing that computers were invented ~X years ago" (or something similar that distinguishes observers like us) such that the (eventual) population of civilizations doesn't matter. This conditioning means we don't have to consider that longer-lived planets will have greater populations.

Comment by Tristan Cook on Is Grabby Aliens built on good anthropic reasoning? · 2022-03-17T16:57:51.872Z · LW · GW

I've been studying & replicating the argument in the paper [& will hopefully be sharing results in the next few weeks].

The argument implicitly uses the self-sampling assumption (SSA) with reference class of observers in civilizations that are not yet grabby (and may or may not become grabby).

Their argument is similar in structure to the Doomsday argument:

If there are no grabby aliens (and longer lived planets are habitable) then there will be many civilizations that appear far in the future, making us highly atypical (in particular, 'early' in the distribution of arrival times). 

If there are sufficiently many grabby aliens (but not too many) they set a deadline (after the current time) by when all civilizations must appear if they appear at all.  This makes civilizations/observers like us/ours that appear at ~13.8Gy more typical in the reference class of all civilizations/observers that are not yet grabby.

Throughout we're assuming the number of observers per pre-grabby civilization is roughly constant. This lets us be loose with the civilization-observer distinction.

 

I don't think the reference class is a great choice. A more natural choice would be the maximal reference class (which includes observers in grabby alien civilization) or the minimal reference class (containing only observers subjectively indistinguishable from you).

Comment by Tristan Cook on Breaking the SIA with an exponentially-Sleeping Beauty · 2022-02-21T12:50:52.358Z · LW · GW

It looks like you've rediscovered SIA fears (expected) infinity

Comment by Tristan Cook on 30-ish focusing tips · 2021-10-23T16:41:55.932Z · LW · GW

Something about being watched makes us more responsible. If you can find people that aren't going to distract you, working alongside them keeps you accountable. If it's over zoom you can mute them

I like Focusmate for this. You book a 25-minute or 50-minute pomodoro session with another member of the site and video call for the duration. I've found sharing my screen also helps.
 

Comment by Tristan Cook on Hammertime Day 5: Comfort Zone Expansion · 2021-10-02T19:23:07.992Z · LW · GW

I've finally commented on LessWrong (after lurking for the last few years), which had been on the edge of my comfort zone. Thanks for the exercise!

Comment by Tristan Cook on Robin Hanson's Grabby Aliens model explained - part 1 · 2021-10-02T19:19:47.422Z · LW · GW

Thanks for this great explainer! For the past few months I've been working on the Bayesian update from Hanson's argument and hoping to share it in the next month or two.

Comment by Tristan Cook on The Best Software For Every Need · 2021-10-02T19:12:46.916Z · LW · GW

I use Loop Habit Tracker [Android app] for a similar purpose. It's free and open source and allows notifications to be set and then habits ticked off. The notifications can be made sticky too.