Posts

Intermittent Distillations #2 2021-04-14T06:47:16.356Z
Transparency Trichotomy 2021-03-28T20:26:34.817Z
Intermittent Distillations #1 2021-03-17T05:15:27.117Z
Strong Evidence is Common 2021-03-13T22:04:40.538Z
Open Problems with Myopia 2021-03-10T18:38:09.459Z
Towards a Mechanistic Understanding of Goal-Directedness 2021-03-09T20:17:25.948Z
Coincidences are Improbable 2021-02-24T09:14:11.918Z
Chain Breaking 2020-12-29T01:06:04.122Z
Defusing AGI Danger 2020-12-24T22:58:18.802Z
TAPs for Tutoring 2020-12-24T20:46:50.034Z
The First Sample Gives the Most Information 2020-12-24T20:39:04.936Z
Does SGD Produce Deceptive Alignment? 2020-11-06T23:48:09.667Z
What posts do you want written? 2020-10-19T03:00:26.341Z
The Solomonoff Prior is Malign 2020-10-14T01:33:58.440Z
What are objects that have made your life better? 2020-05-21T20:59:27.653Z
What are your greatest one-shot life improvements? 2020-05-16T16:53:40.608Z
Training Regime Day 25: Recursive Self-Improvement 2020-04-29T18:22:03.677Z
Training Regime Day 24: Resolve Cycles 2 2020-04-28T19:00:09.060Z
Training Regime Day 23: TAPs 2 2020-04-27T17:37:15.439Z
Training Regime Day 22: Murphyjitsu 2 2020-04-26T20:18:50.505Z
Training Regime Day 21: Executing Intentions 2020-04-25T22:16:04.761Z
Training Regime Day 20: OODA Loop 2020-04-24T18:11:30.506Z
Training Regime Day 19: Hamming Questions for Potted Plants 2020-04-23T16:00:10.354Z
Training Regime Day 18: Negative Visualization 2020-04-22T16:06:46.138Z
Training Regime Day 17: Deflinching and Lines of Retreat 2020-04-21T17:45:34.766Z
Training Regime Day 16: Hamming Questions 2020-04-20T14:51:31.310Z
Mark Xu's Shortform 2020-03-10T08:11:23.586Z
Training Regime Day 16: Hamming Questions 2020-03-01T18:46:32.335Z
Training Regime Day 15: CoZE 2020-02-29T17:13:42.685Z
Training Regime Day 14: Traffic Jams 2020-02-28T17:52:28.354Z
Training Regime Day 13: Resolve Cycles 2020-02-27T17:45:07.845Z
Training Regime Day 12: Focusing 2020-02-26T19:07:15.407Z
Training Regime Day 11: Socratic Ducking 2020-02-25T17:19:57.320Z
Training Regime Day 10: Systemization 2020-02-24T17:20:15.385Z
Training Regime Day 9: Double-Crux 2020-02-23T18:08:31.108Z
Training Regime Day 8: Noticing 2020-02-22T19:47:03.898Z
Training Regime Day 7: Goal Factoring 2020-02-21T17:55:29.848Z
Training Regime Day 6: Seeking Sense 2020-02-20T17:33:29.011Z
Training Regime Day 5: TAPs 2020-02-19T18:11:05.649Z
Training Regime Day 4: Murphyjitsu 2020-02-18T17:33:12.523Z
Training Regime Day 3: Tips and Tricks 2020-02-17T18:53:24.808Z
Training Regime Day 2: Searching for bugs 2020-02-16T17:16:32.606Z
Training Regime Day 1: What is applied rationality? 2020-02-15T21:03:32.685Z
Training Regime Day 0: Introduction 2020-02-14T08:22:19.851Z

Comments

Comment by Mark Xu (mark-xu) on Opinions on Interpretable Machine Learning and 70 Summaries of Recent Papers · 2021-04-14T22:52:58.870Z · LW · GW

I'm curious what "put it in my SuperMemo" means. Quick googling only yielded SuperMemo as a language learning tool.

Comment by Mark Xu (mark-xu) on Transparency Trichotomy · 2021-03-28T22:12:11.478Z · LW · GW

I agree it's sort of the same problem under the hood, but I think knowing how you're going to go from "understanding understanding" to producing an understandable model controls what type of understanding you're looking for.

I also agree that this post makes ~0 progress on solving the "hard problem" of transparency, I just think it provides a potentially useful framing and creates a reference for me/others to link to in the future.

Comment by Mark Xu (mark-xu) on Strong Evidence is Common · 2021-03-15T03:13:22.453Z · LW · GW

Yeah, I agree 95% is a bit high.

Comment by Mark Xu (mark-xu) on Open Problems with Myopia · 2021-03-11T23:38:25.355Z · LW · GW

One way of looking at DDT is "keeping it dumb in various ways." I think another way of thinking about it is just designing a different sort of agent, which is "dumb" according to us but not really dumb in an intrinsic sense. You can imagine this DDT agent looking at agents that do engage in acausal trade and thinking they're just sacrificing utility for no reason.

There is some slight awkwardness in that the decision problems agents in this universe actually encounter mean that UDT agents will get higher utility than DDT agents.

I agree that the maximum a posteriori world doesn't help that much, but I think there is some sense in which "having uncertainty" might be undesirable.

Comment by Mark Xu (mark-xu) on Open Problems with Myopia · 2021-03-11T20:08:15.505Z · LW · GW

has been changed to imitation, as suggested by Evan.

Comment by Mark Xu (mark-xu) on Open Problems with Myopia · 2021-03-10T19:55:39.105Z · LW · GW

Yeah, you're right that it's obviously unsafe. The words "in theory" were meant to gesture at that, but it could be much better worded. Changed to "A prototypical example is a time-limited myopic approval-maximizing agent. In theory, such an agent has some desirable safety properties because a human would only approve safe actions (although we still would consider it unsafe)."

Comment by Mark Xu (mark-xu) on Open Problems with Myopia · 2021-03-10T19:52:10.922Z · LW · GW

Yep - I switched the setup at some point and forgot to switch this sentence. Thanks.

Comment by Mark Xu (mark-xu) on Coincidences are Improbable · 2021-02-25T00:48:50.346Z · LW · GW

This is brilliant.

Comment by Mark Xu (mark-xu) on Coincidences are Improbable · 2021-02-24T19:43:19.545Z · LW · GW

I am using the word "causal" to mean d-connected, which means not d-separated. I prefer the term "directly causal" to mean A->B or B->A.

In the case of non-effects, the improbable events are "taking Benadryl" and "not reacting after consuming an allergen".

Comment by Mark Xu (mark-xu) on DanielFilan's Shortform Feed · 2021-02-14T21:51:42.945Z · LW · GW

I agree market returns are equal in expectation, but you're exposing yourself to more risk for the same expected returns in the "I pick stocks" world, so risk-adjusted returns will be lower.
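
A quick simulation of this point (my own illustration; the return distribution is made up): a concentrated portfolio drawn from the same per-stock distribution has the same mean return but higher variance, hence a lower Sharpe-style ratio.

    import numpy as np

    # Hypothetical world: 100 stocks with i.i.d. returns, so the index
    # and a 3-stock portfolio have identical expected returns.
    rng = np.random.default_rng(0)
    n_periods, n_stocks = 10_000, 100
    returns = rng.normal(0.05, 0.2, size=(n_periods, n_stocks))

    index = returns.mean(axis=1)         # hold all 100 stocks
    picks = returns[:, :3].mean(axis=1)  # "I pick stocks": hold 3

    for name, r in [("index", index), ("3 picks", picks)]:
        print(name, r.mean() / r.std())  # risk-adjusted return; higher is better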

Comment by Mark Xu (mark-xu) on Ways to be more agenty? · 2021-01-05T15:01:04.782Z · LW · GW

I sometimes roleplay as someone roleplaying as myself, then take the action that I would obviously want to take, e.g. "wow, sleeping regularly gives my character +1 INT!" and "using Anki every day makes me level up 1% faster!"

Comment by Mark Xu (mark-xu) on Collider bias as a cognitive blindspot? · 2020-12-31T16:10:03.317Z · LW · GW

If X->Z<-Y, then X and Y are independent unless you're conditioning on Z. A relevant TAP might thus be:

  • Trigger: I notice that X and Y seem statistically dependent
  • Action: Ask yourself "what am I conditioning on?". Follow up with "Are any of these factors causally downstream of both X and Y?" Alternatively, you could list salient things causally downstream of either X or Y and check the others.

This TAP is unfortunately abstract because "things I'm currently conditioning on" isn't an easy thing to list, but it might help.
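
Here's a minimal simulation of a collider (variables and numbers made up for illustration): X and Y are independent, Z depends on both, and selecting on Z induces a spurious correlation.

    import numpy as np

    # X -> Z <- Y: X and Y are independent, Z is a collider.
    rng = np.random.default_rng(0)
    n = 100_000
    x = rng.normal(size=n)
    y = rng.normal(size=n)
    z = x + y + rng.normal(scale=0.1, size=n)

    print(np.corrcoef(x, y)[0, 1])  # ~0: independent unconditionally

    mask = z > 1.0  # condition on the collider
    print(np.corrcoef(x[mask], y[mask])[0, 1])  # clearly negative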

Comment by Mark Xu (mark-xu) on Chain Breaking · 2020-12-29T20:21:21.553Z · LW · GW

yep, thanks

Comment by Mark Xu (mark-xu) on Great minds might not think alike · 2020-12-29T16:49:54.845Z · LW · GW

Here are some possibilities:

  • great minds might not think alike
  • untranslated thinking sounds untrustworthy
  • disagreement as a lack of translation

Comment by Mark Xu (mark-xu) on Open & Welcome Thread - December 2020 · 2020-12-27T17:36:52.249Z · LW · GW

Transferring money is usually done via ACH bank transfer, which is usually accessed in the "deposit" tab of the "transfers" tab of your investment account.

I'm not sure how to be confident in the investment in general. One simple way is to double-check that the ticker symbol, e.g. MSFT for Microsoft, actually corresponds to the company you want. For instance, ZOOM is Zoom Technologies, not the videoconferencing company Zoom Video Communications, whose ticker is ZM.

Talking to a financial advisor might be helpful. I have been told r/personalfinance is a reasonable source for advice, although I've never checked it out thoroughly.

Comment by Mark Xu (mark-xu) on Defusing AGI Danger · 2020-12-26T18:53:43.160Z · LW · GW

My opposite intuition is suggested by the fact that if you're trying to correctly guess a series of random digits that are "1" 80% of the time and "0" 20% of the time, you should always guess "1": doing so is right 80% of the time, whereas probability matching (guessing "1" 80% of the time) is right only 0.8·0.8 + 0.2·0.2 = 68% of the time.

I don't quite know how to model cross-pollination and diminishing returns. I think working on both for the information value is likely to be very good. It seems hard to imagine a scenario where you're robustly confident that one project is 80% better after taking diminishing returns into account without being able to create a 3rd project with the best features of both, but if you're in that scenario I think just spending all your effort on the 80% project seems correct.

One example is deciding between 2 fundamentally different products your startup could be making. Suppose also that creating an MVP of either product (the thing that would provide information) would take a really long time. In this situation, if you suspect one of them is 60% likely to be better than the other, it would be less useful to split your time 60/40 than to build the MVP of the one likely to be better and reevaluate after getting more information.

The version of your claim that I agree with is "In your current epistemic state, you should spend all your time pursuing the 80% project; but since the 80% probably isn't that robust, working on a project has diminishing returns, and other projects will give more information value, globally the amount of time you expect to spend on the 80% project is about 80%."

Comment by Mark Xu (mark-xu) on Defusing AGI Danger · 2020-12-26T18:06:20.403Z · LW · GW

I absolutely agree that I'm not arguing for "safety by default".

I don't quite agree that you should split effort between strategies, i.e. it seems likely that if you think 80% disaster by default, you should dedicate 100% of your efforts to that world.

Comment by Mark Xu (mark-xu) on What trade should we make if we're all getting the new COVID strain? · 2020-12-25T22:54:51.186Z · LW · GW

Is what you mean by "naive instrument" SPY put options?

Comment by Mark Xu (mark-xu) on Operationalizing compatibility with strategy-stealing · 2020-12-25T17:14:16.884Z · LW · GW

Using the perspective from The ground of optimization means you can get rid of the action space and just say "given some prior and some utility function, what percentile of the distribution does this system tend to evolve towards?" (where the optimization power is again the log of this percentile)
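
As a rough sketch of that measure (my reading of "log of this percentile", with made-up numbers): sample the utilities of "default" evolutions from the prior, then ask how rare it is to do at least as well as the system's actual outcome.

    import numpy as np

    # Bits of optimization: -log2 of the fraction of prior outcomes
    # that are at least as good as the achieved outcome.
    def optimization_power(outcome_utility, prior_utilities):
        frac = np.mean(prior_utilities >= outcome_utility)
        return -np.log2(frac)

    rng = np.random.default_rng(0)
    prior_utilities = rng.normal(size=1_000_000)
    print(optimization_power(3.0, prior_utilities))  # ~9.5 bits (top ~0.1%)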

We might then say that an optimizing system is compatible with strategy stealing if it's retargetable for a wide set of utility functions in a way that produces an optimizing system that has the same amount of optimization power.

An AI that is compatible with strategy stealing is one way of producing an optimizing system that is compatible with strategy stealing, with a particularly easy form of retargeting. The difficulty of retargeting provides another useful dimension along which optimizing systems vary: instead of the optimization power the AI can direct towards different goals, you have a space of the "size" of allowed retargeting and the optimization power applied toward the goal, for all goals and retargeting sizes.

Comment by Mark Xu (mark-xu) on Defusing AGI Danger · 2020-12-25T04:06:18.905Z · LW · GW

Thanks! Also, oops - fixed.

Comment by Mark Xu (mark-xu) on The First Sample Gives the Most Information · 2020-12-24T23:41:26.058Z · LW · GW

The gameplay in Decrypto, Chameleon, and Spyfall is similar to the game you just suggested.

Comment by Mark Xu (mark-xu) on Is there an easy way to turn a LW sequence into an epub? · 2020-12-24T21:35:28.892Z · LW · GW

A button that changes a sequence into one really long post might also be sufficient when paired with other tools, e.g. instapaper.

Comment by Mark Xu (mark-xu) on 100 Tips for a Better Life · 2020-12-24T01:15:25.358Z · LW · GW

For Linux, the Put Windows plugin works reasonably well for me: https://extensions.gnome.org/extension/39/put-windows/

There's also writing wmctrl scripts: https://linux.die.net/man/1/wmctrl

Comment by Mark Xu (mark-xu) on Training Regime Day 8: Noticing · 2020-12-23T21:50:03.246Z · LW · GW

thanks!

Comment by Mark Xu (mark-xu) on Steelmanning Divination · 2020-12-23T17:46:49.236Z · LW · GW

It's been a while, but:

  • When my brain says "I'm tired, stop working" it mostly means "I'm tired of working on this thing, stop working on this thing". Switching tasks often allows me to maintain productivity.
  • Trying to think while on a walk doesn't work for me at all. Thinking without something to write with is impossible.
  • Social media should probably be avoided.
  • Writing things as a way to clarify thinking works much better than I expected.

Comment by Mark Xu (mark-xu) on Steelmanning Divination · 2020-12-18T16:00:58.380Z · LW · GW

This post made me try adding more randomness to my life for a week or so. I learned a small amount. I remain excited about automated tools that help do things like this, e.g. recent work from Ought.

Comment by Mark Xu (mark-xu) on Being the (Pareto) Best in the World · 2020-12-18T15:59:31.997Z · LW · GW

I tend to try to do things that I think are in my comparative advantage. This post hammered home the point that comparative advantage exists along multiple dimensions. For example, as a pseudo-student, I have almost no accumulated career capital, so I risk less by doing projects that might not pan out (under the assumption that career capital gets less useful over time). This fact can be combined with other properties I have to more precisely determine my comparative advantage.

This post also gives the useful intuition that being good at multiple things exponentially cuts down the number of people you're competing with. I use this heuristic a reasonable amount when trying to decide the best projects to be working on.

Comment by Mark Xu (mark-xu) on Understanding “Deep Double Descent” · 2020-12-18T15:39:36.570Z · LW · GW

This post gave me a slightly better understanding of the dynamics happening inside SGD. I think deep double descent is strong evidence that something like a simplicity prior exists in SGD, which might have actively bad generalization properties, e.g. by incentivizing deceptive alignment. I remain cautiously optimistic that approaches like Learning the Prior can circumvent this problem.

Comment by Mark Xu (mark-xu) on Cultural accumulation · 2020-12-07T22:28:31.073Z · LW · GW

Why do you believe the Manhattan project cost about $3.3T? Quick googling yields ~$20B in 2007 dollars.

Edit: further googling shows WWII cost about ~$4T, so maybe you confused the two numbers? I'm pretty surprised that the Manhattan project is only ~1% of the cost of WWII, so maybe something is going on.

Comment by Mark Xu (mark-xu) on A space of proposals for building safe advanced AI · 2020-11-21T17:17:51.676Z · LW · GW

I claim that if we call the combination of the judge plus one debater Amp(M), then we can think of the debate as M* being trained to beat Amp(M) by Amp(M)'s own standards.

This seems like a reasonable way to think of debate.

I think, in practice (if this even means anything), the power of debate is quite bounded by the power of the human, so some other technique is needed to make the human capable of supervising complex debates, e.g. imitative amplification.

Comment by Mark Xu (mark-xu) on A space of proposals for building safe advanced AI · 2020-11-20T16:15:11.654Z · LW · GW

Debate: train M* to win debates against Amp(M).

I think Debate is closer to "train M* to win debates against itself as judged by Amp(M)".

Comment by Mark Xu (mark-xu) on Mark Xu's Shortform · 2020-11-08T22:34:24.948Z · LW · GW

If you have DAI right now, minting on https://foundry.finance/ and swapping yTrump for nTrump on catnip.exchange is an almost guaranteed 15% profit.

Comment by Mark Xu (mark-xu) on Does SGD Produce Deceptive Alignment? · 2020-11-07T16:40:33.647Z · LW · GW

Yep. Meant to say "if a model knew that it was in its last training episode and it wasn't going to be deployed." Should be fixed.

Comment by Mark Xu (mark-xu) on Hammers and Nails · 2020-11-03T01:41:27.178Z · LW · GW

I think murphyjitsu is my favorite technique.

  1. sometimes failing lets you approach a problem from a different angle
  2. humor often results from failure, so anticipating how you'll fail and nudging to make it more probable might create more humor
  3. murphyjitsu is normally used in making plans, but you can murphyjitsu your opponent's plans to identify the easiest ways to break them
  4. micro-murphyjitsu is the art of constantly simulating reality ~5 seconds before it happens, sort of like overclocking your OODA loop
  5. murphyjitsu is a fast way to tell if your plan is good or not - you don't always have to make it better
  6. you can get intuitive probabilities for various things by checking how surprised you are at those things
  7. if you imagine your plan succeeding instead of failing, then it might cause you to realize some low-probability high-impact actions to take
  8. you can murphyjitsu plans that you might make to get a sense of the tractability of various goals
  9. murphyjitsu might help correct for overconfidence if you imagine ways you could be wrong every time you make a prediction
  10. you can murphyjitsu things that aren't plans, e.g. you can suppose the existence of arguments that would change your mind

Comment by Mark Xu (mark-xu) on MikkW's Shortform · 2020-10-29T22:26:17.301Z · LW · GW

https://arxiv.org/abs/cs/0406061 is a result showing that Aumann agreement is computationally efficient under some assumptions, which might be of interest.

Comment by Mark Xu (mark-xu) on What are good election betting opportunities? · 2020-10-29T19:04:33.240Z · LW · GW

https://docs.google.com/document/d/1coju1JGwKlnejxkNiqWNJRlknebT_HUA1iz3U_WvZDg/edit is a doc I wrote explaining how to do this in a way that is slightly less risky than betting on catnip directly.

Comment by Mark Xu (mark-xu) on Introduction to Cartesian Frames · 2020-10-25T17:41:40.340Z · LW · GW

Good point - I think the correct definition is something like "rows (or sets of rows) for which there exists a row which is disjoint"

Comment by Mark Xu (mark-xu) on Mark Xu's Shortform · 2020-10-22T21:53:37.386Z · LW · GW

This made me chuckle. More humor:

  • Rationalists taxonomizing rationalists
  • Mesa-rationalists (the mesa-optimizers inside rationalists)
  • carrier pigeon rationalists
  • proto-rationalists
  • not-yet-born rationalists
  • literal rats
  • frequentists
  • group-house rationalists
  • EA forum rationalists
  • academic rationalists
  • meme rationalists

:)

Comment by Mark Xu (mark-xu) on Introduction to Cartesian Frames · 2020-10-22T16:43:31.163Z · LW · GW

This is very exciting. Looking forward to the rest of the sequence.

As I was reading, I found myself reframing a lot of things in terms of the rows and columns of the matrix. Here's my loose attempt to rederive most of the properties under this view.

  • The world is a set of states. One way to think about these states is by putting them in a matrix, which we call a "Cartesian frame." In this frame, the rows of the matrix are possible "agents" and the columns are possible "environments".
    • Note that you don't have to put all the states in the matrix.
  • Ensurables are the parts of the world that the agent can always ensure we end up in. Ensurables are the rows of the matrix, closed under supersets.
  • Preventables are the parts of the world that the agent can always ensure we don't end up in. Preventables are the complements of the rows, closed under subsets.
  • Controllables are parts of the world that are both ensurable and preventable. Controllables are rows (or sets of rows) for which there exists a disjoint row. [edit: previous definition of "contains elements not found in other rows" was wrong, see comment by crabman]
  • Observables are parts of the environment that the agent can observe and act conditionally on. Observables are columns such that for every pair of rows there is a third row that equals the 1st row if the environment is in that column and the 2nd row otherwise. This means that for every two rows, there's a third row made by taking the 2nd row and swapping in the 1st row's elements where it intersects the column. (See the toy check after this list.)
    • Observables have to be sets of columns because if they weren't, you can find a column that is partially observable and partially not. This means you can build an action that says something like "if I am observable, then I am not observable. If I am not observable, I am observable" because the swapping doesn't work properly.
    • Observables are closed under boolean combination (note it's sufficient to show closure under complement and unions):
      • Since swapping index 1 of a row is the same as swapping all non-1 indexes, observables are closed under complements.
      • Since you can swap indexes 1 and 2 by first swapping index 1, then swapping index 2, observables are closed under union.
        • This is equivalent to saying "If A or B, then a0, else a2" is logically equivalent to "if A, then a0, else (if B, then a0, else a2)"
  • Since controllables are rows with specific properties and observables are columns with specific properties, nothing can be both controllable and observable. (The only possibility is the entire matrix, which is trivially not controllable because it's not preventable.)
    • This assumes that the matrix has at least one column
  • The image of a Cartesian frame is the actual matrix part.
  • Since an ensurable is a row (or superset) and an observable is a column (or set of columns), if something is both ensurable and observable it must contain every column, so it must be the whole matrix (image).
  • If the matrix has 1 or 0 rows, then the observable constraint is trivially satisfied, so the observables are all possible sets of (possible) environment states (since 0/1 length columns are the same as states).
    • "0 rows" doesn't quite make sense, but just pretend that you can have a 0-row matrix which is just a set of world states.
  • If the matrix has 0 columns, then the ensurable/preventable constraint is trivially satisfied, so the ensurables are the same as the preventables are the same as the controllables, which are all possible sets of (possible) environment states (since "length 0" rows are the same as states).
    • "0 columns" doesn't make that much sense either, but pretend that you can have a 0-column matrix which is just a set of world states.
  • If the matrix has exactly 1 column, then the ensurable/preventable constraint is trivially satisfied for states in the image (matrix), so the ensurables are all non-empty sets of states in the matrix (since length 1 columns are the same as states), closed under union with states outside the matrix. It should be easy to see that controllables are all possible sets of states that intersect the matrix non-trivially, closed under union with states outside the matrix.
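
As a toy check of the observability condition above (my own formalization, not from the post), treat the frame as a matrix of distinct world-states and test the row-swapping property directly:

    from itertools import product

    # A set of columns `cols` is observable iff for every pair of rows
    # (a0, a1) there is a row a2 matching a0 on cols and a1 elsewhere.
    def is_observable(world, cols):
        rows, es = range(len(world)), range(len(world[0]))
        for a0, a1 in product(rows, repeat=2):
            want = [world[a0][e] if e in cols else world[a1][e] for e in es]
            if not any(world[a2] == want for a2 in rows):
                return False
        return True

    # Example: two base rows plus the two rows that mix them across columns.
    world = [
        ["s00", "s01"],
        ["s10", "s11"],
        ["s00", "s11"],
        ["s10", "s01"],
    ]
    print(is_observable(world, {0}))      # True: agent can condition on column 0
    print(is_observable(world[:2], {0}))  # False without the mixed rows
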
Comment by Mark Xu (mark-xu) on Introduction to Cartesian Frames · 2020-10-22T14:54:56.597Z · LW · GW

In 4.1:

Given a0 and a1, since S∈Obs(C), there exists an a2∈A such that for all e∈E, we have a2∈if(S,a0,a1). Then, since T∈Obs(C), there exists an a3∈A such that for all e∈E, we have a3∈if(S,a0,a2). Unpacking and combining these, we get for all e∈E, a3∈if(S∪T,a0,a1). Since we could construct such an a3 from an arbitrary a0,a1∈A, we know that S∪T∈Obs(C). □

I think there's a typo here: the second conditional should be if(T,a0,a2), not if(S,a0,a2).

(also not sure how to copy latex properly).

Comment by Mark Xu (mark-xu) on Babble challenge: 50 ways of solving a problem in your life · 2020-10-22T13:44:29.281Z · LW · GW

problem: I don't do enough focused work in a day.

  1. set aside set times for focused work via calendar
  2. put "do focused work" on my todo list (actually already did this and it worked surprisingly well for a week - why doesn't it work as well anymore?)
  3. block various chatting apps
  4. block lesswrong?
  5. do pomodoros
  6. use some coworking space to encourage focus
  7. take more breaks
  8. eat healthier food (possibly no carbs) to have more energy
  9. get a better sleep schedule to have more energy
  10. meditate more for better meta-cognition and focus
  11. try to do deliberate practice on doing focused work
  12. install a number of TAPs related to suppressing desires for distraction, e.g. "impulse to stop working -> check pomodoro timer"
  13. I'm told Complice is useful
  14. daily reviews might be helpful?
  15. be more specific when doing weekly review
  16. make more commitments to other people about the amount of output I'm going to have, creating social pressure to actually produce that amount of output
  17. be more careful when scheduling calls with people so I have long series of uninterrupted hours
  18. take more naps when I notice I'm losing focus
  19. be more realistic about the amount of focused work I can do in a day (does "realize this isn't actually a problem" count as solving it? seems like yes)
  20. vary the length of pomodoros
  21. do resolve cycles for solutions to the problem, implementing some of them
  22. read various productivity books, like the procrastination equation, GTD, tiny habits, etc.
  23. exercise more for more energy (unfortunately, the mind is currently still embodied)
  24. make sure I'm focusing on the right things - better to spend half the time focusing on the most important thing than double the time on the 2nd most important thing
  25. spend more time working with people
  26. stop filling non-work time with activities that cause mental fatigue, like reading, podcasts, etc.
  27. stop doing miscellaneous things from my todolist during "breaks", e.g. don't do laundry between pomodoros, just lie on the floor and rest
  28. get into a better rhythm of work/break cycles, e.g. treat every hour as a contiguous block by default, scheduling calls on hour demarcations only
  29. use laptop instead of large monitor - large screens might make it easier to get distracted
  30. block the internet on my computer during certain periods of time so I can focus on writing
  31. take various drugs that give me more energy, e.g. caffeine, nicotine, and other substances
  32. stop drinking things like tea - the caffeine might give more energy, but make focusing harder
  33. wear noise-canceling headphones to block out distractions from noise
  34. listen to music designed to encourage focus, like cool rhythms or video game music
  35. work on things that are exciting - focus isn't a problem if they're intrinsically enjoyable
  36. Ben Kuhn has some good tips - check those out again
  37. RescueTime says most of my distracting time is on messenger and signal. I think quarantine is messing with my desire for social interaction. Figure out how to replace that somehow?
  38. communicate via email/google doc instead of instant messaging
  39. make sure to have snacks to keep up blood sugar
  40. alternate between standing desk and sitting desk to add novelty
  41. reduce cost for starting to do focused work by having a clear list of focused work that needs to be done, leaving computer in state ready to start immediately upon coming back to it
  42. nudge myself into doing focused work by doing tasks that require micro-focus first, like make metaculus predictions, then move on to more important focused work
  43. ask a LW question about how to do more focused work and read the answers
  44. work on more physical substrates, e.g. paper+pen, whiteboard
  45. use a non-linux operating system to get access to better tools for focusing, like cold turkey, freedom, etc.
  46. switch mouse to left hand which will cause more effort to be needed to mindlessly use computer, potentially decreasing mindlessness
  47. acquire more desktoys to serve as non-computer distractions that might preserve focus better
  48. practice focusing on non-work thing, e.g. by studying a random subject, playing a game I don't like, being more mindful in everyday life, etc.
  49. do more yoga to feel more present in body
  50. TAP common idle activity I do with "focus on work", e.g. crack knuckles, stretch arms, adjust seat.

Time taken: 20 minutes

More things I thought of after reading Rafael Harth's response:

  1. use something like beeminder to do more focused work
  2. do research directly into what causes some people to be better at focusing than others
  3. ask people that seem to be good at doing focused work for tips
  4. reread Deep Work and take it more seriously

Comment by Mark Xu (mark-xu) on The Solomonoff Prior is Malign · 2020-10-22T13:19:31.651Z · LW · GW

I personally see no fundamental difference between direct and indirect ways of influence, except insofar as they relate to stuff like expected value.

I agree that given the amount of expected influence, other universes are not high on my priority list, but they are still on my priority list. I expect the same for consequentialists in other universes. I also expect consequentialist beings that control most of their universe to get around to most of the things on their priority list, hence I expect them to influence the Solomonoff prior.

Comment by Mark Xu (mark-xu) on The Solomonoff Prior is Malign · 2020-10-22T13:14:49.316Z · LW · GW

Consequentialists can reason about situations in which other beings make important decisions using the Solomonoff prior. If multiple beings are simulating them, they can decide randomly which one to target (because having e.g. 1/100 of the resources is better than none, which is the expectation of "blind mischievousness").

An example of this sort of reasoning is Newcomb's problem with the knowledge that Omega is simulating you. You get to "control" the result of your simulation by controlling how you act, so you can influence whether or not Omega expects you to one-box or two-box, controlling whether there is $1,000,000 in one of the boxes.

Comment by Mark Xu (mark-xu) on Mark Xu's Shortform · 2020-10-21T18:45:07.688Z · LW · GW

My current taxonomy of rationalists is:

  • LW rationalists (HI!)
  • Facebook rationalists
  • Twitter rationalists
  • Blog rationalists
  • Internet-invisible rationalists

Are there other types of rationalists? Maybe like group-chat rationalists? or podcast rationalists? google doc rationalists?

Comment by Mark Xu (mark-xu) on What posts do you want written? · 2020-10-21T17:24:07.047Z · LW · GW

An intuitive explanation of the Kelly criterion, with a bunch of worked examples. Zvi's post is good but lacks worked examples and justification for heuristics. Jacobian advises us to Kelly bet on everything, but I don't understand what a "Kelly bet" is in all but the simplest financial scenarios.
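
For reference, here's the simplest case as I understand it (a sketch with made-up numbers; this is exactly the part I'd want a post to generalize): for a bet paying b:1 with win probability p, the Kelly fraction is f* = p - (1 - p)/b.

    # Kelly fraction for a bet paying b:1 that wins with probability p.
    def kelly_fraction(p: float, b: float) -> float:
        return p - (1 - p) / b

    # Even-money bet (b = 1) on a 60% coin: wager 20% of your bankroll.
    print(kelly_fraction(0.6, 1.0))  # 0.2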

Comment by Mark Xu (mark-xu) on What posts do you want written? · 2020-10-19T19:37:40.657Z · LW · GW

Maybe you mean that "body memory" is an intuitive subconscious process in the brain?

Yes, but I like thinking of it as "body memory" because it is easier to conceptualize.

Comment by Mark Xu (mark-xu) on What posts do you want written? · 2020-10-19T03:12:54.637Z · LW · GW

I want more people to write down their models for various things. For example, a model I have of the economy is that it's a bunch of boxes with inputs and outputs that form a sparse directed graph. The length of the shortest cycle controls things like economic growth and AI takeoff speeds.
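
A toy version of this model (my own illustration; the industries and edges are made up): boxes as nodes, "output of A is an input of B" as edges, and the shortest cycle as the tightest self-amplifying production loop.

    # Industries as nodes; edges point from producer to consumer.
    edges = {
        "mining": ["steel"],
        "steel": ["machines"],
        "machines": ["mining", "chips"],
        "chips": ["machines"],
    }

    # BFS from each node back to itself to find the shortest cycle.
    def shortest_cycle_length(graph):
        best = None
        for start in graph:
            frontier, depth, seen = [start], 0, set()
            while frontier:
                depth += 1
                nxt = []
                for node in frontier:
                    for out in graph.get(node, []):
                        if out == start:
                            best = depth if best is None else min(best, depth)
                        elif out not in seen:
                            seen.add(out)
                            nxt.append(out)
                frontier = nxt
        return best

    print(shortest_cycle_length(edges))  # 2: machines <-> chips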

Another example is that people have working memory in both their brains and their bodies. When their brain-working-memory is full, information gets stored in their bodies. Techniques like Focusing are often useful to extract information stored in body-working-memory.

Comment by Mark Xu (mark-xu) on What posts do you want written? · 2020-10-19T03:07:18.720Z · LW · GW

A minimal-assumption description of Updateless Decision Theory. This wiki page describes the basic concept, but doesn't include motivation, examples or intuition.

Comment by Mark Xu (mark-xu) on What posts do you want written? · 2020-10-19T03:04:23.408Z · LW · GW

A thorough description of how to do pair debugging, a CFAR exercise partially described here.

Comment by Mark Xu (mark-xu) on What posts do you want written? · 2020-10-19T03:03:08.955Z · LW · GW

A review of Thinking Fast and Slow that focuses on whether or not various parts of the book replicated.