Posts

Kalman Filter for Bayesians 2018-10-22T17:06:02.783Z · score: 59 (19 votes)
Systemizing and Hacking 2018-03-23T18:01:33.212Z · score: 104 (27 votes)
Inference & Empiricism 2018-03-20T15:47:00.316Z · score: 88 (26 votes)
Examples of Mitigating Assumption Risk 2017-11-30T02:09:23.852Z · score: 63 (24 votes)
Competitive Truth-Seeking 2017-11-01T12:06:59.314Z · score: 101 (44 votes)
How to Learn from Experts 2013-10-04T17:02:01.502Z · score: 39 (37 votes)
Systematic Lucky Breaks 2013-10-03T01:46:26.652Z · score: 36 (40 votes)

Comments

Comment by satvikberi on Inefficient Doesn’t Mean Indifferent · 2019-10-18T14:55:31.766Z · score: 3 (4 votes) · LW · GW

Nit: giving IQ tests is not super cheap, because it puts companies at a nebulous risk of being sued for disparate impact (see e.g. https://en.wikipedia.org/wiki/Griggs_v._Duke_Power_Co.).

I agree with all the major conclusions though.

Comment by satvikberi on Insights from Linear Algebra Done Right · 2019-07-14T22:47:31.261Z · score: 7 (3 votes) · LW · GW

For the orthogonal decomposition, don't you need two scalars? E.g. $u = av + bw$. For example, in $\mathbb{R}^2$, let $v = (1, 0)$ and $w = (0, 1)$. Then $u = (2, 3) = 2v + 3w$, and there's no way to write $(2, 3)$ as $av + w$.

Comment by satvikberi on What are good resources for learning functional programming? · 2019-07-05T06:06:01.526Z · score: 15 (5 votes) · LW · GW

My favorite book, by far, is Functional Programming in Scala. This book has you derive most of the concepts from scratch, to the point where even complex abstractions feel like obvious consequences of things you've already built.

If you want something more Haskell-focused, a good choice is Programming in Haskell.

Comment by satvikberi on Raemon's Scratchpad · 2019-07-03T22:16:35.871Z · score: 13 (7 votes) · LW · GW

I didn't downvote, but I agree that this is a suboptimal meme – though the prevailing mindset of "almost nobody can learn Calculus" is much worse.

As a datapoint, it took me about two weeks of obsessive, 15 hour/day study to learn Calculus to a point where I tested out of the first two courses when I was 16. And I think it's fair to say I was unusually talented and unusually motivated. I would not expect the vast majority of people to be able to grok Calculus within a week, though obviously people on this site are not a representative sample.

Comment by satvikberi on What are principled ways for penalising complexity in practice? · 2019-07-02T16:58:29.042Z · score: 11 (3 votes) · LW · GW

A good exposition of the related theorems is in Chapter 6 of Understanding Machine Learning (https://www.amazon.com/Understanding-Machine-Learning-Theory-Algorithms/dp/1107057132).

There are several related theorems. Roughly:

1. The error on real data will be within epsilon of the error on the training set, where epsilon is roughly proportional to (VC dimension / datapoints). This is the one I linked above.

2. The error on real data will be similar to the error of the best hypothesis in the hypothesis class, with similar proportionality.

3. Special case of 2 – if the true hypothesis is in the hypothesis class, then the absolute error will be < epsilon (since the absolute error is just the difference from the true, best hypothesis.)

3 is probably the one you're thinking of, but you don't need the hypothesis to be in the class.
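
For concreteness, the bound in (1) typically looks like this (one standard form; constants vary by source): with probability at least $1 - \delta$ over $m$ training points,

$$ L_{\mathcal{D}}(h) \;\le\; L_S(h) + \sqrt{\frac{d\,(\ln(2m/d) + 1) + \ln(4/\delta)}{m}}, $$

where $d$ is the VC dimension; the square-root term is the epsilon above, shrinking as $m/d$ grows.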

Comment by satvikberi on What are principled ways for penalising complexity in practice? · 2019-06-29T20:09:14.047Z · score: 5 (3 votes) · LW · GW

Yes, roughly speaking, if you multiply the VC dimension by n, then you need n times as much training data to achieve the same performance. (More precise statement here: https://en.wikipedia.org/wiki/Vapnik%E2%80%93Chervonenkis_dimension#Uses) There are also a few other bounds you can get based on VC dimension. In practice these bounds are way too large to be useful, but an algorithm with much higher VC dimension will generally overfit more.
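
Up to constants, the agnostic-case sample complexity behind that statement is

$$ m(\epsilon, \delta) = \Theta\!\left(\frac{d + \ln(1/\delta)}{\epsilon^2}\right), $$

which is linear in the VC dimension $d$: multiplying $d$ by $n$ multiplies the required data by $n$.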

Comment by satvikberi on What are principled ways for penalising complexity in practice? · 2019-06-28T19:52:54.023Z · score: 13 (4 votes) · LW · GW

A different view is to look at the search process for the models, rather than the model itself. If model A is found from a process that evaluates 10 models, and model B is found from a process that evaluates 10,000, and they otherwise have similar results, then A is much more likely to generalize to new data points than B.

The formalization of this concept is called VC dimension and is a big part of Machine Learning Theory (although arguably it hasn't been very helpful in practice): https://en.wikipedia.org/wiki/Vapnik%E2%80%93Chervonenkis_dimension
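
As a toy illustration (a hypothetical simulation, not anyone's actual tooling), here's the best-of-N effect with strategies that have zero true edge:

    import numpy as np

    rng = np.random.default_rng(0)

    def best_backtest_sharpe(n_models: int, n_days: int = 250) -> float:
        """Backtest n_models strategies with zero true edge; return the best in-sample Sharpe."""
        # Daily returns are pure noise, so any apparent edge is a selection artifact.
        returns = rng.normal(0.0, 0.01, size=(n_models, n_days))
        sharpes = returns.mean(axis=1) / returns.std(axis=1) * np.sqrt(250)
        return float(sharpes.max())

    print(best_backtest_sharpe(10))      # modest-looking best Sharpe
    print(best_backtest_sharpe(10_000))  # impressive-looking, and equally fake

The bigger search reliably "finds" a strategy that looks great in-sample, purely through selection.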

Comment by satvikberi on Crypto quant trading: Naive Bayes · 2019-05-09T17:18:02.987Z · score: 1 (1 votes) · LW · GW

It's a combination. The point is to throw out algorithms/parameters that do well on backtests when the assumptions are violated, because those are much more likely to be overfit.

Comment by satvikberi on Crypto quant trading: Naive Bayes · 2019-05-09T03:24:07.505Z · score: 1 (1 votes) · LW · GW

As an example, consider a strategy like "on Wednesdays, the market is more likely to have a large move, and signal XYZ predicts big moves accurately." You can encode that as an algorithm: trade signal XYZ on Wednesdays. But the algorithm might make money on backtests even if the assumptions are wrong! By examining the individual components rather than just whether the algorithm made money, we get a better idea of whether the strategy works.
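
A sketch of what examining the components might look like in code (the DataFrame layout and column names are made up for illustration):

    import pandas as pd

    def validate_components(df: pd.DataFrame) -> dict:
        """Check each assumption behind 'trade signal XYZ on Wednesdays' separately.

        Assumes hypothetical columns: 'weekday', 'abs_move' (size of the day's
        move), and 'signal_fired' (boolean: whether XYZ predicted a big move).
        """
        wed = df["weekday"] == "Wed"
        fired = df["signal_fired"]
        return {
            # Assumption 1: Wednesdays actually have larger moves.
            "wed_avg_move": df.loc[wed, "abs_move"].mean(),
            "other_avg_move": df.loc[~wed, "abs_move"].mean(),
            # Assumption 2: the signal actually predicts big moves.
            "fired_avg_move": df.loc[fired, "abs_move"].mean(),
            "quiet_avg_move": df.loc[~fired, "abs_move"].mean(),
        }

If either comparison comes out flat, the algorithm's backtest profits are suspect even if they look good.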

Comment by satvikberi on Crypto quant trading: Naive Bayes · 2019-05-08T18:48:01.771Z · score: 17 (7 votes) · LW · GW

Yes, avoiding overfitting is the key problem, and you should expect almost anything to be overfit by default. We spend a lot of time on this (I work w/Alexei). I'm thinking of writing a longer post on preventing overfitting, but these are some key parts:

  • Theory. Something that makes economic sense, or has worked in other markets, is more likely to work here.
  • Components. A strategy made of 4 components, each of which can be independently validated, is a lot more likely to keep working than one black box.
  • Measuring strategy complexity. If you explore 1,000 possible parameter combinations, that's less likely to work than if you explore 10 (see the rough bound just after this list).
  • Algorithmic decision making. Any manual part of the process introduces a lot of possibilities for overfit.
  • Abstraction & reuse. The more you reuse things, the fewer degrees of freedom you have with each idea, and therefore the lower your chance of overfitting.
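
On the strategy-complexity point, a rough extreme-value approximation: the best of $N$ independent zero-edge backtests will show an apparent Sharpe of roughly

$$ \mathbb{E}\Big[\max_{i \le N} \widehat{S}_i\Big] \approx \sigma \sqrt{2 \ln N}, $$

where $\sigma$ is the standard error of a single backtest's Sharpe estimate – so the bar for "interesting" should rise with $\ln N$, not stay fixed.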

Comment by satvikberi on Crypto quant trading: Intro · 2019-04-22T16:07:08.572Z · score: 2 (2 votes) · LW · GW

Yes

Comment by satvikberi on When should we expect the education bubble to pop? How can we short it? · 2019-02-10T19:01:43.507Z · score: 18 (6 votes) · LW · GW

I think the prior that bubbles usually pop is incorrect. We tend to call something a bubble in retrospect, after it's popped.

But if you try to define bubbles with purely forward-looking measures, like a period of unusually high growth, they're more frequently followed by periods of unusually slow growth, not rapid decline. For example, Amazon's stock would pass just about any test of a bubble over most points in its history.

I expect something similar with education: spending will likely remain high, but grow more slowly than it did in the last 20 years. That's especially true because of the structure of student loans: people can't really just default.

But to answer the more direct question: assuming that there is a rapid drop in education spending, how could we profit from it? Vocational schools seem like the most obvious bet, e.g. schools training programmers, dental assistants, massage therapists, electricians, and so on.

Certification services that manage to develop a reputation should do well too, e.g. Salesforce certificates are pretty valuable.

You could directly short lenders such as Sallie Mae.

Recruitment agencies that specialize in placing recent college graduates will likely suffer.

Management consulting firms rely heavily on college graduates, and so do hedge funds to a lesser extent.

Comment by satvikberi on Iteration Fixed Point Exercises · 2018-11-28T02:19:15.111Z · score: 9 (2 votes) · LW · GW

#6:

Assume WLOG that $x \le f(x)$ (e.g. take $x = \bot$). Then by monotonicity, we have $x \le f(x) \le f^2(x) \le \dots$ If this chain were all strictly increasing, then we would have more than $|L|$ distinct elements. Thus there must be some $n$ such that $f^n(x) = f^{n+1}(x)$. By induction, $f^m(x) = f^n(x)$ for all $m \ge n$.

#7:

Assume $x \le f(x)$ and construct a chain similarly to (6), indexed by elements of an ordinal with cardinality greater than $|L|$. If all inequalities were strict, we would have an injection from that ordinal to $L$, which is impossible.

#8:

Let $F$ be the set of fixed points. Any subset $S$ of $F$ must have a least upper bound $x$ in $L$. If $x$ is a fixed point, done. Otherwise, consider the least fixed point $y \ge x$, which must exist by (7). For any $q \in S$, we have $q \le x \le y$. Thus $y$ is an upper bound of $S$ in $F$. To see that it is the least upper bound, assume we have some other upper bound $b$ of $S$ in $F$. Then $x \le b$, and since $y$ is the least fixed point above $x$, $y \le b$.

To get the lower bound, note that we can flip the inequalities in L and still have a complete lattice.

#9:

P(A) clearly forms a lattice where the upper bound of any set of subsets is their union, and the lower bound is the intersection.

To see that injections are monotonic, assume $S \subseteq T$ and $f$ is an injection. For any function, $S \subseteq T$ implies $f(S) \subseteq f(T)$. If $S \subsetneq T$ and $f(S) = f(T)$, that implies $f(t) = f(s)$ for some $t \in T \setminus S$ and $s \in S$, which is impossible since $f$ is injective. Thus $f$ is (strictly) monotonic.

Now suppose $f: A \to B$ and $g: B \to A$ are injections. Let $C$ be the set of all points not in the image of $g$, and let $h(S) = C \cup g(f(S))$. Note that $h$ is monotonic, since no element of $C$ is in the image of $g$. Then $h$ has a fixed point $A' = C \cup g(f(A'))$ by (8). On one hand, every element of $A$ not contained in $A'$ is in the image of $g$ but not in $g(f(A'))$, so it's in $g(B \setminus f(A'))$ by construction; thus $A \setminus A' \subseteq g(B \setminus f(A'))$. On the other, clearly $g(B \setminus f(A')) \subseteq A \setminus A'$, so $A \setminus A' = g(B \setminus f(A'))$. QED.

#10:

We form two bijections using the sets from (9), one between A' and B', the other between A - A' and B - B'.

Any injection is a bijection between its domain and image. Since $A' \subseteq A$ and $f$ is an injection, $f$ is a bijection between $A'$ and $B' = f(A')$, where we can assign each element $a \in A'$ to the $b \in B$ such that $f(a) = b$. Similarly, $g$ is a bijection between $B \setminus B'$ and $A \setminus A'$. Combining them, we get a bijection on the full sets.

Comment by satvikberi on Kalman Filter for Bayesians · 2018-10-27T03:39:15.796Z · score: 3 (2 votes) · LW · GW

Thanks! Edited.

Comment by satvikberi on Kalman Filter for Bayesians · 2018-10-27T03:38:59.513Z · score: 3 (2 votes) · LW · GW

Thanks! Edited. Yeah, I specifically focused on variance because of how Bayesian updates combine Normal distributions.

Comment by satvikberi on Inference & Empiricism · 2018-03-20T16:17:34.215Z · score: 7 (2 votes) · LW · GW

Yes.

Comment by satvikberi on Rationalist Lent · 2018-02-14T21:16:56.695Z · score: 20 (7 votes) · LW · GW

I'm specifically giving up games that encourage many short check-ins, e.g. most phone games and idle games. Binges aren't a big issue for me, they tend to give me joy and renewal. But frequent check-in games make me less happy and less productive.

Comment by satvikberi on Hammers and Nails · 2018-01-29T23:21:18.996Z · score: 19 (4 votes) · LW · GW

"Prefer a few large, systematic decisions to many small ones."

  1. Pick what percentage of your portfolio you want in various assets, and rebalance quarterly, rather than making regular buying/selling decisions
  2. Prioritize once a week, and by default do whatever's next on the list when you complete a task.
  3. Set up recurring hangouts with friends at whatever frequency you enjoy (e.g. weekly). Cancel or reschedule on an ad-hoc basis, rather than scheduling ad-hoc
  4. Rigorously decide how you will judge the results of experiments, then run a lot of them cheaply. Machine Learning example: pick one evaluation metric (might be a composite of several sub-metrics and rules), then automatically run lots of different models and do a deeper dive into the 5 that perform particularly well
  5. Make a packing checklist for trips, and use it repeatedly
  6. Figure out what criteria would make you leave your current job, and only take interviews that plausibly meet those criteria
  7. Pick a routine for your commute, e.g. listening to podcasts. Test new ideas at the routine level (e.g. podcasts vs books)
  8. Find a specific method for deciding what to eat - for me, this is querying system 1 to ask how I would feel after eating certain foods, and picking the one that returns the best answer
  9. Accept every time a coworker asks for a game of ping-pong, as a way to get exercise, unless I'm about to enter a meeting
  10. Always suggest the same small set of places for coffee or lunch meetings
Comment by satvikberi on A LessWrong Crypto Autopsy · 2018-01-29T22:54:47.855Z · score: 52 (17 votes) · LW · GW

A good general rule here is to think in terms of what percentage of your portfolio (or net worth) you want in a specific asset class, rather than making buying/selling a binary decision. Then rebalance every 3 months.

For example, you might decide you want 2.5%-5% in crypto. If the price quadrupled, you would sell about 75% of your stake at the end of the quarter. If it halved, you would buy more.

The major benefit is that this moves you from making many small decisions to one big decision, which is usually easier to get right.
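
A minimal sketch of the rule (hypothetical numbers: 2.5% target weight, crypto quadrupling while everything else stays flat):

    def rebalance_trade(position_value: float, portfolio_value: float,
                        target_weight: float) -> float:
        """Dollars of the position to sell (negative means buy) to restore the target weight."""
        return position_value - target_weight * portfolio_value

    # Hypothetical numbers: 2.5% target weight; crypto quadrupled from
    # 2,500 to 10,000 while the rest of a 100,000 portfolio stayed flat.
    portfolio = 97_500 + 10_000
    print(rebalance_trade(10_000, portfolio, 0.025))  # ~7,312: sell about 73% of the stake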

Comment by satvikberi on Why did everything take so long? · 2017-12-30T16:32:54.600Z · score: 16 (4 votes) · LW · GW

It's widely believed that, before agriculture, humans basically lived in <200-person tribes that didn't interact with each other much, so one might wonder how anything got any amount of adoption.

And only about 100 years ago, humanity essentially forgot the cure for scurvy: http://idlewords.com/2010/03/scott_and_scurvy.htm

Comment by satvikberi on Gears Level & Policy Level · 2017-11-29T19:16:16.469Z · score: 12 (3 votes) · LW · GW

One thing I'd add to that list is that the post focuses on refining existing concepts, which is quite valuable and generally doesn't get enough attention.

Comment by satvikberi on Gears Level & Policy Level · 2017-11-29T19:11:28.875Z · score: 24 (6 votes) · LW · GW

Providing Slack at the project level instead of the task level is a really good idea, and has worked well in many fields outside of programming. It is analogous to the concept of insurance: the RoI on Slack is higher when you aggregate many events with at least partially uncorrelated errors.

One major problem with trying to fix estimates at the task level is that there are strong incentives not to finish a task too early. For example, if you estimated 6 weeks, and are almost done after 3, and something moderately urgent comes up, you're more likely to switch and fix that urgent thing since you have time. On the other hand, if you estimated 4 weeks, you're more likely to delay the other task (or ask someone else to do it).

As a result, I've found that teams are actually likely to finish projects faster and with higher quality if you estimate the project as, say, 8 3-week tasks with 24 weeks of overall slack (so 48 weeks total) than if you estimate the project as 8 6-week tasks.

This is somewhat counterintuitive but really easy to apply in practice if you have a bit of social capital.
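
A toy simulation of the pooling effect (made-up lognormal task durations; the per-task case assumes work expands to fill each task's budget, per the incentive argument above):

    import numpy as np

    rng = np.random.default_rng(0)
    trials, n_tasks = 100_000, 8

    # Each task is estimated at 3 weeks but actually takes a noisy,
    # heavy-tailed amount of time.
    durations = 3 * rng.lognormal(mean=0.0, sigma=0.5, size=(trials, n_tasks))

    # Per-task padding: each task is budgeted 6 weeks, and no task finishes
    # early, so overruns past 6 weeks delay the whole project.
    task_level = np.maximum(durations, 6).sum(axis=1)

    # Pooled slack: 8 x 3-week estimates plus 24 weeks of shared slack.
    project_level = durations.sum(axis=1)

    print((task_level <= 48).mean())     # fraction on time with per-task padding
    print((project_level <= 48).mean())  # fraction on time with pooled slack

With these made-up parameters, the pooled version comes in under 48 weeks essentially always, while the padded version does so only about half the time.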

Comment by satvikberi on The Copernican Revolution from the Inside · 2017-11-04T14:38:16.035Z · score: 3 (3 votes) · LW · GW

Nitpick: as I understand, Feyerabend would agree. His main argument seems to be "any simple methodology for deciding whether a scientific theory is true or false (such as falsificationism) would have missed important advances such as heliocentrism, Newton's theory of gravity, and relativity, therefore philosophers of science should stop trying to formulate simple accept/reject methodologies."

Comment by satvikberi on Competitive Truth-Seeking · 2017-11-02T17:47:57.392Z · score: 24 (8 votes) · LW · GW

I could have been clearer - hiring is definitely a case where you get some points for following consensus, unlike, say, active investing where you're typically measured on alpha. And following consensus on some parts of your process is fine if you have an edge elsewhere (e.g. Google and Facebook pay more than most, so having consensus-level assessment is fine.) But I would argue that for most startups you'll see something like order-of-magnitude improvements through Process C.

Comment by satvikberi on Inadequacy and Modesty · 2017-10-30T13:43:50.602Z · score: 18 (6 votes) · LW · GW

I think the main cause is that people who view themselves as solving a problem are often using the procedure "look at the current pattern and try to find issues with it." A process that complements this well is "look at what's worked historically, and do more of it."

Some examples I wrote about a while back: lesswrong.com/lw/iro/systematic_lucky_breaks/

Comment by satvikberi on "Flinching away from truth” is often about *protecting* the epistemology · 2016-12-21T01:03:28.358Z · score: 6 (5 votes) · LW · GW

In my head, it feels mostly like a tree, e.g:

"I must have spelled oshun right"

–Otherwise I can't write well

– –If I can't write well, I can't be a writer

–Only stupid people misspell common words

– –If I'm stupid, people won't like me

etc. For me, to unravel an irrational alief, I generally have to solve every node below it–e.g., by making sure that I get the benefit from some other alief.

Comment by satvikberi on CFAR’s new focus, and AI Safety · 2016-12-03T05:35:07.558Z · score: 13 (13 votes) · LW · GW

Definitely agree with the importance of hypothesis generation and the general lack of it–at least for me, I would classify this as my main business-related weakness, relative to successful people I know.

Comment by satvikberi on On the importance of Less Wrong, or another single conversational locus · 2016-11-28T16:28:05.639Z · score: 1 (1 votes) · LW · GW

Making shared vocabulary common and explicit usually makes it faster to iterate. For example, the EA community converged on the idea of replaceability as an important heuristic for career decisions for a while, and then realized that they'd been putting too much emphasis there and explicitly toned it down. But the general concept had been floating around in discussion space already, giving it a name just made it easier to explicitly think about.

Comment by satvikberi on On the importance of Less Wrong, or another single conversational locus · 2016-11-28T05:52:49.828Z · score: 3 (3 votes) · LW · GW

Hacker News was rewritten in something other than Arc ~2-3 years ago IIRC, and it was only after that that they managed to add a lot of the interesting moderation features.

There are probably better technologies to build an HN clone with today – Clojure seems strictly better than Arc, for instance. But the parts of HN that are interesting to copy are the various discussion and moderation features, and my sense of what they are mostly comes from having observed the site and seeing comments here and there over the years.

Comment by satvikberi on On the importance of Less Wrong, or another single conversational locus · 2016-11-28T05:39:46.071Z · score: 7 (7 votes) · LW · GW

I do. I was a product manager for about a year, then a founder for a while, and am now the manager of a data science team, where part of my responsibilities is basically product management for things related to the team.

That said, I don't think I was great at it, and suspect most of the lessons I learned are easily transferred.

Edit: I actually suspect that I've learned more from working with really good product managers than I have from doing any part of the job myself. It really seems to be a job where experience is relatively unimportant, but a certain set of general cognitive patterns is extremely important.

Comment by satvikberi on On the importance of Less Wrong, or another single conversational locus · 2016-11-28T03:50:08.734Z · score: 5 (5 votes) · LW · GW

Yeah, a good default is the UNODIR pattern ("I will do X at Y time unless otherwise directed")

Comment by satvikberi on On the importance of Less Wrong, or another single conversational locus · 2016-11-28T03:45:39.609Z · score: 2 (4 votes) · LW · GW

I wonder if the correct answer is essentially to fork Hacker News, rather than Reddit (Hacker News isn't open source, but I'm thinking about a site that takes Hacker News's decisions as the default, unless there seems to be a good reason for something different.)

Comment by satvikberi on On the importance of Less Wrong, or another single conversational locus · 2016-11-28T03:06:25.557Z · score: 15 (15 votes) · LW · GW

Alternatively, go with the Hacker News model of only enabling downvotes after you've accumulated a large amount of karma (enough to put you in, say, the top .5% of users.) I think this gets most of the advantages of downvotes without the issues.

Comment by satvikberi on On the importance of Less Wrong, or another single conversational locus · 2016-11-27T22:42:41.246Z · score: 12 (12 votes) · LW · GW

Agree with both the sole dictatorship and Vaniver as the BDFL, assuming he's up for it. His posts here also show a strong understanding of the problems affecting less wrong on multiple fronts.

Comment by satvikberi on On the importance of Less Wrong, or another single conversational locus · 2016-11-27T22:30:42.045Z · score: 10 (10 votes) · LW · GW

Agree that a lot more clarity would help.

Assuming Viliam's comment on the troll is accurate, that's probably sufficient to explain the decline: http://lesswrong.com/lw/o5z/on_the_importance_of_less_wrong_or_another_single/di2n

Comment by satvikberi on On the importance of Less Wrong, or another single conversational locus · 2016-11-27T22:26:11.146Z · score: 17 (17 votes) · LW · GW

Wow, that is a pretty big issue. Thank you for mentioning this.

Agree with all your points. Personally, I would much rather post on a site where moderation is too powerful and moderators err towards being too opinionated, for issues like this one. Most people don't realize just how much work it is to moderate a site, or how much effort is needed to make it anywhere close to useful.

Comment by satvikberi on On the importance of Less Wrong, or another single conversational locus · 2016-11-27T21:53:35.847Z · score: 9 (9 votes) · LW · GW

100% centralization is obviously not correct, but 100% decentralization seems to have major flaws as well–for example, it makes discovery, onboarding, and progress in discussion a lot harder.

On the last point: I think the LW community has discovered ways to have better conversations, such as tabooing words. Being able to talk to someone who has the same set of prerequisites allows for much faster, much more interesting conversation, at least on certain topics. The lack of any centralization means that we're not building up a set of prerequisites, so we're stuck at conversation level 2 when we need to achieve level 10.

Comment by satvikberi on On the importance of Less Wrong, or another single conversational locus · 2016-11-27T19:06:55.192Z · score: 10 (10 votes) · LW · GW

Also happy to join. And I'm happy to commit to a significant amount of moderation (e.g. 10 hours/week for the next 3 months) if you think it's useful.

Comment by satvikberi on On the importance of Less Wrong, or another single conversational locus · 2016-11-27T17:18:43.105Z · score: 27 (27 votes) · LW · GW

On the idea of a vision for a future, if I were starting a site from scratch, I would love to see it focus on something like "discussions on any topic, but with extremely high intellectual standards". Some ideas:

  • In addition to allowing self-posts, a major type of post would be a link to a piece of content with an initial seed for discussion
  • Refine upvotes/downvotes to make it easier to provide commentary on a post, e.g. "agree with the conclusion but disagree with the argument", or "accurate points, but ad-hominem tone".
  • A fairly strict and clearly stated set of site norms, with regular updates, and a process for proposing changes
  • Site erring on the side of being over-opinionated. It doesn't necessarily need to be the community hub
  • Votes from highly-voted users count for more.
  • Integration with predictionbook or something similar, to show a user's track record in addition to upvotes/downvotes. Emphasis on getting many people to vote on the same set of standardized predictions
  • A very strong bent on applications of rationality/clear thought, as opposed to a focus on rationality itself. I would love to see more posts on "here is how I solved a problem I or other people were struggling with"
  • No main/discussion split. There are probably other divisions that make sense (e.g. by topic), but this mostly causes a lot of confusion
  • Better notifications around new posts, or new comments in a thread. Eg I usually want to see all replies to a comment I've made, not just the top level
  • Built-in argument mapping tools for comments
  • Shadowbanning, a la Hacker News
  • Initially restricted growth, e.g. by invitation only
Comment by satvikberi on On the importance of Less Wrong, or another single conversational locus · 2016-11-27T16:59:00.436Z · score: 7 (7 votes) · LW · GW

On (4), does anyone have a sense of how much it would cost to improve the code base? Eg would it be approximately $1k, $10k, or $100k (or more)? Wondering if it makes sense to try and raise funds and/or recruit volunteers to do this.

Comment by satvikberi on On the importance of Less Wrong, or another single conversational locus · 2016-11-27T16:33:57.282Z · score: 3 (3 votes) · LW · GW

Thinking about this more, I think that moderator status matters more than specific moderator privilege. Without one or more people like this, it's pretty difficult to actually converge on new norms. I could make some posts suggesting new norms for e.g. posting to main vs. discussion, but without someone taking an ownership role in the site there's no way to cause that to happen.

I suspect one of the reasons people have moved discussions to their own blogs or walls is because they feel like they actually can affect the norms there. Unofficial status works (cf. Eliezer, Yvain) but is not very scalable–it requires people willing to spend a lot of time writing content as well as thinking about, discussing, and advocating for community norms. I think you, Ben, Sarah etc. committing to posting here makes a lesswrong revival more likely to succeed, and I would place even higher odds on it if 1 or more people committed to spending a significant amount of time on work such as:

  • Clarifying what type of content is encouraged on less wrong, and what belongs in discussion vs. main
  • Writing up a set of discussion norms that people can link to when saying "please do X"
  • Talking to people and observing the state of the community in order to improve the norms
  • Regularly reaching out to other writers/cross-posting relevant content, along with the seeds of a discussion
  • Actually banning trolls
  • Managing some ongoing development to improve site features
Comment by satvikberi on On the importance of Less Wrong, or another single conversational locus · 2016-11-27T10:09:13.089Z · score: 4 (4 votes) · LW · GW

> This seems right to me. It seems to me that "moderation" in this sense is perhaps better phrased as "active enforcement of community norms of good discourse", not necessarily by folks with admin privileges as such. Also simply explicating what norms are expected, or hashing out in common what norms there should be.

I agree that there should be much more active enforcement of good norms than heavy-handed moderation (banning etc.), but I have a cached thought that lack of such moderation was a significant part of why I lost interest in lesswrong.com, though I don't remember specific examples.

> In hindsight, I think I undervalued the importance of pointing out minor reasoning/content errors on Less Wrong. "Someone is wrong on less wrong" seems to me to be a problem actually worth fixing; it seems like that's how we make a community that is capable of vetting arguments.

Completely agree. One particularly important mechanism, IMO, is that brains tend to pay substantially more attention to things they perceive other humans caring about. I know I write substantially better code when someone I respect will be reviewing it in detail, and that I have trouble rousing the same motivation without that.

Comment by satvikberi on On the importance of Less Wrong, or another single conversational locus · 2016-11-27T09:59:52.829Z · score: 4 (4 votes) · LW · GW

My theory is that the main things that matter are content and enforcement of strong intellectual norms, and both degraded around the time a few major high-status members of the community mostly stopped posting (e.g. Eliezer and Yvain.)

The problem with lack of content is obvious, the problem with lack of enforcement is that most discussions are not very good, and it takes a significant amount of feedback to make them better. But it's hard for people to get away with giving subtle criticism unless they're already a high-status member of a community, and upvotes/downvotes are just not sufficiently granular.

Comment by satvikberi on On the importance of Less Wrong, or another single conversational locus · 2016-11-27T06:07:50.056Z · score: 16 (16 votes) · LW · GW

I think this is completely correct, and have been thinking along similar lines lately.

The way I would describe the problem is that truth-tracking is simply not the default in conversation: people have a lot of other goals, such as signaling alliances, managing status games, and so on. Thus, you need substantial effort to develop a conversational place where truth tracking actually is the norm.

The two main things I see Less Wrong (or another forum) needing to succeed at this are good intellectual content and active moderation. The need for good content seems fairly self-explanatory. Active moderation can provide a tighter feedback loop pushing people towards pro-intellectual norms, e.g. warning people when an argument uses the noncentral fallacy (upvotes & downvotes work fairly poorly for this.)

I'll try to post more content here too, and would be happy to volunteer to moderate if people feel that's useful/needed.

Comment by satvikberi on The correct response to uncertainty is *not* half-speed · 2016-01-16T20:56:57.217Z · score: 3 (3 votes) · LW · GW

My brain often defaults to thinking of these situations in terms of potential loss, and I find the CFAR technique of reframing it as potential gain helpful. For example, my initial state might be "If I go ahead at full speed and the hotel is behind me, I'll lose half an hour. But if I turn around and the hotel is ahead of me, I'll also lose time." The better state is "By default, driving at half speed might get me to the hotel in 15 minutes if I'm going in the right direction, and I'll save ~8 minutes by going faster. Even if the hotel is behind me, I'll save time by driving ahead faster."

Comment by satvikberi on The correct response to uncertainty is *not* half-speed · 2016-01-16T20:24:32.564Z · score: 4 (4 votes) · LW · GW

My procedure is probably similar cost, but more general:

  1. State my goal(s), e.g. "get to the hotel"
  2. Find the point of highest uncertainty towards the goal, e.g. "not sure if the hotel is ahead or behind me"
  3. Come up with plans for reducing the uncertainty, e.g. "find the next gas station and ask someone"
  4. Check whether the plan I have actually feels like it'll work

Note that this can be applied pretty broadly, e.g. to business strategy, software design, making friends etc.

Comment by satvikberi on Survey: What's the most negative*plausible cryonics-works story that you know? · 2015-12-23T22:19:22.855Z · score: 2 (2 votes) · LW · GW

The process of revival is imperfect, and pieces of memories are frequently missing. None of your loved ones remember you, and some of them are in permanent Alzheimers-like states. One person claims to have been close to pre-revival you, but you don't remember them. Having felt the pain of being rejected by your closest friend, you decide to trust them. That turns out to be an elaborate scam, possibly motivated by pure sadism, and you're now alone in a world you don't recognize and where you have to be suspicious of everyone you meet.

Comment by satvikberi on Survey: What's the most negative*plausible cryonics-works story that you know? · 2015-12-23T22:12:50.358Z · score: 1 (1 votes) · LW · GW

Playing off of #2: The process of revival also allows for essentially infinite cloning. Unable to reconcile this with a desire for uniqueness, people decided that revived humans aren't quite real, and don't have legal rights. Thousands of copies of you are cloned or simulated for human experimentation, which has become extremely common now that it can be accurately done without hurting "real" humans. No version of you is ever revived in a context you would enjoy, because after all, you don't count as real.

Comment by satvikberi on Realistic epistemic expectations · 2015-06-26T16:35:02.292Z · score: 1 (1 votes) · LW · GW

One approach I've been working with is sharing models rather than arguments. For example, nbouscal and I recently debated the relative importance of money for effective altruism. It turned out that our disagreement came down to a difference in our models of self-improvement: he believes that personal growth mostly comes from individual work and learning, while I believe that it mostly comes from working with people who have skills you don't have.

Any approach that started with detailed arguments would have been incredibly inefficient, because it would have taken much longer to find the source of our disagreement. Starting with a high-level approach that described our basic models made it much easier for us to hone in on arguments that we hadn't thought about before and make better updates.

Comment by satvikberi on Intrinsic motivation is crucial for overcoming akrasia · 2015-06-26T16:20:52.880Z · score: 0 (0 votes) · LW · GW

I've found my motivation frequently depends much more on external factors than on the task itself. For example, I worked on Math for essentially every waking hour at Berkeley (and completed all the Master's courses when I was 19 as a result), and I worked on programming/data science tasks 80 hours/week at one job. But I've had very similar types of tasks in different environments and found it quite difficult to put in anything near the same number of hours.

I've found that my motivation seems to be constraint-based: if I feel like I'm getting enough sleep, enough socialization, enough food etc. then I find it very easy to work a lot. But if any one of these is lacking then my motivation plummets. In particular, all the environments where I worked exceptionally hard were ones where I was surrounded by people who felt like part of "my tribe": simply being around people whom I like isn't enough.