Posts

Should we build thermometer substitutes? 2020-03-19T07:43:35.243Z · score: 6 (3 votes)
How can your own death be bad? Los Angeles LW/SSC Meetup #150 (Wednesday, March 4th) 2020-03-04T22:57:16.629Z · score: 4 (1 votes)
Sleeping Beauty - Los Angeles LW/SSC Meetup #149 (Wednesday, February 26th) 2020-02-26T20:30:19.812Z · score: 4 (1 votes)
Newcomb's Paradox 3: What You Don't See - Los Angeles LW/SSC Meetup #148 (Wednesday, February 19th) 2020-02-19T20:55:53.362Z · score: 4 (1 votes)
Newcomb's Paradox: Take Two - Los Angeles LW/SSC Meetup #147 (Wednesday, February 12th) 2020-02-11T06:03:07.552Z · score: 5 (2 votes)
Newcomb's Paradox - Los Angeles LW/SSC Meetup #146 (Wednesday, February 5th) 2020-02-05T03:49:11.541Z · score: 4 (1 votes)
Peter Norvig Contra Chomsky - Los Angeles LW/SSC Meetup #145 (Wednesday, January 29th) 2020-01-28T06:30:50.503Z · score: 4 (1 votes)
Moral Mazes - Los Angeles LW/SSC Meetup #144 (Wednesday, January 22nd) 2020-01-22T22:55:10.302Z · score: 4 (1 votes)
Data Bias - Los Angeles LW/SSC Meetup #143 (Wednesday, January 15th) 2020-01-15T21:49:58.949Z · score: 4 (1 votes)
Iterate Fast - Los Angeles LW/SSC Meetup #142 (Wednesday, January 8th) 2020-01-08T22:53:17.893Z · score: 4 (1 votes)
Predictions - Los Angeles LW/SSC Meetup #141 (Wednesday, January 1st) 2020-01-01T22:14:12.686Z · score: 4 (1 votes)
Your Price for Joining - Los Angeles LW/SSC Meetup #140 (Wednesday, December 18th) 2019-12-18T23:03:16.960Z · score: 4 (1 votes)
Concretize Multiple Ways - Los Angeles LW/SSC Meetup #139 (Wednesday, December 11th) 2019-12-11T21:38:56.118Z · score: 4 (1 votes)
Execute by Default - Los Angeles LW/SSC Meetup #138 (Wednesday, December 4th) 2019-12-04T23:00:49.503Z · score: 4 (1 votes)
Antimemes - Los Angeles LW/SSC Meetup #137 (Wednesday, November 27th) 2019-11-27T20:46:01.504Z · score: 4 (1 votes)
PopSci Considered Harmful - Los Angeles LW/SSC Meetup #136 (Wednesday, November 20th) 2019-11-20T22:08:34.467Z · score: 4 (1 votes)
Do Not Call Up What You Cannot Put Down - Los Angeles LW/SSC Meetup #135 (Wednesday, November 13th) 2019-11-13T22:11:33.257Z · score: 4 (1 votes)
Warnings From Self-Knowledge - Los Angeles LW/SSC Meetup #134 (Wednesday, November 6th) 2019-11-06T23:22:43.391Z · score: 4 (1 votes)
Technique Taboo or Autopilot - Los Angeles LW/SSC Meetup #133 (Wednesday, October 30th) 2019-10-30T20:39:51.357Z · score: 4 (1 votes)
Assume Misunderstandings - Los Angeles LW/SSC Meetup #132 (Wednesday, October 23rd) 2019-10-23T21:10:22.388Z · score: 4 (1 votes)
Productized Spaced Repetition - Los Angeles LW/SSC Meetup #131 (Wednesday, October 16th) 2019-10-16T21:33:55.593Z · score: 4 (1 votes)
The Litany of Tarski - Los Angeles LW/SSC Meetup #129 (Wednesday, October 2nd) 2019-10-02T23:08:54.586Z · score: 4 (1 votes)
Strategies for Inducing Decoupling - Los Angeles LW/SSC Meetup #128 (Wednesday, September 25th) 2019-09-25T19:53:22.012Z · score: 4 (1 votes)
Test Your Understanding Quickly - Los Angeles LW/SSC Meetup #127 (Wednesday, September 18th) 2019-09-18T20:05:59.163Z · score: 4 (1 votes)
SSC Meetups Everywhere: Los Angeles, CA 2019-09-14T03:22:22.125Z · score: 0 (0 votes)
Knowing (When) is Half the Battle - Los Angeles LW/SSC Meetup #126 (Wednesday, September 11th) 2019-09-11T21:02:16.077Z · score: 4 (1 votes)
Clever is not Good - Los Angeles LW/SSC Meetup #125 (Wednesday, September 4th) 2019-09-04T22:27:36.685Z · score: 4 (1 votes)
The Smallest Step - Los Angeles LW/SSC Meetup #124 (Wednesday, August 28th) 2019-08-28T21:19:33.147Z · score: 4 (1 votes)
Don't Argue on the Internet - Los Angeles LW/SSC Meetup #123 (Wednesday, August 21st) 2019-08-21T23:25:33.018Z · score: 4 (1 votes)
Escaping Inadequate Equilibria - Los Angeles LW/SSC Meetup #122 (Wednesday, August 14th) 2019-08-14T20:49:41.418Z · score: 4 (1 votes)
"Objective" Metrics are Shared Ontologies - Los Angeles LW/SSC Meetup #121 (Wednesday, August 7th) 2019-08-07T20:09:43.426Z · score: 4 (1 votes)
$25 Detour for a $20 Bill - Los Angeles LW/SSC Meetup #120 (Wednesday, July 31st) 2019-07-31T21:42:18.484Z · score: 4 (1 votes)
The Costs of Reliability - Los Angeles LW/SSC Meetup #119 (Wednesday, July 24th) 2019-07-24T20:16:30.256Z · score: 4 (1 votes)
Developing Scientific Intuitions - Los Angeles LW/SSC Meetup #118 (Wednesday, July 17th) 2019-07-17T20:22:57.600Z · score: 4 (1 votes)
Explain Weirdness - Los Angeles LW/SSC Meetup #117 (Wednesday, July 10th) 2019-07-10T22:42:13.261Z · score: 6 (2 votes)
Skill Frontiers - Los Angeles LW/SSC Meetup #116 (Wednesday, July 3rd) 2019-07-03T20:30:46.002Z · score: 4 (1 votes)
Skill Frontiers - Los Angeles LW/SSC Meetup #115 (Wednesday, June 26th) 2019-06-26T20:26:57.982Z · score: 4 (1 votes)
Godwin's Contractualism - Los Angeles LW/SSC Meetup #114 (Wednesday, June 19th) 2019-06-19T12:58:58.809Z · score: 4 (1 votes)
How to Run a Meetup - Los Angeles LW/SSC Meetup #113 (Wednesday, June 12th) 2019-06-12T17:37:12.575Z · score: 4 (1 votes)
Cultural Learning - Los Angeles LW/SSC Meetup #112 (Wednesday, June 5th) 2019-06-05T19:50:04.800Z · score: 4 (1 votes)
Culture is not about Esthetics - Los Angeles LW/SSC Meetup #111 (Wednesday, May 29th) 2019-05-29T21:59:47.207Z · score: 4 (1 votes)
The Unspeakable - Los Angeles LW/SSC Meetup #110 (Wednesday, May 22nd) 2019-05-22T20:30:49.103Z · score: 4 (1 votes)
Network Effects on Idea Generation - Los Angeles LW/SSC Meetup #109 (Wednesday, May 15th) 2019-05-15T21:35:35.723Z · score: 4 (1 votes)
What are 5-Year Plans, or, The Multivariate Fallacy - Los Angeles LW/SSC Meetup #108 (Wednesday, May 8th) 2019-05-08T20:54:30.414Z · score: 4 (1 votes)
Copenhagen Interpretation of Ethics - Los Angeles LW/SSC Meetup #107 (Wednesday, May 1st) 2019-05-01T23:14:35.812Z · score: 4 (1 votes)
More People - Los Angeles LW/SSC Meetup #106 (Wednesday, April 24th) 2019-04-24T02:50:25.445Z · score: 4 (1 votes)
Negotiating for Pareto Improvements - Los Angeles LW/SSC Meetup #105 (Wednesday, April 17th) 2019-04-17T05:00:12.880Z · score: 4 (1 votes)
How to Have Ideas - Los Angeles LW/SSC Meetup #104 (Wednesday, April 10th) 2019-04-10T07:19:11.158Z · score: 4 (1 votes)
Internet v. Culture (2019) - Los Angeles LW/SSC Meetup #103 (Wednesday, April 3rd) 2019-04-02T06:00:46.641Z · score: 4 (1 votes)
Move Fast and Break Things - Los Angeles LW/SSC Meetup #102 (Wednesday, March 27th) 2019-03-28T00:02:10.056Z · score: 4 (1 votes)

Comments

Comment by t3t on The Human Condition · 2020-08-17T08:10:05.258Z · score: 6 (4 votes) · LW · GW

This is interesting, and I'm (a little) surprised that I hadn't heard about it yet, but I don't think the parallel with Milgram is quite there. Yes, he's asking them to do something against their conscience, but as romeo points out, they more or less have a gun to their heads. And despite that, one of them (presumably) was brave enough (and quick-thinking enough) to surreptitiously record the hilariously blatant election fraud.

Comment by t3t on Half-Baked Products and Idea Kernels · 2020-06-24T06:47:45.243Z · score: 1 (1 votes) · LW · GW

I wouldn't say it's bad advice; it depends heavily on the context of the work. In an environment where you have some combination of:

1) a tight feedback loop with the relevant stakeholder (ideally the individual(s) who are going to be using the end product),

2) a product that is itself amenable to quick iteration (i.e. composed of many smaller features, ideally with a focus on the presentation),

3) requirements that aren't fully clear (for example, the client has a mostly intuitive sense of how certain features should work; perhaps there are many implicit business rules that aren't formally written down anywhere but will surface as obvious gaps - "oh, it's missing the ability to do [x]" - as the product gains capabilities)

...then avoiding significant investment in upfront design and adopting an iterative approach will very often save you from spending a bunch of time designing something that doesn't fit your stakeholder's needs.


On the other hand, suppose you're operating in an environment where those conditions don't hold: you're mostly working on features or products that aren't easily broken down into smaller components that can be individually released or demoed to a stakeholder, and you have fairly clear requirements upfront that don't often change (or you have access to a product manager you can work with to iterate on the requirements until they're sufficiently well-detailed). In that case, doing upfront design can often save you a lot of headache wandering down dark alleys of "oops, we totally didn't account for how we'd incorporate this niche but relatively predictable use-case, so we optimized our design in ways that make it very difficult to add without redoing a lot of work".


Having some experience with both, I'll say that the second seems better: there are fewer meetings and interruptions, and the work is both faster and more pleasant, since there's less context-switching. That's conditional on the planning and product design being competent enough to produce requirements that won't change too often. The downsides when it goes wrong do seem larger (throwing away three months of work feels a lot worse than throwing away two weeks), but ultimately that degenerates into a question of mitigating tail risk vs optimizing for upside, and I have yet to lose three months of work (though I did manage to lose almost two consecutive months at an agile shop prior to this, which was part of a broader pattern that motivated my departure). I would recommend side-stepping the trade-off by finding a place that does the "planning" part well; at that point, whether the team you're on is shipping small features every week or two or working on larger projects that span months is more a question of domain than of effective strategy.

Comment by t3t on Mark Xu's Shortform · 2020-05-04T07:10:13.738Z · score: 6 (3 votes) · LW · GW

There are a few things to keep in mind:

1) The claim that 40 million Americans "deal with hunger" is, um, questionable. Their citation leads to feedingamerica.org, which cites USDA's Household Food Security in the United States report (https://www.ers.usda.gov/webdocs/publications/94849/err-270.pdf?v=963.1). The methodology used is an 11-question survey (18 for households with children), where answering 3 questions in the affirmative marks you as having low food security. The questions asked are (naturally) subjective. Even better, the first question is this: “We worried whether our food would run out before we got money to buy more.” Was that often, sometimes, or never true for you in the last 12 months? That's a real concern to have, but it is not what people are talking about when they say "dealing with hunger". You can be running on a shoestring budget and often worry about whether you'll have enough money for food without ever actually not having enough money for food.

2) A significant percentage of the population has non-trivial issues with executive function. Also, most of the population isn't familiar with "best practices" (in terms of effective life strategies, basic finances, etc). Most people simply don't think about things like this systematically, which is how you get the phenomenon of ~50% of the population not being able to cover a $400 emergency (or whatever those numbers are; they're pretty close). This would be less of an issue if those cultural norms were inherited, but you can't teach something you don't know, and apparently we don't teach Home Economics anymore (not that it'd be sufficient, but it would be better than nothing). This is a subject that deserves a much more in-depth treatment, but I think as a high-level claim this is both close enough to true and sufficient as a cause for what we might observe here. Making an infographic with a rotating course of 10 cheap, easy-to-prepare, relatively healthy, and relatively tasty meals is a great idea, but it'll only be useful to the sorts of people who already know what "meal prep" means. You might catch some stragglers on the margin, but not a lot.

3) The upfront costs are less trivial than they appear if you don't inherit any of the larger items, and remember, 50% of the population can't cover a mid-3-figure emergency. "Basic kitchen equipment" can be had for under $100, but "basic kitchen equipment" doesn't necessarily set you up to prepare food in a "meal prep" kind of way.

Comment by t3t on Should we build thermometer substitutes? · 2020-03-20T01:12:17.285Z · score: 1 (1 votes) · LW · GW

That's fine, thanks!

Comment by t3t on March 14/15th: Daily Coronavirus link updates · 2020-03-17T00:34:25.909Z · score: 6 (4 votes) · LW · GW

Twitter: Seattle approaching Lombardy levels

The claims in that Twitter thread (now deleted) have been retracted: https://mobile.twitter.com/CT_Bergstrom/status/1239348331186249728

Comment by t3t on rmoehn's Shortform · 2020-03-09T21:28:12.911Z · score: 2 (2 votes) · LW · GW

Kai Faust (not sure if he has an account here) has already developed a prototype desktop application (cross-platform via Electron) for this.

Comment by t3t on Meta-discussion from "Circling as Cousin to Rationality" · 2020-01-03T04:19:58.856Z · score: 4 (3 votes) · LW · GW

To reiterate, I don't explicitly use anything like the procedures I described in my posts to do any sort of interpretation. I came up with them to use as levers to attempt to bridge the inferential distance between Said and me; I agree that in practice trying to use those models explicitly would be extremely error-prone (probably better than a random walk, but maybe not by much).

More salient to the point at hand: you understood (to a sufficient degree) the models I was describing, and your criticisms contain information about your understanding of those models. If for whatever reason I wanted to continue discussing those models, those two things being true would make it possible for me to respond further (with clarifications, questions about your interpretations, etc).

Comment by t3t on Meta-discussion from "Circling as Cousin to Rationality" · 2020-01-02T08:48:50.140Z · score: 10 (2 votes) · LW · GW

I was not describing the process I use to interpret novel linguistic compositions such as "authentic relationship" - my brain does that under the hood, automatically, in a process that is fairly opaque to me; despite that, the results are sufficiently accurate that I don't spend hours trying to resolve minutiae, even in highly complex technical domains.

I was attempting to use an analogy with word embeddings in multi-dimensional space to explain why the way you approach information-gathering has asymmetrical costs. I can't come up with another analogy, because your response is totally non-informative with respect to how/why/where my first analogy failed to land. Did you notice that you didn't even tell me whether you're familiar with the concepts used? I have literally zero bytes of information with which to attempt to generate a more targeted analogy.

Would it not be easy for him simply to say that?

This doesn't really seem material to the point I was trying to discuss, but (I imagine) it's because there can be a trade-off between density and precision when trying to convey information. (And, also, how is he supposed to know which parts of his post are going to be incomprehensible to which people? Again, one could put an unbounded amount of effort into specifying with ever more clarity and precision exactly what they mean by every word.)

Your response to Habryka also seems to not materially respond to his main points (the grossly asymmetrical effort involved, and the fact that the time spent is not free, it is traded off against other pursuits).

You list certain outcomes you consider beneficial, but "things are not easy to explain and have hidden complexities" is true for literally everything given a sufficient level of desired precision. It is a fully general argument in favor of asking arbitrarily vague questions.


EDIT: I did want to thank you for your straightforward answer here:

I don’t know how you generated that guess, so my answer can only be the former.

That, at least, would let me move the conversation forward with a tentative conclusion for that question, but unfortunately that answer seems to imply sufficiently different mental machinery that I'm a bit stuck regardless. I'll come back to this if I come up with something exceptionally clever to try to solve that problem, I suppose.

Comment by t3t on Meta-discussion from "Circling as Cousin to Rationality" · 2020-01-02T07:16:11.765Z · score: 9 (5 votes) · LW · GW

If I read "authentic relationship", "a relationship which is built on honest premises and communication (i.e. neither party has lied or misled the other about their background, motivations, or relevant personality characteristics)" is my first guess as to what that would mean. My question is: are you incapable of performing this sort of "decryption work" (as in, the examples you generated are your best effort), or is your chief complaint that it's effortful and error-prone (as in, you could have extrapolated something similar to what I did, but you believe that doing so is epistemically unjustified)?

I am advocating for this because, in practice, this seems to minimize the amount of time and communication necessary to make sure both parties are on the same page w.r.t. the definitions of terms used and the intent behind what is being communicated. The way you ask questions reveals almost nothing about the state of your mental map of the subject of discussion (what you think the boundaries are, how you think it corresponds to the surrounding context, etc). This increases the amount of communication required to answer your question much more than linearly - you know "where" you are confused much better than the author. The author can guess, but the author is dealing with the entire possibility space of things you can be confused about; the amount of work that can go into resolving that confusion is unbounded. However, if you put forth your interpretation, then ask for clarification/correction, the author has a much more constrained space to explore to attempt to diagnose where your map is insufficiently well-specified/pointing at the wrong thing/has some other conflict with the author's map. ~Linear time for you to come up with the most straightforward possible interpretation (contingent on you actually being able to do so - still not clear to what degree this is a disagreement in the allowable degree of inference), + ~linear time for the author to identify mistakes, vs 0 time for you + unbounded time for the author.


The problem I'm having with trying to respond to the rest of your post (and the previous one in the thread) is that I don't feel like I have a better sense of your position on the more critical underlying issues now than when I first replied.

I will try to be more specific still, though I will be leaning on concepts similar to those in ML, such as embeddings, vectors, dimensionality, etc. I can try to find another set of concepts if this doesn't translate well enough. (I already tried to come up with an analogy with interfaces & generics in the software engineering sense, but couldn't actually come up with a coherent model without bringing in intersection types, at which point I gave up. Maybe that gives you some idea of what I was going for anyways.) When you performed the substitutions for "authentic", it looks like you traveled the smallest possible distance away from the "authentic" node, and not in the direction of any cluster of nodes that would be closer to (or have higher connective weight with, if you prefer) "relationship" (or "expression", or "reaction"). Naturally, the node you landed on fit the surrounding context about as well as a square peg in a round hole.

Now, to be absolutely clear, when you say that "authentic" has no standard meaning, are you claiming that "authentic" is equidistant from every other node in your graph (of all possible concepts)? I feel like we've ruled that out, but I'm not 100% sure; if that is the case then the direction I'm going in with the rest of this is probably fruitless.

If not - if you do indeed have a graph in which some concepts are much closer to "authentic" than others - then some of the concepts in the "authentic"-adjacent cluster will likewise be much closer to the "relationship" node along many dimensions than most of the others. What are those dimensions? Relationships have many properties and embedded concepts: participants, duration, style, etc. The dimensions relevant for linking together "authentic" and "relationship" would be more granular, likely describing the terms on which the participants engage in the relationship and the style of communication they use. If you refuse to traverse the graph to any appreciable degree (and to make public where you landed, and ideally the path you followed), it's much harder for anybody else to help you. It's not clear at which level of linguistic abstraction the disconnect lies:

1) you could be missing the "authentic" node altogether (solved by a dictionary),

2) you could be missing the connections from "authentic" to "honest" to "honesty about self" (I don't think this is the problem; it's not clear how to solve it if it is),

3) you could be asserting that those connections in your graph have weights equal to, say, the connections from "authentic" to "tangerine" to "random number generator", so there is literally no way for you to privilege the first set when trying to trace a path from "authentic" to "relationship", because you have no idea which direction to look in (I don't think this is the problem either), or

4) you could be asserting that the first set of connections does indeed have heavier weights, but not to a sufficient degree (if there is any such degree) that you would feel justified in traversing those nodes.
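To make the embedding analogy concrete, here is a minimal, purely illustrative sketch in Python (the vectors below are invented for the example, not taken from any real embedding model): ranking "nearest neighbors" by cosine similarity is the formal version of traveling toward the cluster of nodes with the highest connective weight.

```python
# Purely illustrative: toy 3-d "concept vectors" (made up, not from a real model).
# The point is only that "authentic" sits much nearer to "honest" than to
# "tangerine", so a search for context-appropriate substitutes starts in the
# right neighborhood of concept-space.
import math

concepts = {
    "authentic": [0.90, 0.80, 0.10],
    "honest":    [0.85, 0.75, 0.15],
    "genuine":   [0.88, 0.70, 0.12],
    "tangerine": [0.05, 0.10, 0.95],
}

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def nearest_neighbors(word, k=2):
    """Rank every other concept by similarity to `word` (higher = closer)."""
    others = [(c, cosine_similarity(concepts[word], v))
              for c, v in concepts.items() if c != word]
    return sorted(others, key=lambda pair: pair[1], reverse=True)[:k]

print(nearest_neighbors("authentic"))
# e.g. [('honest', 0.99...), ('genuine', 0.99...)] - "tangerine" ranks last
```

With real embeddings the vectors would have hundreds of dimensions, but the ranking step - the part doing the work in the analogy - is the same.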


EDIT: I want to note that I started writing this comment well before Habryka posted his response. It strikes me that he hit on some very similar things (at one point I edited out a sentence that called your initial question "underspecified"; it's not that it wasn't an accurate description of my feelings on the subject, but I decided to taboo that word because I thought of a better way to explain what I thought the problem was).

Comment by t3t on Meta-discussion from "Circling as Cousin to Rationality" · 2020-01-02T04:28:44.804Z · score: 11 (3 votes) · LW · GW

I find it surprising that you find definitions 1, 2, 4, and 5 inapplicable. "Authentic" is used three times in the original post, and "authenticity" is used twice. "Authentic" is used as a modifier for "expression", "relationships", and "reaction".

Definition 1a from MW:

worthy of acceptance or belief as conforming to or based on fact

"Conforming to or based on fact" feels very similar to "the map corresponds to the territory".

Performing the substitution: "An expression that is worthy of acceptance or belief, as the expression (map) corresponds to the internal state of the agent that generated it (territory)."

This is not necessarily the most trivial possible leap, but to draw in another analogy... if we consider concept-space to be a multidimensional space with connected nodes, the weight of the connection between "authentic" and "honest" is much stronger than between "authentic" and "tangerine". I don't know if you're agreeing with this part of Mark's claim:

It is an applause light that can be used by a speaker to mean whatever they want, with no fixed meaning across contexts and speakers.

But if so, that is the part that I am explicitly disagreeing with (more so along the axis of prescriptivism, but also for descriptivism, just to a lesser degree). That is, ignoring context, "authentic" has a set of definitions and connotations which are relatively tightly clustered, and which rule out the possibility of using it as a substitute for, say, "dishonest". Do you disagree that, in both its formal definitions and its actual in-practice usage, "authentic expression" is much closer to "honest expression" than to "dishonest expression"?

The same analysis seems to apply equally well to "authentic reaction"; "authentic relationship" does seem to require linking together slightly more divergent concepts, though "relationship" has enough interfaces with "honesty" that coming up with a better-than-random (or better-than-tangerine) interpretation does not seem difficult.

Comment by t3t on Meta-discussion from "Circling as Cousin to Rationality" · 2020-01-02T02:27:17.614Z · score: 27 (6 votes) · LW · GW

I think I can take a stab at this; timing myself out of curiosity.

@Said: let me draw an analogy to a fictional online interaction (without implying that the comment that started all of this is analogous in *all* relevant ways to the fictional one):

Author Andy: "...a destructive mode of communication."

Commenter Cody: "What do you mean by destructive in that context?"

If Andy had written something like "a tangerine mode of communication," it would be understandable if Cody (and most other readers) had *literally* no referents for "tangerine" which would cause that sentence to parse at all. If Andy had instead written something like "...a mode of communication that harms the ability of conversational participants to reach agreement on the definition of terms [x, y, and z]," and Cody asked what "harms" meant in that context, then as an outsider it would be very difficult to understand where the communication had broken down, because "harms" is a widely-used term with referents that map relatively cleanly to the concepts at play, even if it is not the most common use for the term.

"Destructive" is a more interesting case, because it is rarely used as a modifier to "mode of communication", but if Cody were to claim that there was no "plausible interpretation" or "standard usage" he could assume, it would be difficult to understand how to help him construct the mental machinery to map the dictionary definition (as, for example, a "standard usage") of "destructive" as an adjective onto another concept. "Destructive" has a widely-known and well-accepted definition, and while Cody is not claiming that he does not know that definition (or any others), he is claiming that *none* of the definitions he knows produce coherent output when used to modify "mode of communication".

This is what this looks like, from the outside. You are claiming that you have no referents for "authentic" which produce a coherent-in-context (note: no claim about whether it is justified) interpretation for the given sentence(s). Authentic has a dictionary definition of "genuine"; if we replace "authenticity" with "being genuine" in

Similarly, why should “that which can be destroyed by authenticity” be destroyed? Because authenticity is fundamentally more real and valuable than what it replaces, which must be implemented on a deeper level than “what my current beliefs think.”

...it seems to be a coherent claim (though, again, no claim on whether it is sufficiently justified). If you have the same problem with "genuine", then perform another substitution: "truthful self-representation" (that substitution tied together the external context with the modifier, which is maybe a sign that it's a clearer way of communicating the mapping? Need to think about that...).

It is difficult to understand what kind of answer you are looking for when you ask "what is the standard usage of authenticity", because this is a query that is trivially resolved by a dictionary lookup/google search. If the answer that procedure provides is insufficient to provide a mapping to the broader context the term is used in, then repeating it back to you won't help; it's clear that your confusion is elsewhere (this is gesturing in the direction of a definition for "shape of confusion"). If you don't see any way in which any plausible definition/referent for "authentic", set in that context, allows you to generate expectations from the resulting sentence(s) (for example, being able to come up with hypothetical situations which would *not* be accurately described as such), then there's either incompatible mental machinery, or a more subtle misunderstanding.

I don't think that's the case, though. I believe you know (or could look up) the definition of "authentic", and I believe that if you ran the iterated procedure of substituting synonyms (or sufficiently close referents in concept-space, accounting for the surrounding context), you would quickly find an interpolation that was sufficiently coherent. It is possible that you ran this procedure and decided that the predictions that could be generated by the result were very "fuzzy" (the distribution of possible expectations would be extremely wide; you would have trouble cleaving reality at a sensible set of joints). If so, this is the point where I would describe to the author of the original post what my interpretation of the claim was, with some hint as to the shape of the distribution of expectations my interpretation would generate, so that the author could help me narrow the boundaries of that distribution (or point me to another spot on the map entirely, if my interpretation was completely wrong rather than insufficiently well-specified).

...almost an hour, and I don't think I did a great job, but maybe this crosses some inferential distance.

Comment by t3t on Pricing externalities is not necessarily economically efficient · 2019-11-09T20:30:53.488Z · score: 2 (2 votes) · LW · GW

An interesting implication, taking the high-level proposition at face value, is that one would expect to see a lot of behavior (more than one might naively expect) with negative externalities whose costs fall below the transaction costs that would be required to compensate those affected.

Comment by t3t on Realism and Rationality · 2019-09-16T06:07:44.929Z · score: 7 (4 votes) · LW · GW

Seconding this - my strong impression is that a substantial percentage of the rationality community rejects moral realism, not normative realism (as you say - what would the point of anything be?).

I'm curious where this impression came from. The only place I can imagine anything similar to an argument against normative realism cropping up would be in a discussion of the problem of induction, which hasn't seen serious debate around here for many years.

Comment by t3t on Am I going for a job interview with a woo pusher? · 2019-08-25T16:55:18.340Z · score: 3 (3 votes) · LW · GW

Evidence for its effectiveness seems to be limited or nonexistent: https://www.ncbi.nlm.nih.gov/books/NBK253825/

Based on the provider's website, I'd be skeptical that they're taking a rigorous approach (even controlling for the shaky fundamentals), given the appearance of a "shotgun" approach, where they seem to have basically targeted a large number of common, disparate conditions with one treatment.

Comment by t3t on Blackmail · 2019-02-20T07:02:09.766Z · score: 8 (5 votes) · LW · GW

You do gesture at it with "maximum amount of harm", but the specific framing I don't quite see expressed here is this:

While a blackmailer may be revealing something "true", the net effect (even if not "maximized" by the blackmailer) is often disproportionate to what one might desire. To give an example, a blackmailer may threaten to reveal that their target has a non-standard sexual orientation. In many parts of the world, the harm caused by this is considerably greater than the (utilitarian) "optimal" amount - in this case, zero. This is a function of not only the blackmailer's attempt at optimizing their long-term strategy, but also of how people/society react to certain kinds of information. Unfortunately this is mostly an object-level argument (that society reacts inappropriately in predictable ways to some things), but it seems relevant.

Comment by t3t on Pedagogy as Struggle · 2019-02-16T03:45:23.862Z · score: 6 (4 votes) · LW · GW

This brings up the question of what you're trying to optimize for when teaching; in particular, which segment of the student population are you trying to teach best? If the median, then this strategy will at best be useless and at worst actively harm their learning. If the top percentile, then it may very well produce better outcomes than a more straightforward approach. But it does seem to be the case that there's a trade-off.

Comment by t3t on Minimize Use of Standard Internet Food Delivery · 2019-02-12T07:12:57.294Z · score: 1 (1 votes) · LW · GW

Grubhub also exclusively uses its own drivers. See my response to Said: https://www.lesswrong.com/posts/z9hqPS6NNdNYLYunT/minimize-use-of-standard-internet-food-delivery#XRNiX7GgZ7pF6HD5Y

Comment by t3t on Minimize Use of Standard Internet Food Delivery · 2019-02-12T07:12:22.843Z · score: 4 (3 votes) · LW · GW

Here is a neutral (from the perspective of potential competition) source, that quotes industry insiders: https://nypost.com/2016/02/06/tech-giants-start-getting-serious-about-food-delivery/

I agree that delivery services provide significant value to the consumer for the reasons you describe. I suspect that in the situation where a specific class of restaurant (pizza places) already have their own delivery network in place (fixed costs already paid, domain-specific efficiencies already captured), a bare-bones online order system could easily beat out a full-service middleman like UberEats or Grubhub.

Comment by t3t on Minimize Use of Standard Internet Food Delivery · 2019-02-11T17:36:44.097Z · score: 8 (6 votes) · LW · GW

In fact for some services it's 30%: https://get.chownow.com/blog/restaurant-delivery-killing-restaurants

I only learned about this a few days ago, and (bizarrely) thought it was only UberEats that had such a high fee schedule.

Comment by t3t on X-risks are a tragedies of the commons · 2019-02-07T06:32:25.667Z · score: 4 (3 votes) · LW · GW

I think there's an important distinction between x-risks and most other things we consider to be tragedies of the commons: the reward for "cooperating" against "defectors" in an x-risk scenario (putting in disproportionate effort/resources to solve the problem) is still massively positive, conditional on the effort succeeding (and in many calculations, even prior to that conditional). In most central examples of tragedies of the commons, the payoff for being a "good actor" surrounded by bad actors is net-negative, even assuming the stewardship is successful.

The common thread is that there might be a free-rider problem in both cases, of course.

Comment by t3t on Playing Politics · 2018-12-05T06:06:10.816Z · score: 4 (4 votes) · LW · GW

I want to signal-boost this harder than just upvoting it, because a couple examples could have been pulled directly from my life.

It should also be noted that I haven't experienced anybody getting upset about somebody taking charge of organizing something after it's been (unsuccessfully) opened to group coordination. I notice that when I'm on the other side of that equation, I'm mostly just grateful that somebody else is doing the work of organizing/coordinating things.

Comment by t3t on Anyone use the "read time" on Post Items? · 2018-12-02T06:07:36.861Z · score: 4 (3 votes) · LW · GW

Sorry for not specifying - if you hover over the bottom half of the link to a post, i.e. the part that shows the username, points, time since post submission, and read time, it will display "Show Highlight". Clicking on any part of the bottom half except the username will expand the item to show a section of the post, along with "Collapse" and "Continue to Full Post (59 words)" options (the word count will vary; I used the one for this post as an example).

Comment by t3t on Anyone use the "read time" on Post Items? · 2018-12-02T00:54:26.105Z · score: 5 (3 votes) · LW · GW

I occasionally use it to gauge approximate post length, since seeing the word count requires UI interaction. I would rather have the word count be immediately visible, but I probably wouldn't miss "read time" if it was gone entirely either.

Comment by t3t on Paul's research agenda FAQ · 2018-07-02T04:50:02.574Z · score: 23 (12 votes) · LW · GW

Meta-comment:

It's difficult to tell, having spent some time (but not a very large amount of time) following this back-and-forth, whether much progress is being made in furthering Eliezer's and Paul's understanding of each other's positions and arguments. My impression is that there has been some progress, mostly from Paul vetoing Eliezer's interpretations of Paul's agenda, but by nature this is a slow kind of progress - there are likely many more substantially incorrect interpretations than substantially correct ones, so even if you assume progress toward a correct interpretation to be considerably faster than what might be predicted by a random walk, the slow feedback cycle still means it will take a while.

My question is why the two of you haven't sat down for a weekend (or as many as necessary) to hash out the cruxes and whatever confusion surrounds them. This seems to be a very high-value course of action: if, upon reaching a correct understanding of Paul's position, Eliezer updates in that direction, it's important for that to happen as soon as possible. Likewise, if Eliezer manages to convince Paul of catastrophic flaws in his agenda, that may be even more important.

Comment by t3t on You Are Being Underpaid · 2018-04-19T22:15:42.278Z · score: 4 (1 votes) · LW · GW

From talking to some people in the UK, my impression is that pay is considerably lower (by 50% or more!), but I don't know what interviewing is like. I'll see if I can get some info on that.

Comment by t3t on You Are Being Underpaid · 2018-04-19T19:03:58.454Z · score: 4 (1 votes) · LW · GW

Taking Google as an example, that is what they want at entry-level. If you're more experienced, my impression is that you still get run through the same gauntlet, but then you also get interviewed by a few different teams for more specific skill sets (e.g. mobile teams will want actual mobile experience, etc).

Keep in mind "data structures and algorithms" is underselling it a bit - you need to know well beyond what you typically cover in an introductory algorithms course.

Comment by t3t on You Are Being Underpaid · 2018-04-17T02:23:51.756Z · score: 4 (1 votes) · LW · GW

Because I'm not sure what the motivations behind asking trivia questions are, I don't know for sure how your answer would be perceived. That is likely how I would answer a question about an API I wasn't familiar with, though filters are more of a structural aspect of .NET MVC than an API (though it's still all functions at the bottom). Not knowing an important structural aspect of a framework you claim to be proficient in can be a red flag - though in my case I knew what they were, but did not know what they were called. (I looked them up after the first interview where I was asked about them, which was a good thing, because I was asked about them again in my last interview.) Another good lesson!

I agree that making the interview pleasant for the interviewer is a good idea. It does seem like a "too obvious to be said" sort of thing, which probably means it needs to be said more often. The question that follows is how to do that, especially if you don't have an instinct for it.

I've also read the advice to practice answering questions on a whiteboard. It's good advice, but in the interview that got me hired I didn't actually do any whiteboarding, so I didn't think to list it.

Thanks!

Comment by t3t on You Are Being Underpaid · 2018-04-16T21:18:43.644Z · score: 4 (1 votes) · LW · GW

In fact, all of my jobs (3 in total) until the current one had placed very lenient demands on my time. I think it's more of a management/operational issue, though. While I won't deny that I can solve some problems fast, most of the downtime was from an inefficient work pipeline.

Comment by t3t on You Are Being Underpaid · 2018-04-15T20:57:50.119Z · score: 3 (1 votes) · LW · GW

Do you have 10-15 hours a week to spend writing code? It's likely possible to frame your absence from the job market in a way which doesn't hurt your prospects too much. Feel free to DM me if you want to talk more.

Comment by t3t on One-Year Anniversary Retrospective - Los Angeles · 2018-04-15T04:56:09.233Z · score: 3 (1 votes) · LW · GW

Sorry for the delayed response - for some reason I never got any notifications about comments on this post.

We never had a discussion about the schedule when the reboot happened, mostly because "weekly" was the way we'd always done it and nobody seemed interested in changing it. Yes, it was explicitly weekly. The "core" members had known each other for anywhere from 3-5 years, but that was mostly in the context of the meetup (with a few exceptions). That's changed significantly - we (including the newer members) spend much more time together socially outside of the context of the meetups now than we used to.

Comment by t3t on One-Year Anniversary Retrospective - Los Angeles · 2018-04-01T06:44:42.892Z · score: 17 (4 votes) · LW · GW

Thanks - I remember finding your post interesting the first time I read it. This time I put it in Evernote so that I actually remember to try some things out.

Comment by t3t on Los Angeles LW/SSC Meetup #50 - Cognitive Bias Round-Robin · 2018-03-12T23:30:23.400Z · score: 2 (1 votes) · LW · GW

Thanks, I'll keep that in mind for future events.

Comment by t3t on Los Angeles LW/SSC Meetup #50 - Cognitive Bias Round-Robin · 2018-03-12T06:09:50.498Z · score: 2 (1 votes) · LW · GW

Yep, group is here: https://www.lesserwrong.com/groups/GSN7BypgiJcjEiRRS

Is it not showing up?

(On a related note: I ran into an error when trying to create the group. I had pasted an address into the "location" field, and didn't realize until I tried to create an event instead that the field requires you to type the address from scratch and let it autocomplete. The error consisted of these lines:

  • Mongo location is required.
  • Group Location is required.
  • Location is required.
  • Schema validation error

Suffice it to say, it was not terribly clear why it wasn't accepting the location at first.)

Comment by t3t on The sad state of Rationality Zürich - Effective Altruism Zürich included · 2018-02-28T05:25:39.974Z · score: 7 (3 votes) · LW · GW

I'm the Los Angeles organizer and can confirm that Elo seems to be fairly put together, as these things go (though we only met for a few hours).

Comment by t3t on Mana · 2017-12-20T19:24:25.230Z · score: 7 (2 votes) · LW · GW

I found some of this difficult but not impossible to understand, without any prior context. Of course, it's possible that I'm wrong about my level of understanding, in which case I'd prefer to be corrected.

Here is my understanding of the relevant details.

Erfeyah is confused by what you mean by mana, and more specifically what it meant to "apply" mana to a rental car company employee. If my understanding is correct, this was a process of using emotional support techniques, as you describe them (or perhaps the opposite - introducing a "hostile social reality", i.e. dark arts - this is something I'm not clear on), in order to accomplish your goals.

I'm also not sure what this means:

"You must sever your nerve cords. The Khala is corrupted"

Unless it means disconnecting your perception of social reality from your goal structure, while maintaining a surface-level awareness.

Comment by t3t on Placing Yourself as an Instance of a Class · 2017-10-04T03:49:49.209Z · score: 5 (1 votes) · LW · GW

To extend the programming metaphor a bit:

Agents who understand and explicitly use a decision theory along the lines of TDT/FDT may be said to be implementing an interface, which consequently modifies the expected outcomes of their decision process. This is important in situations like deciding how to vote, or even whether you should do so, because you can estimate that most agents in the problem-space will not be implementing that particular interface, so your decision will only be entangled with the decisions of the limited set of agents who do implement it.
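To make the metaphor concrete, here is a minimal, purely illustrative sketch in Python (the class names and population numbers are invented for the example; this is not a real decision-theory implementation): agents that implement a shared "interface" decide as a correlated block, so a voter using it should only count the other implementers as moving together with their own decision.

```python
# Toy illustration of the "implementing an interface" metaphor (names made up).
# Agents implementing CorrelatedDecider decide as a block: a TDT/FDT-style
# voter's choice is only "entangled" with the choices of the other agents that
# also implement the interface, not with the whole population.
from abc import ABC, abstractmethod
import random

class CorrelatedDecider(ABC):
    @abstractmethod
    def decide(self) -> bool:
        """Return True to vote, deciding as if choosing for all implementers."""

class TDTVoter(CorrelatedDecider):
    def decide(self) -> bool:
        return True  # "If agents like me vote, we swing the outcome, so vote."

class CausalVoter:
    def decide(self) -> bool:
        return random.random() < 0.5  # Each decides independently.

population = [TDTVoter() for _ in range(100)] + [CausalVoter() for _ in range(900)]
correlated_votes = sum(a.decide() for a in population
                       if isinstance(a, CorrelatedDecider))
print(f"Votes that move together with yours: {correlated_votes}")
```

The structural point is just that membership in the interface, not membership in the population at large, determines which decisions move together with yours.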

Comment by T3t on [deleted post] 2016-12-13T08:52:05.038Z

I don't see what this has to do with rationality, or any other core interest of LW. This seems to be a fairly prototypical example of the genre, so I don't even see what kind of useful analysis you can perform on it. Maybe try /pol/?

Comment by t3t on Open thread, Nov. 16 - Nov. 22, 2015 · 2015-11-17T01:03:31.877Z · score: 1 (1 votes) · LW · GW

I was able to use Square to transfer money from a pre-paid gift card (not sure if it was Visa though) to my bank account. Transaction fee is ~2.75% iirc.

Comment by t3t on Open thread, Nov. 16 - Nov. 22, 2015 · 2015-11-16T23:39:13.951Z · score: 1 (1 votes) · LW · GW

Has anybody donated a car to charity before (in the US - CA in particular, though I imagine the advice will generalize beyond location-specific charities)?

The general advice online is useful but not very narrowly tailored. A couple of points I'm looking for information on:

1) Good charities (from an EA perspective)

2) Clarification on the tax details (when the car's fair market value is between $500 and $5000)

Would appreciate any advice.

Comment by t3t on A Proposal for Defeating Moloch in the Prison Industrial Complex · 2015-06-02T23:41:20.599Z · score: 2 (2 votes) · LW · GW

Missing actor/incentive structure:

Our current justice system is largely based on the idea of retribution, not rehabilitation. This is a trade-off where the State delivers vengeance for victims/families of victims to prevent vigilante justice. It may not make much sense in terms of impact today, but as a cultural norm it still exists and this idea does nothing to address that.

Other thoughts:

Does not really address "recidivism" of victimless crimes, including most drug crimes, except in the most general sense. Convincing people that smoking weed is morally wrong is much harder than convincing them that murder is morally wrong.

Comment by t3t on How to save (a lot of) money on flying · 2015-02-03T23:26:22.087Z · score: 4 (4 votes) · LW · GW

This is not a secret anymore, and the attention I bring to the issue by posting it on LessWrong is pretty marginal. The fact that there's already been a lawsuit over this is an indication that the airlines think it's cheaper to try to suppress it that way than to change their pricing structure.

Comment by t3t on How to save (a lot of) money on flying · 2015-02-03T19:56:56.272Z · score: 0 (0 votes) · LW · GW

I doubt it - this is a trick that high-volume fliers have been using for a while. That said, airlines being annoyed by it is a reasonable concern, though I don't know what they could possibly do about it - forbid you from flying with them? That seems like the sort of thing that would get attention.

Edit: see new posted warnings.

Comment by t3t on Memes and Rational Decisions · 2015-01-09T08:11:06.774Z · score: 7 (7 votes) · LW · GW

How should I contact Vassar regarding my willingness to follow his lead regarding whatever projects he deems sensible?