Comment by dagon on Slack Club · 2019-04-19T18:50:40.391Z · score: 10 (2 votes) · LW · GW

(note: I may be part of the problem - I consider myself a subscriber to and student of the rationalist philosophy, but not necessarily a member of whatever is meant by "rationalist community". I don't know if your definition includes me or not.)

This topic might benefit from some benchmarking and comparison with other "communities". Which ones seem more effective than rationalists? Which ones seem less? I've been involved with a lot of physical community groups (homeowner associations, local charities, etc.), and in almost no case would I say the community is effective on its own - some have very effective leaders who manage to get a lot of impact out of the community.

Comment by dagon on Criticizing Critics of Structural-Functionalism · 2019-04-18T22:19:41.251Z · score: 2 (1 votes) · LW · GW

I don't have enough knowledge of the theory or the criticism to have any idea if your defense is appropriate to the attack. Looking at https://en.wikipedia.org/wiki/Structural_functionalism, I think my concerns about it would revolve around why "solidarity and stability" is used as the functional target, rather than something more fundamental like "individual competitive advantage". My worry would be that structural functionalism seeks explanations on the wrong level for the evidence it uses, much like "functional biology" could be led astray with evolutionary just-so stories.

Comment by dagon on Where to Draw the Boundaries? · 2019-04-18T18:24:41.873Z · score: 2 (1 votes) · LW · GW

Accents are a good example. It's easy to offend someone or to make incorrect predictions based on "has a British accent", when you really only know some patterns of pronunciation. In some contexts, that's a fine compression; way easier to process, communicate and remember. In other contexts, you're better off highlighting and acknowledging that your data supports many interpretations, and you should preserve that uncertainty in your communication and predictions.

"casual" vs "precise" are themselves lossy compression of fuzzy concepts, and what I really mean is that the use of compression is valid and helpful sometimes, and harmful and misleading at other times. My point is that the distinction is _NOT_ primarily about how tight the cluster or how close the match to some dimensions of reality in the abstract. The acceptability of the compression is about context and uses for the compressed or less-compressed information, and whether the lost details are important for the purpose of the communication or prediction. It's whether it meets the needs of the model, not how close it is to "reality".

Note also that I recognize that no model and no communication is actually full-fidelity. Everything any agent knows is compressed and simplified from reality. The question is how much further compression is valuable for what purposes.

Essentialism is wrong. Conceptual compression and simplified modeling is always necessary, and sometimes even an extreme compaction is good enough for a purpose.

Comment by dagon on Liar Paradox Revisited · 2019-04-17T21:35:43.079Z · score: 4 (2 votes) · LW · GW

Consider patching by tabooing "truth". Declarative sentences don't actually have truth value, except in the sense where "truth" is a handwave toward conveying information which allows an update of the receiver's beliefs. The improvement of predictions enabled by this update is sometimes referred to as "truth".

Don't get me wrong - it's a very useful shorthand, and in many many cases you don't need to expand it. But in the adversarial case where statements are picked to break the normal use of "truth", the right response is to abandon the simple concept for those cases.

Comment by dagon on Robin Hanson on Simple, Evidence Backed Models · 2019-04-17T20:28:39.554Z · score: 2 (1 votes) · LW · GW

Quite agree - depending on how you aggregate individual values and weigh the adversarial motives, it's quite possible that "we" are often worse off with secrets. It's not clear whether or when that's the case from the "simple model" argument, though.

And certainly there are cases where unilateral revelations while others retain privacy are harmful. Anytime you'd like to play poker where your cards are face-up and mine are known only to me, let me know.

I would love to explore whether private information is similar to other capital, where overall welfare can be improved by redistribution, but only under certain assumptions of growth, aggregation and individual benefits canceling out others' harms.

Comment by dagon on Robin Hanson on Simple, Evidence Backed Models · 2019-04-17T16:30:10.601Z · score: 3 (2 votes) · LW · GW

In the simple model that information is used only to make better predictions, more (correct) information is better. Models that add the complexity of adversarial motives generally do show that secrecy has value.

Comment by dagon on Complex value & situational awareness · 2019-04-16T23:48:55.200Z · score: 4 (3 votes) · LW · GW

Both are highly valued by many organizations, but incredibly hard to measure or see, so they don't make very good primary job descriptions. You usually need to produce more conventional work product for quite some time before your contributions on these dimensions can be trusted enough for it to be most of your energy.

Comment by dagon on Agency and Sphexishness: A Second Glance · 2019-04-16T21:24:21.086Z · score: 3 (2 votes) · LW · GW

Thank you for bringing this up - it's a comparison that doesn't resonate with me. I suspect that "sphexishness" is a different modeling layer than "agency", so a direct comparison is confusing. More importantly, it's assumed without explanation that one is bad and one is good.

For some reason, nobody's talking about the amazing success of the Sphex wasps, and looking for ways to ensure successful behavior without everyone having to model reality individually. And we don't talk (much) about the horror of bad choices, and how all suffering is caused by agency.

Comment by dagon on Where to Draw the Boundaries? · 2019-04-15T17:27:07.739Z · score: 2 (1 votes) · LW · GW

Sure, casual use of categories is convenient and pretty good for a lot of purposes. For unimportant cases (including cases where the exceptions don't come into play, like sailors calling dolphins "fish"), go for it. Use whatever words minimize the cognitive load on your conversational partners and allow them to best navigate the world they're in.

Where precision matters, though, you're better off using more words. Don't try to cram so much inferential power into a categorization that's not a good fit for the domain of predictions you're making.

And because these are different needs, be aware that different weights and rigor will be applied. If someone is casually using a category "wrong", you have to decide if the exceptions matter enough to point them out (that is, use more words to get more precision), or if they're just optimizing for brevity on a different set of dimensions than you prefer. Worse, they (and you!) may not fully know what dimensions are important, so your compression may be more wrong than the one you're trying to improve.

Comment by dagon on Scrying for outcomes where the problem of deepfakes has been solved · 2019-04-15T16:31:16.112Z · score: 5 (3 votes) · LW · GW

There's no reason to trust manufacturers or governments in this. There's already plenty of fakery in text and photographic reporting, and political topics are some of the worst cases for it. Basically, these are human reliability problems - why would you expect better for video technology, and why would you expect human institutions to solve them now, when they haven't for hundreds of years?

The only "solutions" (really more "viable responses" than "solutions") I see are:

  • More cameras, more sources, less coordination. If an event is seen from 100 angles on 100 devices, it is going to be very hard to suppress the real images, even if some of them are fake. (but remember that even blockchains are subject to majority attack).
  • Conditional trust. Honestly, this is all we have today - some sources are careful on some topics, and they do the work to ensure their sources are also trusted (or explicitly tell you that their sources are unreliable).
  • Human attestation. Legally-binding statements (including "this video matches what I saw") remain as valuable and trustworthy as ever.

Comment by dagon on Where to Draw the Boundaries? · 2019-04-15T15:57:39.696Z · score: 2 (3 votes) · LW · GW

I worry that we're spending a LOT of energy on trying to "carve at the joints" of something that has no joints, or is so deep that the joints don't exist in the dimensions we perceive. Categories, like all models, can be better or worse for a given purpose, but they're never actually right.

The key to this is "for a purpose". Models are useful for predictions of something, and sometimes for shorthand communication of some kinds of similarity.

Don't ask whether dolphins are fish. Don't believe or imply that category is identity. Ask whether this creature needs air. Ask how fast it swims. etc. When talking with people of similar background and shared context, call it a fish or an aquatic mammal, depending on what you want to communicate.

Comment by dagon on The Cacophony Hypothesis: Simulation (If It is Possible At All) Cannot Call New Consciousnesses Into Existence · 2019-04-14T23:09:13.202Z · score: 4 (2 votes) · LW · GW

This seems to hinge on something you haven't defined or attempted to measure: consciousness. You've left out some possibilities:

  • perhaps nothing is conscious.
  • perhaps only you are conscious and the rest of who you perceive are automata.
  • perhaps everything is conscious. every calculation does indeed feel.
  • perhaps consciousness and qualia aren't even close to what we've considered.

Basically, why would you expect that ANY consciousness exists, if a simulation/calculation doesn't have it? All the creatures you see are just neural calculations/reactions, aren't they? I'll grant that you may have special knowledge that you yourself exist, but what makes you believe anyone else does?

Comment by dagon on Excerpts from a larger discussion about simulacra · 2019-04-11T20:45:14.865Z · score: 7 (3 votes) · LW · GW

I love this example, because the stages are decoupled from truth. Stage 1 is "everyone is wrong, but has beliefs which make cooperation easier", and stage 4 is "people have more accurate beliefs, but the social cohesion is weaker".

Comment by dagon on Excerpts from a larger discussion about simulacra · 2019-04-11T16:38:16.472Z · score: 2 (1 votes) · LW · GW

I think I'd filter my "technical" requirement a bit further. Not "only possible in technical domains", but "only possible for those parts of technical domains for which jargon and terms of art have been developed and accepted". Technical domains that are changing or being explored require a lot of words and interactive probing before any sort of terse communication is possible.

Even armies and trained emergency workers are very limited in the types of information they can transfer quickly and correctly, and that's AFTER a whole lot of training and preparation so that most commands are subroutine triggers, not idea transfers.

I sympathize with the desire to "make important domains technical", but I suspect it's a mix of levels that is ultimately incoherent. In domains where there is a natural feedback loop to precision, it'll happen by itself. In domains where the feedback loops _don't_ favor precision and territory-matching, it won't and can't. One could claim that is the difference between an "important" domain and one that isn't, but one would be falling for the very same problem we're discussing: the word "important" doesn't mean the same thing to each of us.

Note that small groups of shared-context individuals _CAN_ have technical discussions on topics that are otherwise imprecise and socially constructed. It's just impossible for larger or more heterogeneous groups to do so.

Comment by dagon on Excerpts from a larger discussion about simulacra · 2019-04-11T16:04:05.250Z · score: 2 (1 votes) · LW · GW
"No single word carries much weight and what matters is how they behave and what they can get done" is really not game-1. Game-1 is all about efficient denotative communication so that you don't have to personally inspect what's going on, and can use the map instead of directly inspecting the territory.

Wow. I really missed that. I suspect that's because I don't see how anyone can claim that sort of game-1 is possible outside of technical topics (which STILL take many thousands of words to communicate concepts) or very small groups of high-trust shared-context participants (where the thousands of words are implicit). I guess I start in game-2, and I don't see much difference between games 2-4.

Language, and especially common short words and phrases, just doesn't carry that kind of precision. More generally, language is just as subject to Goodhart's Law as any other knowledge proxy.

Comment by dagon on Excerpts from a larger discussion about simulacra · 2019-04-11T14:24:01.347Z · score: 4 (2 votes) · LW · GW

Thanks for the specificity! These examples ("director" being used for prestige, without any connection to actual effort/impact/power) are good ones to explore how context plays into things.

There exist idiots who will take the introduction at face value. There exist particularly insane organizations who will only accept contracts signed by a director, and for those, one kind of has to play the game. This isn't universal, or even common - SIMULTANEOUSLY, anyone who talks to or works with these directors will understand something closer to the truth.

These examples are none of the listed worlds - they have superficial elements of world 3 or 4 for the title of "director", but nobody seriously cares about that. It's world 1 for the interactions between people, where no single word carries much weight and what matters is how they behave and what they can get done.

The examples are also nowhere near universal (they're not distinct "worlds", they're examples that the world is diverse in use of words). They don't remove any of the weight from someone saying "I'm the director of a 150-person research group at X fortune-500 company".

Comment by dagon on Excerpts from a larger discussion about simulacra · 2019-04-10T23:28:13.985Z · score: 4 (2 votes) · LW · GW

I very much like that this topic is being explored, but I fear you're on the wrong track in thinking that these worlds are distinct. Jessica doesn't go far enough in the critique that this is assuming uniformity of use and knowledge of such. In fact, all these worlds are simultaneously overlaid on one another, among different people and often among different parts of the same conversation. Sometimes people are aware of the ambiguity or outright misleading use of words, sometimes they're not, and sometimes they think they are but it still has emotional impact. And we should probably add world 0: brutal honesty where titles are conservative estimates of value rather than broad categories of job, and world -1 where labels don't exist and people are referred to by the sum total of what they've done in their life.

It should be clearer that language is _ALWAYS_ a mix of cooperative and adversarial games. Everyone is trying to put ideas into each other's heads that benefit the speaker. Some of them also benefit the listener, and that's great. But it's impossible to separate from those times when the goals diverge.

On the object level of your example, I do a fair bit of interviewing, and I guarantee you that competent recruiters know what different companies' titles generally mean, and even then take them with a grain of salt. Competent hiring managers focus on impact and capability in interviews, not on titles. Agreed that titles and self-described resume entries carry a lot of weight in getting and framing the interview. But outright lies won't get you anything, even if a small amount of puffery likely gets you a small improvement in success rate and initial offer.

Comment by dagon on "Intelligence is impossible without emotion" — Yann LeCun · 2019-04-10T19:38:30.263Z · score: 9 (6 votes) · LW · GW

Is there a transcript or a summary available? I can't stand videos (except sometimes as auxiliary to written information).

Comment by dagon on Ideas ahead of their time · 2019-04-03T23:33:02.265Z · score: 5 (4 votes) · LW · GW

I don't think I would, unless the comment stream comes up with some really great things. It's a fine prompt for thinking outside the box, but it completely misses the mark on the way ideas and truth actually work, and would benefit from a read of https://www.lesswrong.com/posts/a7n8GdKiAZRX86T5A/making-beliefs-pay-rent-in-anticipated-experiences.

In the ancient world, some people _DID_ wonder what the sun and moon looked like from very far away. Some of them actually _WENT_ very far away and looked (and saw minimal difference, but did do some clever calculations to measure shadows and times to figure out how far "very far away" was). Even if someone HAD postulated that there existed a distance so great that the sun would look like a point, and that our stars might be suns to them, they wouldn't be "right" in any useful sense of the word. There are no predictions or behavior changes to make based on that hypothesis.

I'd argue that we _do_ have a start at some ideas that might pan out in the same way (good models for questions we can't yet ask) - simulation argument, quantum immortality, etc. - and the big problem isn't finding more ideas, but in deciding which ones are worth giving up immediate resources to pursue sooner.

edit: this came out way more negative than I intended. I like the topic, and even though I'm skeptical that we'll identify any novel ideas or ways to evaluate them, I do hope that I'm wrong.

Comment by dagon on What are the advantages and disadvantages of knowing your own IQ? · 2019-04-03T19:55:04.285Z · score: 11 (7 votes) · LW · GW

Eh. It's hard for me to argue that knowing the truth of something is a bad thing. The harm comes from believing falsehoods. Knowing the results of one or more IQ tests can be true (you did in fact get that score on that test) and helpful.
Believing that the number on the test is identical to your actual IQ, or that IQ by itself means very much, is incorrect and harmful.

It can help (if high) in inspiring you to study and think about more difficult topics. It can help (high or not) by reminding you that you need to work harder than some on difficult topics. It can help in helping you figure out what personality strengths to develop that complement your pure cognitive strengths. It can help in reminding you that there is _always_ someone smarter, and that most of the time that difference is overrated.

It can harm if you forget that it's a very imperfect measure, if it makes you feel superior, or if it causes you to dismiss others' ideas and opinions. Or if it demotivates you, or makes you think you can't approach difficult topics.

Comment by dagon on [HPMOR] "the Headmaster set fire to a chicken!" · 2019-04-03T18:26:16.802Z · score: 2 (1 votes) · LW · GW

I think Dumbledore is (portrayed as) someone who _does_ strongly believe in roles, tropes, and categories, and who thinks death is a tragic, but necessary and inevitable part of life. He would think it absolutely permissible to set fire to a chicken (magical or normal) if there were some reason (including a reason as vague as "necessary to impress Harry that I'm mysterious").

Comment by dagon on [HPMOR] "the Headmaster set fire to a chicken!" · 2019-04-03T18:11:09.064Z · score: 2 (1 votes) · LW · GW

I'll have to go back and re-read - was it clear that the chicken that burned wasn't actually Fawkes? I took that scene as Harry's interpretation of "normal" phoenix renewal.

As to your questions, I believe the standard non-magical answers apply pretty well:

1. Almost nobody opposes the creation of animals (or people) by any possible means (today that's breeding or cloning), even though they're expected to fade. Why oppose it here?

2. Why is it wrong to burn a real chicken alive? If I thought there was an important lesson to teach a human, I'd do that in a heartbeat. It's a chicken, it has very low moral weight to most people. In fact, I burn chicken often, then eat it (granted, I have someone else kill it and dissect it first, but that's not an important moral distinction IMO).

Comment by dagon on Could waste heat become an environment problem in the future (centuries)? · 2019-04-03T17:42:28.476Z · score: 3 (2 votes) · LW · GW

Geoengineering includes getting better at radiating heat as well as reducing heat received. Superconductor to dark-side cold farms might do the trick. Also, include bioengineering in your list of possible mitigations: if we can live in a very hot environment (or upload/emulate on a more durable substrate), it's less of a problem.

Longer-term, #3 is the only way. Intelligent life needs to go elsewhere when this planet's used up.

Comment by dagon on Degrees of Freedom · 2019-04-03T15:58:53.661Z · score: 4 (2 votes) · LW · GW

It's helpful to keep in mind the human hubris in thinking anyone knows what's optimal for themselves, let alone others. Add in actual individual divergence in goals and beliefs and it's kind of ludicrous to try to make many decisions for others, or to accept others' decisions about your behaviors. Note that policy and rulemaking is always about enforcement/influence on others.

I don't believe it's possible for normal humans to fully distinguish "what's good for my personal indexical experiences" and "what's good for the average or median human". It's _always_ a mix of cooperative and adversarial. I do believe it's possible to acknowledge both motives and to be humble about what limits I'll impose on others. When I talk about "freedom" in that context, this is what it means to me: very minimal human imposition of additional consequences for actions which don't have obvious, immediate harm.

Choosing "optimal for my current beliefs and preferences" vs "what others will judge as optimal for what they think my beliefs and preferences should be" is very different, and I lean toward the former as my definition of "freedom".

cf https://wiki.lesswrong.com/wiki/Other-optimizing

Comment by dagon on User GPT2 is Banned · 2019-04-02T17:54:41.313Z · score: 4 (3 votes) · LW · GW

Is there a writeup (or open source code) for the training and implementation? It would be interesting to personalize it - train based on each user's posts/comments (in addition to high-karma comments from others), and give each of us a taste of our own medicine in replies to our comments/posts.

Comment by dagon on List of Q&A Assumptions and Uncertainties [LW2.0 internal document] · 2019-04-02T04:17:19.295Z · score: 4 (2 votes) · LW · GW

I was mostly hoping for an explanation of why you think compensation and monetary incentives are among the first problems you are considering. A common startup failure mode (and would-be technocrat ineffectual bloviating) is spending a bunch of energy on mechanism and incentive design to handle massive scale, before even doing basic functionality experiments. I hope I'm wrong, and I'd like to know your thinking about why I am.

I may well be over-focused on that aspect of the discussion - feel free to tell me I'm wrong and you're putting most of your thought into mechanisms for tracking, sharing, and breaking down problems into smaller pieces. Or feel free to tell me I'm wrong and incentives are the most important part.

Comment by dagon on User GPT2 Has a Warning for Violating Frontpage Commenting Guidelines · 2019-04-01T23:05:03.880Z · score: 14 (5 votes) · LW · GW

I hope tomorrow (presuming this stops at someone's midnight), we start a topic "best of GPT2", with our favorite snippets of the crazy April Fool spam. There have been some pretty good sentences generated.

Comment by dagon on On the Nature of Agency · 2019-04-01T21:33:35.573Z · score: 4 (3 votes) · LW · GW

Consider the possibility that you're (and many are) conflating multiple distinct things under the term "agency".

1) Moral weight. I'll admit that I used the term "NPC" in my youth, and I regret it now. In fact, everyone has a rich life and their own struggles.

2) Something like "self-actualization", perhaps "growth mindset" or other names for a feeling of empowerment and the belief that one has significant influence over one's future. This is the locus-of-control belief (for the future).

3) Actual exercised influence over one's future. This is the locus-of-control truth (in the past).

4) Useful non-conformity - others' perceptions of unpredictability in desirable dimensions. Simply being weird isn't enough - being successfully weird is necessary.

I'm not sure I agree that "planning" is the key element. I think belief (on the agent's part and in those evaluating agency of others) in locus of control is more important. Planning may make control more effective, but isn't truly necessary to have the control.

I'm not at all sure that these are the same thing. But I do wonder if they're related in the sense that they classify into a cluster in an annoying but strong evolutionary strategy: "ally worth acquiring". Someone powerful enough (or likely to become so) to have an influence on my goals, and at the same time unpredictable enough that I need to spend effort on cultivating the alliance rather than taking it for granted.

Conflating (or even having a strong correlation between) 1 and the others is tricky because considering any significant portion of humanity to be "non-agents" is horrific, but putting effort into coordinating with non-agents is stupid. I suspect the right middle ground is to realize that there's a wide band of potential agency, and humans occupy a narrow part of it. What seems like large variance to us is really pretty trivial.

Comment by dagon on Experimental Open Thread April 2019: Socratic method · 2019-04-01T20:08:02.889Z · score: -1 (5 votes) · LW · GW
Have you noticed when GPT2 started commenting?

Ah. Clever but too much IMO. I hate "social distrust day".

Comment by dagon on List of Q&A Assumptions and Uncertainties [LW2.0 internal document] · 2019-04-01T19:12:28.910Z · score: 2 (1 votes) · LW · GW

Can you make a similar comment (or post) talking about incentive-focused vs communication-structure-focused features in this area? My intuition (less-well-formed than yours seems to be!) is that incentives are fun to work on and interesting to techies, and quite necessary for true scaling to tens of thousands to millions of people. But also that incentives are the smaller barrier to getting started with a shift from small, independent, lightweight interactions (which "compete with insight porn") to larger, more valuable, more durable types of research.

The hard part IMO is in identifying and breaking down problems that CAN be worked on by fungible LWers (smart, interested, but not already invested in such projects). My expectation is that if you can solve that, the money part will be much easier.

Comment by dagon on Experimental Open Thread April 2019: Socratic method · 2019-04-01T16:09:23.102Z · score: 6 (5 votes) · LW · GW

claim: LW commenter GPT2 is a bot that generates remarkably well-formed comments, but devoid of actual thought or meaning. confidence: 20% that it's no or minimal human intervention, 90%+ that it's computer-generated text, but a human might be seeding, selecting, and posting the results.

subclaim: this should be stopped, either by banning/blocking the user, or by allowing readers to block it.

update: based on a comment, I increase my estimate that it's fully automated to 95%+. I look forward to learning what the seed corpus is, and whether it's customized in any way based on comment context.

update 2: previous estimate too high, a wider space of possibilities has been proposed in other threads. My current best guess is that it's a large human-moderated (curated, possibly edited) list of potential comments, being selected and posted automatically. probably only 50% confident of that.

Comment by dagon on Will superintelligent AI be immortal? · 2019-03-31T03:22:30.602Z · score: 2 (3 votes) · LW · GW

I apologize to anyone offended, but I stand by my statement. I do believe that the space of possible minds is bigger than any individual mind can conceive.

Comment by dagon on Will superintelligent AI be immortal? · 2019-03-30T15:35:18.599Z · score: 6 (6 votes) · LW · GW

The space of possible futures is a lot bigger than you think (and bigger than you CAN think). Here are a few possibilities (not representative of any probability distribution, because it's bigger than I can think too). I do tend to favor a mix of the first and last ones in my limited thinking:

  • There's some limit to complexity of computation (perhaps speed of light), and a singleton AI is insufficiently powerful for all the optimizations it wants. It makes new agents, which end up deciding to kill it (value drift or belief drift if they think it less-efficient than a replacement). Repeat with every generation forever.
  • The AI decides that its preferred state of the universe is on track without its interventions, and voluntarily terminates. Some conceptions of a deity are close to this - if the end-goal is human-like agency, make the humans then get out of the way.
  • It turns out optimal to improve the universe by designing and creating a new AI and voluntarily terminating oneself. We get a sequence of ever-improving AIs.
  • Our concept of identity is wrong. It barely applies to humans, and not to AIs at all. The future cognition mass of the universe is constantly cleaving and merging in ways that make counting the number of intelligences meaningless.

The implications that any of these have as to goals (expansion, survival for additional time periods, creation of aligned agents that are better or more far-reaching than you, improvement of local state) are no different from the question of what your personal goals are as a human. Are you seeking immortality, seeking to help your community, seeking to create a better human replacement, seeking to create a better AI replacement, etc.? Both you and the theoretical AI can assign probability*effect weights to all options, and choose accordingly.

Comment by dagon on Parable of the flooding mountain range · 2019-03-30T15:10:24.477Z · score: 4 (3 votes) · LW · GW

The key is that "humanity" doesn't make decisions. Individuals do. The vast majority of individuals care more about themselves than about strangers, or about the statistical future masses. Public debate is mostly about signaling, so will be split between (a) and (b), depending on cultural/political affiliation. Actual behavior is generally selfish, so most will chose (a), maximizing their personal chances.

Comment by dagon on What would you need to be motivated to answer "hard" LW questions? · 2019-03-30T15:00:07.633Z · score: 7 (4 votes) · LW · GW
LessWrong seems basically fine, don't fix what's not broke.

That's not how I'd summarize it. Much credit to you and the team and all the other participants for how well it's doing, but I remember the various ups and downs, and the near-death in the "dark times". I also hope it can be even better, and I don't want to prevent all changes so it stagnates and dies again.

I do fear that a complete pivot (such that monetary prizes are large and common enough that money is a prime motivator) will break it. The previous prizes all seemed small enough that they were basically a bit above the social status of a giant upvote, and I didn't see any day-long efforts from any of the responders. That's very different from what you seem to be considering.

So I support cautious experimentation, and gradual changes. Major experiments (like prizes big enough to motivate day- or week-long efforts) probably should be labeled as experiments and done with current site features, rather than something to invest very much in building. I'm actually more gung-ho than "so let's think about it and figure out how to make it the best version" in many cases - I'd rather go with "let's try it out cheaply and then think about what worked and what didn't". Pick something you'd like to fund (or find someone who has such a topic and the money to back it up), run it in Google Docs, with a link and summary here.

This applies to the more interesting (to me; I recognize that I'm not the only constituent) ideas as well. Finding ways to break problems down into manageable questions, and to link/synthesize the results seems like a HUGE potential, and it can be tested pretty cheaply. Have someone start a "question sequence" - no tech change, just titled as such. The asker seeks input on how to split the problem, as well as on sub-problems.

Really, I don't mean to say "this is horrible, please don't do anything in this cluster of ideas!" I do mean to say "I'm glad you're thinking about the issues, but I see a _LOT_ of risk in introducing monetary incentive where social incentives are far more common. Please tread very carefully."

(Not sure how serious I am about the following - it may just be an appeal to meta) You could use this topic as an experiment. Ruby's posting some documents about Q&A thinking - put together an intro post, and label them all (including this post) "LW Q&A sequence". Ask people how best to gather data and perform experiments along the way.

Comment by Dagon on [deleted post] 2019-03-29T20:36:49.636Z

Is this in response to https://www.lesswrong.com/posts/zEMzFGhRt4jZwyJqt/what-would-you-need-to-be-motivated-to-answer-hard-lw? If so, it might be better as a comment there than as a top-level post, or at least a link to it and a summary stating what the heck you're talking about.

Actually, regardless of what it's about, it would benefit from a more meaningful title (keep the catchphrase in the text, if you like) and a tl;dr so folks can decide whether to invest time in the post.

Comment by dagon on What would you need to be motivated to answer "hard" LW questions? · 2019-03-29T17:46:44.107Z · score: 0 (2 votes) · LW · GW
(but, we now have a track record of occasional bounty posts successfully motivating such work).

Can you elaborate on this? I haven't seen any bounty-driven work adjacent to LW, and I'd like to look at a few successes to help me understand whether adding some of those mechanisms to LW is useful, comparing to adding some LW interactions (ads or links) to those places where bounties are already successful.

I'm much more excited about such a project bootstrapping off LW than trying to start from scratch.

I totally get that, but those aren't the only two options, and that excitement doesn't make it the right choice.

Comment by dagon on Parable of the flooding mountain range · 2019-03-29T17:03:42.321Z · score: 5 (4 votes) · LW · GW

Even if you don't have exact values, it's possible to model the distribution of peak heights and flood depths, to determine how many peaks you'd need to see before reaching a given confidence that you're high enough. And then your search mechanism becomes "don't climb a peak entirely - set a path to see as many peaks as possible before committing to one, then climb the best one you know", or if the flood is slow, you might get stuck on a peak during exploration, so it reduces to https://en.wikipedia.org/wiki/Secretary_problem.
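
(Not part of the original comment, but to make the reduction concrete: a minimal simulation sketch of the standard ~1/e stopping rule applied to randomly drawn "peak heights". The function names and numbers are hypothetical, and it assumes you cannot return to a peak once you've passed it.)

```python
# Hypothetical illustration: the classic secretary-problem stopping rule
# applied to "peak heights". Numbers and helper names are made up for
# this sketch; no going back to an earlier peak is allowed.
import math
import random

def stopping_rule_pick(peaks):
    """Skip the first ~n/e peaks, then commit to the first one that beats them."""
    n = len(peaks)
    cutoff = max(1, int(n / math.e))
    best_seen = max(peaks[:cutoff])
    for height in peaks[cutoff:]:
        if height > best_seen:
            return height  # commit to this peak
    return peaks[-1]       # never beaten: forced to settle for the last peak

def estimate_success_rate(n_peaks=20, trials=10_000):
    """How often does the rule pick the single highest peak?"""
    wins = 0
    for _ in range(trials):
        peaks = [random.random() for _ in range(n_peaks)]
        if stopping_rule_pick(peaks) == max(peaks):
            wins += 1
    return wins / trials

if __name__ == "__main__":
    # With the ~1/e cutoff, the best peak is chosen roughly 37% of the time.
    print(f"P(picked the highest peak) ~= {estimate_success_rate():.2f}")
```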

The question of whether it's better for the entire group to take its best chance on one peak (all live or all die), or whether it's best to spread out, making it almost certain that some will die and others will live, is rather distinct from the best search strategy. I weakly believe that there is no preference aggregation under which it makes sense to treat "group agency" as a distinct thing from a "set of individual agents". So it will depend on the altruism of the individuals whether they want the best chance of individual survival (by following the best searcher) or if they want a lower chance of their own survival to get a higher chance that SOMEONE survives.

Comment by dagon on Please use real names, especially for Alignment Forum? · 2019-03-29T15:37:29.751Z · score: 3 (2 votes) · LW · GW

This seems like an interesting request for a site that explores agent continuity of beliefs and identity. Why is my government name more "real" than my online name? Both are just convenient handles to different (but somewhat overlapping, granted) clusters of behaviors and interactions you might have with aspects of us.

[edit: to clarify, this comment is mostly pointing out an amusing (to me) self-referential topic. I don't intend to use my "real" name here, but I have no objection to others asking for or providing theirs. ]

Comment by dagon on What would you need to be motivated to answer "hard" LW questions? · 2019-03-28T23:42:41.010Z · score: 7 (4 votes) · LW · GW

Thanks. Still triggers my "money would be a de-motivator for what I like about LW" instinct, but I'm glad you're acknowledging that it's only one aspect of the question you're asking.

The relevant questions are "how do you know what things need additional motivation" and "why do you think LW is best suited for it"? For the kind of things you're talking about (summarizing research, things that take "a few days to a few weeks" of "not intrinsically-fun"), I think that matching is more important than motivation. Finding someone with the right skillset and mindset to be ABLE to do the work at an acceptable cost is a bigger filter than motivating someone who just doesn't know it's needed. And I don't think LW is the only place you'd want to advertise such work anyway.

Fortunately, it's easy to test. Don't add any site features, just post a job that you think is typical of what you're thinking. See how people (both applicants and observers) react.

Note that I really _DO_ like your thinking about breaking down into manageable sub-questions and managing inquiries that are bigger than a single post. I'd love to explore that completely separately from motivation and taskrabbit-like knowledge work.

Comment by dagon on What would you need to be motivated to answer "hard" LW questions? · 2019-03-28T21:24:39.813Z · score: 17 (8 votes) · LW · GW

I don't think it's possible on LW. It's not a matter of money (ok, it is, in that I don't think anyone's likely to offer a compelling bounty that I expect to be able to win). It's not a matter of reliability of available offers (except that I don't expect ANY).

It _is_ a question of reliability and trust, though. There are no organizations or people I trust enough to define a task well and make sure multiple people aren't competing in some non-transparent way, such that I'd actually expect to get paid for work posted on a discussion site. And I don't expect that I have enough track record for any bidder to prefer me for the kind of tasks you're talking about at the rates I expect. [edit to add] Nor do I have any tasks where I'd prefer a bounty or open-bid rather than finding a partner/employee and agreeing on specific terms.

It's also a question of what LW is for - posting and discussion of thought-provoking, well-researched, interestingly-modeled, and/or fun ideas is something that's very hard to measure in order to reward monetarily. Also, I'll be massively demotivated by thinking of this as a commercial site, even if I'm only in the free area.

My recommendation would be to use a different place to manage the tasks and the bid/ask process, and the acceptance of work and payment. Some tasks and their outputs might be appropriate to link here, but not the job management.

tl;dr: don't mix money into LW. Social and intellectual rewards are working pretty well, and putting commerce into it could well kill it.

Comment by dagon on What I've Learned From My Parents' Arranged Marriage · 2019-03-28T20:05:03.761Z · score: 7 (3 votes) · LW · GW

Everyone is different, and I'd avoid hyperbole like "if I valued my time at all". I know of a number of 15-year or longer marriages that included long distances for part of the courtship, and sometimes parts of the marriage. On the topic of this post (existence proofs for unconventional courtship success), I got 'em for LDRs.

But you should acknowledge that it's a burden, and both you and your partner will have to work harder to develop and maintain bonds when you're not near each other most of the time. And you should have a pretty good hope that the distance is temporary - I don't know of any successful cases where the couple permanently lives apart.

Comment by dagon on Open Thread March 2019 · 2019-03-28T16:32:20.451Z · score: 2 (1 votes) · LW · GW

Be very aware of https://wiki.lesswrong.com/wiki/Typical_mind_fallacy for discussions about what people do or could find meaning in. I know at least a few hourly retail employees who do get some self-worth from helping customers. It's mixed with drudgery and annoyance, but not completely meaningless.

The good thing about employment is that it's guaranteed that someone (your employer) thinks you're providing value to other humans, and there is (outside of government, or government-sized behemoth organizations) a feedback loop for that to be a true belief. If you weren't providing value, you wouldn't be paid.

That's not true of unpaid work - it's still the case that you can do good and provide value to others, but there's much less feedback about whether and how much.

I predict that there will be no true post-scarcity world. We'll reduce scarcity, we'll make many unrewarding and low-paying jobs unnecessary, so that one can likely live a minimum-wage lifestyle without actually working. But we'll still have a large amount of luxury available only to the lucky and productive (rich), and a larger amount of semi-luxury available only to those who are employed by the rich. In this reduced-scarcity world, those who find meaning in employment can partake, and will enjoy a bit of luxury as (part of) the reward.

Comment by dagon on The Politics of Age (the Young vs. the Old) · 2019-03-27T19:30:20.125Z · score: 2 (1 votes) · LW · GW
we don't know the 1. narcissism or 2. epistemic competence distributions across parents

or across non-parents, or old people, or teenagers, or any other group. If we think we CAN measure them well, we should just measure them and set voting standards for individuals, not age-based demographic groups (though I'd be fine with a combo: everyone can vote between 25 and 65, and anyone who passes the competence/non-narcissism/whatever threshold can vote regardless of age).

SITG-suffrage

Not familiar with the term, and Google doesn't show anything that looks relevant on the first few pages of "SITG" suffrage. I assume this refers to the "landholders are the only ones with standing to care about the land, and they happen to be the rich and powerful" idea. If you don't mean to make a guilt-by-association argument, then please don't do so.

I dispute the assumption that 70-year olds only care about the same things that the previous cohort did, and not about the things they cared about as 60-year-olds. That caricature is at least as bad as saying 16-year-olds care about the same things that all 16-year-olds have cared about forever (sex/freedom/unearned respect/bad music). I'd argue there's more truth in the latter, but not enough truth to make a valid argument.

I'd also like to point out that dotards select themselves out of voting by not having spare energy to participate. The young and stupid/naive have no such selection mechanism.

Comment by dagon on Dependability · 2019-03-27T19:02:27.895Z · score: 3 (2 votes) · LW · GW

Way more complicated path than "school teaches X". School (talking primary and secondary here, not college) teaches basic conformity in a direct way - you avoid punishment by not calling attention to yourself except in approved ways. School also directly teaches a very small set of facts and skills.

School INDIRECTLY teaches a lot more. Or maybe it's better to say that school provides an environment and opportunity for parents and peers to teach/reinforce a whole lot of societal values and skills.

Some will learn to "sit there and do the task even if you don't like/value it", some will learn "do the minimum to not be punished", some will learn that if you get sent to the library for not participating, you can read all day instead of sitting there. Some will even learn to decide what's the best path for themselves, and how to get some value from the routine without letting it crush them.

So, for some (maybe even many), school is an important part of teaching/training reliability. It's wrong to say that "school teaches it", but it's also wrong to imply that school is irrelevant in teaching it.

Comment by dagon on What I've Learned From My Parents' Arranged Marriage · 2019-03-27T18:04:46.873Z · score: 19 (9 votes) · LW · GW

I'd agree that the null hypothesis (most common mechanisms work equally well) probably applies in the marriage game. I don't think Squidious was making a claim that arranged marriages are better (and I note that Squidious isn't using their parents to arrange a mate), just a claim that it can work pretty well.

Also, a less-explicit claim that many western narratives about love and marriage are misleading, in that they focus too strongly on finding a perfect match, and not enough on creating and maintaining a bond with a good-enough match. I agree with this claim, but also agree with MrMind that individual examples are existence proofs that something is possible, but not evidence for how common or available it is.

Comment by dagon on A Tale of Four Moralities · 2019-03-27T16:13:22.559Z · score: 4 (3 votes) · LW · GW

The kids also pretty easily abandon their values (which they're named after). Maxie is sorry, and seems surprised that his actions hurt his friends, rather than defending his choice by saying that the teddies help the other neighborhood more than this one. Ivan gives up his indignation and desire for retribution very quickly as well.

More importantly, nobody is acknowledging that all property is theft, and that the parents have made sacrifices and moral compromises to get the initial teddies, rather than feeding starving people or doing other more useful things that match the goals implied by their children's names. Supporting the horrific conditions in the teddy mines renders the whole parable suspect.

Comment by dagon on The Game Theory of Blackmail · 2019-03-26T14:25:07.468Z · score: 2 (1 votes) · LW · GW
Furthermore, defect-defect is traditionally super bad for both players. But I would not say that this is a necessary condition for something to be a Game of Chicken.

The traditional game of chicken, with cars racing at each other or toward a cliff edge, has likely death in the defect-defect box. If you're considering iterated games, an early death stops the series (and, depending on your modeling of utility, wipes out all prior gains in any other games). I would say this is a necessary condition, and is the primary thing which makes Chicken different from PD.

And this distinction makes modeling it trickier - the game is mostly about the unknown chance that one will be unable to defect when one decides to (due to physical constraints). It's best modeled as a series of decisions, with known ending (death), and increasing chance of accidental defection.
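
(A toy sketch, not from the original comment, of the "series of decisions with increasing chance of accidental defection" framing; the payoff numbers, lock probability, and function names are all made up for illustration.)

```python
# Hypothetical toy model: chicken as a sequence of decisions where the chance
# of being physically unable to swerve grows each round, and mutual failure
# to swerve means a crash. All numbers are invented for the sketch.
import random

CRASH_PAYOFF = -1000   # defect/defect: both drivers crash
SWERVE_PAYOFF = 0      # you swerved: no gain, no loss
WIN_PAYOFF = 10        # the other driver swerved first

def locked_in(round_idx, lock_growth=0.15):
    """Chance of being unable to swerve this round rises as the cars get closer."""
    return random.random() < lock_growth * round_idx

def play_chicken(swerve_round_a, swerve_round_b, max_rounds=6):
    """Each driver intends to swerve at a chosen round, unless locked in by then."""
    for r in range(max_rounds):
        a_swerves = (r >= swerve_round_a) and not locked_in(r)
        b_swerves = (r >= swerve_round_b) and not locked_in(r)
        if a_swerves and b_swerves:
            return SWERVE_PAYOFF, SWERVE_PAYOFF
        if a_swerves:
            return SWERVE_PAYOFF, WIN_PAYOFF
        if b_swerves:
            return WIN_PAYOFF, SWERVE_PAYOFF
    return CRASH_PAYOFF, CRASH_PAYOFF  # neither driver managed to swerve in time

if __name__ == "__main__":
    # Waiting longer to swerve pays off when it works, but the accidental-lock
    # risk compounds every round - which is what distinguishes this from a
    # one-shot Prisoner's Dilemma.
    print(play_chicken(swerve_round_a=3, swerve_round_b=5))
```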

Comment by dagon on Do you like bullet points? · 2019-03-26T14:04:30.209Z · score: 2 (1 votes) · LW · GW

I use bullets almost exclusively when taking notes or writing for myself. When writing for others, I use them as part of a narrative, but rarely the main text. I have gotten feedback that when I over-rely on bullet style lists of points, it's difficult to find a flow in my documents, and I tend to use too much shorthand so some of the points are less compelling than they can be.

Comment by dagon on A Tale of Four Moralities · 2019-03-25T23:16:30.593Z · score: 4 (2 votes) · LW · GW

Meditations on Moloch and the stories in Inadequate Equilibria (and HPMOR, and Luminosity and Friendship is Optimal and pretty much all popular rationalist-centric stories) were written in a way that referenced and built on a whole lot of explicit theory.

And you're right - I don't want to mandate a sequence of publication. Everyone should do what works, and there are probably some story/theory pairs where it works to publish the story first. I can't think of any, and I'd advise having the theory post ready if it turns out it's needed sooner than you thought, but I won't say "never". This didn't work for me, I think because I have pretty serious reservations about the theory (both whether it's appropriate here and about the theory itself), but I can't know whether those concerns are valid or not, as the story is fairly inexplicit.

I think story-first runs the very large risk that people will infer a theory different than you intend, and then downvote you for that (flawed interpretation of your) theory. I may be doing exactly this. It also runs the risk that if the theory has holes, people will retroactively feel tricked by the misleading story, and be angry at your presentation style in addition to respectfully disagreeing with your theory.

Did the recent blackmail discussion change your beliefs?

2019-03-24T16:06:52.811Z · score: 37 (14 votes)