Comment by gworley on One-step hypothetical preferences · 2019-06-17T17:48:09.191Z · score: 4 (2 votes) · LW · GW

I agree that viewing preferences as conditioned on the environment, up to and including the entire history of the observable universe, is a sensible improvement over many more simplistic models that produce clear violations of preference normativity, and that it eliminates many of those violations. My concern is that, given this is not so obvious as to be the normal way of thinking about preferences in all fields, and was nonobvious enough that you had to write a post about the point, I remain cautious about concluding that it makes the value abstraction you use sufficient for the purposes of AI alignment. I basically view conditionality of preferences as neutral evidence about the explanatory power of the theory (for the purpose of AI alignment).

Comment by gworley on Discourse Norms: Moderators Must Not Bully · 2019-06-16T19:58:06.920Z · score: 6 (5 votes) · LW · GW

I suspect most of the challenge with this as a policy is that it's hard and requires empathy. It's fun and easy to keep people out, call them names, say they are dumb, and elevate your own status by pushing others down. It seems to be the default human behavior. Doing otherwise is hard, requires patience and training, and sometimes you still get it wrong.

I agree with the proposal, though, and when I have time I try to help along those who post here who seem new and confused, with gentle words and encouragement towards our norms. It's not that we have to let low quality content pass so much as that we can respond to it in a compassionate and loving way that fosters growth and encouragement, rather than in a spiteful and exclusionary way that discourages growth, learning, and engagement.

In short, be nice first, keep things nice second.

Comment by gworley on Unknown Unknowns in AI Alignment · 2019-06-14T17:56:02.540Z · score: 4 (2 votes) · LW · GW

Donald Hobson gives a comment below explaining some reasoning around dealing with unknown unknowns, but it's not a direct answer to the question, so I'll offer one.

The short answer is "yes".

The longer answer is that this is one of the fundamental considerations in approaching AI alignment and is why some organizations, like MIRI, have taken an approach that doesn't drive straight at the object-level problem and instead tackles issues likely to be foundational to any approach to alignment that could work. In fact you might say the big schism between MIRI and, say, OpenAI, is that MIRI places greater emphasis on addressing the unknown whereas OpenAI expects alignment to look more like an engineering problem with relatively small and not especially dangerous unknown unknowns.

(note: I am not affiliated with either organization so this is an informed opinion on their general approaches, and also note that neither organization is monolithic and individual researchers vary greatly in their assessment of these risks.)

My own efforts addressing AI alignment are largely about addressing these sorts of questions, because I think we still poorly understand what alignment even really means. In this sense I know that there is a lot we don't know, but I don't know all of what we don't know that we'll need to (so known unknown unknowns).

Comment by gworley on Cryonics before natural death. List of companies? · 2019-06-13T23:21:09.991Z · score: 4 (2 votes) · LW · GW

I've not heard of anyone trying this. I imagine it's a bad idea for a couple reasons:

  • most would-be patients live in countries where what you want is illegal
  • those would-be patients' access to cryonics depends largely on cryonics organizations being in good standing with the local government in case they die unexpectedly or don't want to travel far
  • even if you found a jurisdiction where you could do what you want, it might have repercussions back where the organization is based and stores the brains/bodies, because the home country/state/municipality might forbid import due to how the brain/body was obtained
  • the countries where would-be patients live might forbid them from contracting for such a service in a location where it is legal (compare the way some countries require that their citizens follow national laws when abroad, and that such citizens can be prosecuted for actions they took in foreign nations)

I think if you wanted to do this you would need to find a jurisdiction that would be okay with it and also otherwise suitable for basing a cryonics operation. My guess is the set of places that meet both criteria is empty. And this is ignoring the patient access issues I mentioned. Since the current market for cryonics is quite small, my guess is that there just isn't enough demand to make this happen, since I'm sure with enough demand you'd have the money to make a favorable jurisdiction suitable for basing a cryonics operation in.

Comment by gworley on Cryonics before natural death. List of companies? · 2019-06-13T19:44:21.294Z · score: 7 (4 votes) · LW · GW

Aside from the case where you may have access to euthanasia, the answer is no. The issue is that cryonics, where it is legally allowed, is considered a mortuary procedure rather than a medical procedure. The reasons for doing this are a bit involved, but can be summed up by saying it was easier to get legal approval for a novel procedure on dead bodies than on live ones.

In theory it seems likely you could get a better preservation by anesthetizing a live patient, replacing their blood, and slowly cooling the body, letting them die slowly while freezing rather than dying first and then starting the cooling process. But this is extremely legally complicated, because it both involves a live patient, so it's a medical procedure, and it kills the patient, so it's euthanasia (or so we hope; if it wasn't painless you definitely wouldn't be allowed to do it!). This would require a level of acceptance of cryonics we have no reason to believe is forthcoming.

So we are left with the case where you have to die first before being cryo-preserved. However, it's even a bit more complicated than that, because how you die matters. Mortuary procedures can't begin until a patient has a completed death certificate from a doctor in most places, and in some cases you can't formally complete that process without an autopsy to determine cause of death, especially in cases that look suspicious like a murder or suicide. In fact, without modern assisted suicide laws, suicide generally requires an autopsy by law, which will of course ruin your chance of preservation.

The only known, reliable way of doing what you propose (and I know of cases in the past where it successfully happened) is for a patient with a terminal illness to enter a hospice near a cryonics facility, with a cryonics team on standby, and then refuse all food and water. It takes several days to die this way depending on body composition, and at time of death the doctor on staff can quickly mark that the patient died of natural causes (I don't entirely understand why this doesn't count as suicide, but it apparently doesn't) and the procedure can begin within minutes. That, to the best of my knowledge, is the state-of-the-art in cryonic preservation: cryocide by starvation/dehydration.

Comment by gworley on How much can value learning be disentangled? · 2019-06-12T22:05:39.236Z · score: 2 (1 votes) · LW · GW

Right, both of these views on truth, traditional rationality and postmodernism, result in theories of truth that don't quite line up with what we see in the world, but in different ways. The traditional rationality view fails to account for the fact that humans judge truth and we have no access to the view from nowhere, so it's right that traditional rationality is "wrong" in the sense that it incorrectly assumes it can gain privileged access to the truth of claims, and so know which ones are facts and which ones are falsehoods. The postmodernist view makes an opposite and nearly equal mistake: it correctly notices that humans judge truth but then fails to adequately account for the ways those judgements are entangled with a shared reality. The way through is to see both that there is something shared out there about which there can in theory be a fact of the matter, and that we can't directly ascertain those facts because we must do so across the gap of (subjective) experience.

As always, I say it comes back to the problem of the criterion and our failure to adequately accept that it demands we make a leap of faith, small though we may manage to make it.

Comment by gworley on Conclusion to the sequence on value learning · 2019-06-12T02:30:04.922Z · score: 2 (1 votes) · LW · GW

The standard rebuttal here is that even if a superintelligent AI system is not goal directed, we should be concerned that the AI will spontaneously develop goal directed behavior because it is instrumentally valuable to doing whatever it is doing (and isn't "doing whatever it is doing" a "goal", even if the AI does not conceive of it as one, the same way a calculator has a "goal" or purpose even though the calculator is unaware of it?). This is of course contingent on it being "superintelligent".

For what it's worth this is also the origin, as I recall it, of concerns about paperclip maximizers: you won't build an AI that sets out to tile the universe with paperclips, but through a series of unfortunate misunderstandings it will, as a subagent or an instrumental action, end up optimizing for paperclips anyway because it seemed like a good idea at the time.

Comment by gworley on One-step hypothetical preferences · 2019-06-11T20:15:14.513Z · score: 4 (2 votes) · LW · GW
Finally, the human expresses a judgement about the states of M, mentally categorising a set of states as better than another. This is an anti-symmetric partial function J:S×S→R, a partial function that is non trivial on at least one pair of inputs.

I continue to be unsure if we can even claim anti-symmetry of the preference relation. For example, let s1 be the state "I eat an apple" and s2 the state "I eat an orange", and suppose today J(s1, s2) > 0 but tomorrow J(s2, s1) > 0, seemingly violating anti-symmetry. Now of course maybe I misunderstood my own understanding of s1 and s2 such that they actually included a hidden-to-my-awareness property conditioning them on time or something else, such that anti-symmetry is not violated. But the fact that there may be some property on the states that I didn't think about at first that salvages anti-symmetry makes me worry that this model is confused in this and other ways, because it was so easy to think of and construct something that seemingly violated the property and yet on further reflection seems like it doesn't.
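To spell out in notation the resolution I'm gesturing at (the explicit time index is my own addition, not part of your formalism): if the states secretly carry a time index, the two judgements are about different pairs and anti-symmetry is untouched.

```latex
% Anti-symmetry requires J(x, y) = -J(y, x) wherever J is defined.
% With a hidden time index on the states, the two judgements involve
% different pairs, so neither contradicts that requirement:
\[
  J\big((s_1, \text{today}),\ (s_2, \text{today})\big) > 0
  \qquad\text{and}\qquad
  J\big((s_2, \text{tomorrow}),\ (s_1, \text{tomorrow})\big) > 0 .
\]
```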

That's not a slam-dunk argument against this formalization. This is more me sharing some of my reservations about using this type of model. If we can so easily fail to notice something relevant about how we formalize some simple preferences, what else may we be failing to notice? And if so, what happens if we build an AI based in part on this formalization? Will it also fail to account for relevant aspects of how human preferences are calculated because they are not easily visible to us in the model, or is that a failure of humans to understand themselves rather than of the model? These are the things I'm wrestling with lately.

I also have some reservations about whether we can even really model humans as having discrete preferences that we can reason about in this way without getting ourselves into trouble and confused. Not to say that I doubt that this model often works, only that I worry it's missing some important details that are relevant for alignment, and that without accounting for them we will fail to produce aligned AI. I worry about this because there doesn't seem to be anything in the human mind that actually is a preference; preferences are more like reifications of a pattern of action that appears in humans. Getting closer to understanding the mechanism that produces the pattern we interpret as preferences seems valuable to me in this work, because I worry we're missing crucial details when we reason about preferences at the level of detail you pursue here.

Comment by gworley on To first order, moral realism and moral anti-realism are the same thing · 2019-06-11T01:24:20.117Z · score: 4 (2 votes) · LW · GW
I apologise for my simplistic understanding and definitions of moral realism. However, my partial experience in this field has been enough to convince me that there are many incompatible definitions of moral realism, and many arguments about them, so it's not clear there is a single simple thing to understand. So I've tried to define it very roughly, enough so that the gist of this post makes sense. ↩︎

I think this is mostly because there are lots of realist and anti-realist positions and they cluster around features other than their stance on realism, i.e. whether or not moral facts exist, or said less densely, whether or not moral claims can be true or false. The two camps seem to have a lot more going on, though, than is captured by this rather technical point, as you point out. In fact, most of the interesting debate is not about this point, but about things that can be functionally the same regardless of your stance on realism, hence your noticing how realists and anti-realists can look like each other in some cases.

(My own stance is to be skeptical, since I'm not even sure we have a great idea of what we really mean when we say things are true or false. It seems like we do at first, but if we poke too hard the whole thing starts to come apart at the seams, which makes it a bit hard to worry too much about moral facts when you're not even sure about facts in the first place!)

Comment by gworley on For the past, in some ways only, we are moral degenerates · 2019-06-07T23:34:10.955Z · score: 6 (4 votes) · LW · GW

Recently Robin Hanson posted about the difference between fighting along the frontier vs. expanding the frontier. It's a well-known point, but since I was recently reminded of it, it's salient to me, and it seems quite relevant here.

When we ask if human values have "improved" or "degenerated" over time we have to have some way of judging increase or decrease. One way to understand this is to check if humans get to realize more value, as judged by each individual and then normalized and aggregated, along certain dimensions within the multidimensional space of values. To take your example of "engagement with extended family", most moderns have less of this than ancients did, both on average and, it seems, at the maximum, i.e. modern systems preclude as much engagement as was possible in the past, such that a modern person maximally engaged with their extended family is less engaged than was maximally possible in the past. This seems to be traded off, though, against greater freedom from the need to engage with extended family, because alternative systems allow a person to fulfill other values without reliance on extended family. As a result this looks much like a "fight", i.e. a trade-off along the value frontier of one value against another.

You give the example of reduced slavery being a general benefit, but I think we can tell a similar story in which it is a trade-off: a trade-off between individual choice of labour, living conditions, etc. and the right of the powerful to make those decisions for the less powerful. In this sense the reduction in slavery takes away something of value from someone—the would-be slaveholders—to give it to someone else—the would-be slaves. We may judge this to be an expansion or a value efficiency improvement under two conditions (which change slightly what we mean by expansion; a toy sketch of the distinction follows the list):

  • (1) there is more value overall, i.e. we traded less value away than we got back in return
  • (2) there is more value overall along all dimensions
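
To make the two conditions concrete, here is a toy sketch (the dimensions and numbers are made up; it's meant to show the distinction, not to propose a way of measuring value):

```python
# Toy value vectors over a few dimensions, before and after some change.
before = {"freedom": 2, "family_engagement": 8, "calories": 5}
after  = {"freedom": 7, "family_engagement": 5, "calories": 6}

def aggregate_gain(a: dict, b: dict) -> bool:
    """Condition (1): more value overall, even if some dimensions lose."""
    return sum(b.values()) > sum(a.values())

def gain_on_all_dimensions(a: dict, b: dict) -> bool:
    """Condition (2): more value along every dimension (no contraction anywhere)."""
    return all(b[k] >= a[k] for k in a) and any(b[k] > a[k] for k in a)

print(aggregate_gain(before, after))          # True  -> an efficient trade-off
print(gain_on_all_dimensions(before, after))  # False -> still a "fight", not pure expansion
```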

I would argue that case (1) is really still a fight, though, because we are still making a trade-off; we are just moving to somewhere more efficient along the frontier. From this perspective the end of slavery was not an expansion of values, but it was a trade-off for more value.

But if we are so strict, is anything truly a pure expansion? This seems quite tricky, because humans can value arbitrary things, and so for every action that increases some value it would seem that we are necessarily decreasing the ability to realize some counter-value. For example, it might seem that something like "greater availability of calories" would result in pure value expansion, assuming we can screen off all the complicated details of how we make more calories available to humans and how that process will affect values. But suppose you value scarcity of calories, maybe even directly; then for you this will be a fight, and we must interpret an increase in the availability of calories as a trade-off rather than as a pure expansion in values.

This is potentially troubling because it means there's no universal way to judge moral progress if there can be no expansion without some contraction somewhere. It would seem that there must always be contraction of something, even if it is an efficient contraction that generates more value than it gives up.

So in the end I guess I am forced to (mostly) agree with your assessment even though you frame it in a way that seems foreign to me. It feels foreign because it seems every improvement is also a degeneration and vice versa, and the relevant question of improvement is mostly whether or not we are generating more value in aggregate (an efficiency improvement), if we want to be neutral on which value dimensions to optimize along.

I actually don't love the idea of making aggregate value something we optimize for, though, because I worry about degenerate cases like highly optimizing along a single value dimension at the expense of all others, such that it results in an overall increase in value but in a way we wouldn't want. Arguably, if we were measuring value correctly in this system, such a situation would be impossible, because whatever made us dislike the "optimization" would show up as a decrease in the value being traded off against.

I instead continue to think that value is a confused concept that we need to break apart and reunderstand, but I'm still working on deconfusing myself on this, so I have nothing additional to report in that direction for now.

Comment by gworley on Steelmanning Divination · 2019-06-06T19:27:51.860Z · score: 22 (7 votes) · LW · GW

Some years ago I got interested in the Yi Jing after reading Philip K. Dick's The Man in the High Castle, which features the Yi Jing prominently: the book within the book (the alternate dimension/history version of The Man in the High Castle) is written by using the Yi Jing to make plot decisions, and one of the characters relies on it heavily to navigate life. I went on to write a WebOS Yi Jing phone app so I could more easily consult it from my phone, and played around with it myself.

My experience of it was mostly that it offered me nothing I wasn't already doing on my own, but I could see how it would be helpful to others who lack my particular natural disposition to letting the mind go quiet and seeing what it has to tell me. As you note, it seems a good way to step back and consider something from a different angle, and to consider aspects of something you may currently be ignoring. The commentary on the Yi Jing is carefully worded such that it's more about the decision generation process than the decision itself, and when used well I think it can produce the same sort of sudden realization of the action you will take that my sitting quietly and waiting for insight does.

I also know a decent number of rationalists who enjoy playing with Tarot cards for seemingly this same reason. Tarot works a bit differently because it tells a story more than it highlights a virtue, but like you I think much of the value comes from placing a random framing on events, injecting noise into an otherwise too stable algorithm, and helping people get out of local maxima/minima traps.

I'd also include rubber ducking as a modern divination method. I think it does something similar, but by using a different method to get you to see things more clearly and find out what you already implicitly knew but weren't making explicit enough to let it have an impact on your actions. My speculation at a possible mechanism of action here is something like what happens when I sit quietly with a decision and wait for an answer: you let the established patterns of thought get out of the way and let other things come through so you can consider them, in part because you can generate your own internal noise if you stop trying to direct your thought. But not everyone finds this easy or possible, in which case more traditional divination methods with external noise injection are likely useful.

Comment by gworley on All knowledge is circularly justified · 2019-06-06T19:05:27.506Z · score: 3 (2 votes) · LW · GW

Actually, good thing you asked, because I gave wrong information in my original comment. Chisholm is an expert on the problem of the criterion, but I was actually thinking of William Alston in my comment. Here's two papers, one by Alston and one by another author that I've referenced in the past and found useful:

William P. Alston. Epistemic Circularity. Philosophy and Phenomenological Research, 47(1):1, sep 1986.

Jonathan Dancy. Ethical Particularism and Morally Relevant Properties. Mind, XCII(368):530–547, 1983.

Comment by gworley on All knowledge is circularly justified · 2019-06-06T00:47:27.938Z · score: 3 (2 votes) · LW · GW

You might like the work of Roderick Chisholm on this topic. He spent a good deal of effort on addressing the issue of epistemic circularity (the issue created by the problem of the criterion) and gives what is, in my opinion, one of the better and more technical treatments of the topic. His work also lets us make a distinction between particularism (making minimal leaps of faith) and pragmatism (making any leaps of faith), which I find useful because in practice most people seem to be pragmatists (they have other things to do than wrestle with epistemology) while thinking they are particularists because their particular leaps of faith (the facts they assume without justification) are intuitive to them and they can't think of a way to make them smaller.

Comment by gworley on "But It Doesn't Matter" · 2019-06-06T00:07:49.311Z · score: 12 (3 votes) · LW · GW

First, let me start by saying this comment is ultimately a nitpick. I agree with the thrust of your position and think in most cases your point stands. However, there's no fun and nothing to say if I leave it at that, so grab your tweezers and let's get that nit.

Even if Hypothesis H is true, it doesn't have any decision-relevant implications,

So to me there seems to be a special case of this that is not rationalization, and that's in cases where one fact dominates another.

By "dominates" I here mean that for the purpose for which the fact is being considered, i.e. the decision about which the truth value of H may have relevant implications, there may be another fact about another hypothesis, H', such that if H' is true or H' is false then whether or not H is true or false will have no impact on the outcome because H' is relatively so much more important than H.

To make this concrete, consider the case of the single-issue voter. They will vote for a candidate primarily based on whether or not that candidate supports their favored position on the single issue they care about. So let's say Candidate Brain Slug is running for President of the World on a platform whose main plank is implanting brain slugs on all people. You argue with your single-issue-voter friend that they should not vote for Brain Slug because it will put a brain slug on them, but they say even if that's true, it's not relevant to their decision, because Brain Slug also supports a ban on trolley switches, which is your friend's single issue.

Now maybe you think your friend is being stupid, but in this case they're arguably not rationalizing. Instead they're making a decision based on their values that place such a premium on the issue of trolley switch bans that they reasonably don't care about anything else, even if it means voting for President Brain Slug and its brain slug implanting agenda.
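
Here's a minimal sketch of that dominance structure in code (the candidate planks and the toy decision rule are mine, purely for illustration):

```python
def friend_votes_for(candidate: dict) -> bool:
    """Toy single-issue voter: the trolley-switch plank (H') alone settles
    the vote; the brain-slug plank (H) is never even read, so its truth
    value has no impact on the outcome: H' dominates H."""
    return bool(candidate["bans_trolley_switches"])

brain_slug = {"bans_trolley_switches": True, "implants_brain_slugs": True}
print(friend_votes_for(brain_slug))  # True, regardless of the brain-slug plank
```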

Comment by gworley on Book Summary: Consciousness and the Brain · 2019-05-31T23:27:05.533Z · score: 9 (2 votes) · LW · GW

To my reading, all of this seems to pretty well match (part of) the Buddhist notion of dependent origination, specifically the way senses beget sense contact (experience), which begets feeling, which begets craving (preferences), which begets clinging (beliefs/values), which begets being (formal ontology). There the focus is a bit different and is oriented around addressing a different question, but I think it's tackling some of the same issues via different methods.

Comment by gworley on Egoism In Disguise · 2019-05-31T17:38:10.843Z · score: 3 (2 votes) · LW · GW
1) The bedrock of our values are probably the same for any human being, and any difference between conscious values is either due to having seen different data, but more likely due to different people situationally benefitting more under different moralities. For example a strong person will have "values" that are more accepting of competition, but that will change once they become weaker.

I continue to find minimization of confusion while maintaining homeostasis around biologically determined set points a reasonable explanation for the bedrock of our values. Hopefully these ideas will soon coalesce well enough in me that I can write something more about this than that headline.

Comment by gworley on Boo votes, Yay NPS · 2019-05-30T01:46:20.635Z · score: 4 (2 votes) · LW · GW

I agree there are other possible interpretations; I mainly wanted to document this for myself in case I wanted to reference it later, and it seems potentially relevant, especially if we wanted to go back and interview the voters or analyze the comments.

Comment by gworley on Debate AI and the Decision to Release an AI · 2019-05-30T01:43:07.394Z · score: 4 (2 votes) · LW · GW

This is a pretty interesting idea. I can imagine this being part of a safety-in-depth approach: not a single method we would rely on but one of many fail-safes along with sandboxing and actually trying to directly address alignment.

Comment by gworley on Boo votes, Yay NPS · 2019-05-29T18:00:18.976Z · score: 6 (2 votes) · LW · GW

Additional evidence of boo/yay voting culture: Ben Hoffman's "Drowning children are rare" post (on EAF, on LW, my comment about votes)

Comment by gworley on Micro feedback loops and learning · 2019-05-28T20:52:28.722Z · score: 5 (3 votes) · LW · GW

To give an additional example of tight feedback loops being helpful, I've been taking Alexander lessons for nearly a year. Each lesson consists of 30 minutes of me doing movements (although sometimes the "movement" is holding a posture, like sitting, standing, standing on toes, or crouching) and 30 minutes of "table time", i.e. I lie on a massage table while my teacher uses her hands to very subtly suggest changes to my posture. Although I could go on about how great this has been and how much value I get from it, what I mostly want to say about it here is that it depends very much on tight feedback loops to perform a kind of reinforcement learning. As I make a movement she uses her hands and some taught jargon (part of the technique involves associating jargon with postures and movements so you can easily call them up on command by saying or thinking the jargon) to adjust what I do, giving me rapid feedback on how I'm doing. The result was that within the first 10 hours of training I dramatically improved my posture and reduced posture- and movement-related pain.

For comparison, overlapping with learning Alexander technique I've been more deeply practicing formal meditation, and learning formal meditation has very long feedback loops and requires months to make significant progress. Now, maybe the long feedback cycles are not why it takes months to make progress, and I can think of reasonable stories as to why that would be, but I can also imagine that finding ways to shorten the feedback cycles would have made progress much faster. For example, when I've done biofeedback stuff in the past it only took 4 or 5 hours of sessions before I could make myself fall asleep at will (sadly I've forgotten how to do this), and I think it's quite likely that this was helped a lot by having a computer telling me when I got a little closer to what I needed to do to make that happen and when I got a little farther away, such that I didn't have to spend as much time guessing and waiting for strong evidence that I was doing the right thing before I could reliably train that ability and then go on to the next step.

Comment by gworley on What should rationalists think about the recent claims that air force pilots observed UFOs? · 2019-05-28T08:55:11.454Z · score: 17 (7 votes) · LW · GW

While these objects may be unidentified, the idea that they are the products of aliens, a simulation, AI, or something else seems unlikely given the low quality of the evidence. In all cases I'm aware of, evidence for something like this being the true origin of a UFO would have to overcome the more likely alternatives of

  • secret, experimental, or stealth aircraft, probably military, with advanced capabilities undisclosed to the public;
  • observational errors and instrumentation glitches;
  • misremembering, embellishment, and outright lying.

For a comparison, the literature on cryptids (animals claimed to be real but unobserved by science, like Bigfoot, the Loch Ness Monster, and the chupacabra) is full of cases where the evidence looks pretty compelling...so long as we only look for evidence that confirms the hope that a cryptid exists. Perhaps sadly, there are no cryptid humanoids or sea monsters that we know of, and all evidence of them collected thus far is either best categorized as hoaxes, misidentifications, and hopeful misinterpretations, or turned out to be evidence of real, undiscovered, and not fantastical animals.

Comment by gworley on Separation of Concerns · 2019-05-24T22:30:11.867Z · score: 5 (5 votes) · LW · GW

The natural argument against this is of course that separation is an illusion. I don't say that to sound mysterious; I mean it just in the simple way that everything is tangled up together, each thing dependent on the others for its existence, and it's only in our models that clean separation can exist, and then only by ignoring some parts of reality in order to keep our models clean.

As a working programmer, I'm very familiar with the original context of the idea of separation of concerns, and I can also tell you that even there it never totally works. It's a tool we use to help us poor humans, who can't fathom the total, complete, awesome complexity of the world, get along well enough anyway to collect a paycheck. Or something like that.

Relatedly, every abstraction is leaky, and if you think it isn't you just haven't looked hard enough.

None of that is to say we shouldn't respect the separation of concerns when it's useful, only that we shouldn't elevate it more than it deserves, because the separation is a construction of our minds, not a natural feature of the world.

Say Wrong Things

2019-05-24T22:11:35.227Z · score: 89 (34 votes)
Comment by gworley on What is your personal experience with "having a meaningful life"? · 2019-05-24T19:24:57.715Z · score: 2 (1 votes) · LW · GW

Thanks. I am still working out how to explain this myself, as this is written very much from the edge of my understanding. I don't really have a persistent understanding of what I'm talking about above; much of it comes from memories of peak moments of understanding and trying to reconstruct what was going on. Maybe in the future I'll have a better grasp that will allow me to explain it.

Comment by gworley on What is your personal experience with "having a meaningful life"? · 2019-05-24T19:23:02.590Z · score: 2 (1 votes) · LW · GW

Right, this is my current model of what meaning is and where the feeling of meaning comes from. This framing does have some power to help restore a sense of meaning after one sees the emptiness of the world and falls into the meaning-nihilism that often arises from that, but it doesn't so much create any particular meaning as argue that meaning is possible and makes sense in some way; you are still left to work out what feels meaningful to you right now and see how that evolves over time.

Most of my thinking on this comes from a combination of Buddhist philosophy (specifically the Madhyamaka school founded by Nagarjuna), especially the notions of emptiness and dependent origination, perceptual control theory, the transcendental idealism of Husserl (and some related philosophers like Heidegger, Sartre, Schopenhauer, and, of course, Hegel), and my own investigations, which led me to those sources as my thinking converged toward what had already been worked out. Alas there is no simple place to link you that really explains this; maybe I'll write something one day, although a good deal of my background for this way of thinking about meaning is laid out in a sequence of posts I made in 2017, ending with this one.

As for my personal experience of meaning, it's a little hard to explain because it's undergoing a transition towards being depersonalized (held as object), but right now I am very much subject to it (it's internalized as part of my sense of self and can't be easily investigated directly, although I can experience what I believe to be its effects), and both of those are opposed to the dissociated way most people deal with meaning that I'm contrasting with in my answer above (the way meaning is thought of as something external and reducible rather than internal and dependent/transcendental/irreducible (monadic?)). But I make sense of my personal experience of meaning as something like reduced confusion about the universe, dependent on the life history that led to the confusion arising, including very much the way I am part of a homeostatic system that constantly jitters around set points.

Comment by gworley on Does the Higgs-boson exist? · 2019-05-23T20:37:16.903Z · score: 5 (7 votes) · LW · GW

For what it's worth I think there is a way to rescue Sabine's bad philosophy. She says:

Look, I am a scientist. Scientists don’t deal with beliefs. They deal with data and hypotheses. Science is about knowledge and facts, not about beliefs.

And from a certain point of view this is right: science is pragmatic, and it's about ignoring large parts of problem space in order to act as if we did have the ability to know facts directly. And it works pretty well as far as it goes, as the modern world makes clear with all it has enabled us to do and have that would not have happened without science.

There are still some technical problems here. Claiming that "scientists don't deal with belief" is sort of like trying to say "scientists are not embedded agents", which is just nonsensical given what we observe about the world. But if we squint we can see this as more saying science is not about the folk notion of belief, i.e. it's not about all your thoughts, regardless of how well they predict future experiences. I also think when she says that science is about "knowledge and facts" she doesn't necessarily mean what a philosopher would mean by knowledge (a kind of belief that is believed to be correlated with "reality" or at least predictions of future experiences) or facts (universally, eternally true statements), but instead a folk version of these, where knowledge is some set of accessible facts that can be called up when needed and worked with, and facts are statements that accurately predict the world up to the limit of our predictive abilities (and if she lacks a subjective, probabilistic model of knowledge, maybe just statements that predict future experiences 100% of the time).

And in this way it's not at all at odds with philosophy, although it is antagonistic. Speculating, it seems Sabine is hoping that philosophy can remain a separate magisterium she doesn't have to deal with by carving out hard lines between it and science, and although she's clearly not a strong philosopher or else she would have chosen her words more carefully (although I am reading them out of full context, so it's possible I am the one being insufficiently careful!), this is a stance a large part of Western philosophy took for most of the 20th century before it was proven unworkable since no such hard line can be made to exist.

As I like to think of it, we are all philosophers whether we like it or not, whether we know it or not, and whether we're good at it or not. We can wish it were otherwise, but the world we are embedded in is not shaped such that it could be, so even if we want to be pragmatists and avoid dealing with many deeper issues in philosophy, we should at least be honest and upfront about that, rather than do what Sabine seems to be trying to do, which is slyly dismiss the need for philosophy rather than simply admitting to her pragmatism so she can get on with being a scientist.

Comment by gworley on Does the Higgs-boson exist? · 2019-05-23T20:10:45.525Z · score: 13 (6 votes) · LW · GW
This seems like another instance of "people who say they're not doing philosophy are in fact doing bad philosophy."

I think a lot of folks hope they can avoid the philosophical tangle and just get on with what they care about and find ways to not have to deal with nasty philosophical problems, especially I think the problem of the criterion. And you can, and philosophy even gives it a name: pragmatism. You can be a pragmatist about whatever you want by putting up a stop sign that says "yep, not going to look at this, going to take it as not only ontologically basic but real so I can avoid dealing with the infinite regress we find whenever we try to reduce everything". And the catch is that we all must be pragmatists about something if we are to get on with anything, since the alternative seems to be uncomputable (again, due to the problem of the criterion and its many guises). So far so good, philosophy work discharged.

But then some people, especially people who identify as scientists and rationalists, have this idea that they don't put up stop signs, that they always keep going to the best of their ability, and when reality says "here, enjoy some actual unknowability" this creates serious problems for them. Their identity is at stake insofar as it is tied to reductionism and realism (or, as is the case with Sabine, some kind of shadow realism? her position is not self-consistent, as you point out), they suffer cognitive dissonance, and they choose to resolve it by making the same epistemological leap of faith we are all forced to make by the problem of the criterion, but then denying that any leap was made and instead claiming it was just seeing things as they really are, and if pressed on it doing some weird gymnastics, like Sabine seems to do here, to try to hide from what knowing even means.

Comment by gworley on Free will as an appearance to others · 2019-05-23T19:54:05.254Z · score: 3 (2 votes) · LW · GW

Only if you stop interacting with yourself. Remove the feedback loop where you observe the self and there would be no free will, because it would be irrelevant: there would be nothing being modeled that one could assess to have free will or not.

Comment by gworley on What is your personal experience with "having a meaningful life"? · 2019-05-23T17:42:32.160Z · score: 2 (1 votes) · LW · GW

Yes, sorry, typo.

Comment by gworley on Where are people thinking and talking about global coordination for AI safety? · 2019-05-23T17:41:32.641Z · score: 5 (2 votes) · LW · GW

There was some in-person conversation about the papers among us, but that's about it. I've not seen a strong community develop around this so far; mostly people just publish things one-off and then they go into a void where no one builds on each other's work. I think this mostly represents the early stage of the field and the lack of anyone very dedicated to it, though, as I got the impression that most of us were just dabbling in this topic because it was near things we were already interested in and we had some ideas about it.

Comment by gworley on Multi-agent predictive minds and AI alignment · 2019-05-23T00:10:59.636Z · score: 2 (1 votes) · LW · GW

I think you do a nice job of capturing many of the details of why I also think alignment is hard, although to be fair you are driving at a different point. I agree with you that most alignment research, despite the efforts of the researchers, is still not reductive enough in terms of what sort of constructs it expects to be able to operate on in the world, and is especially likely to fall down because it doesn't recognize that values and beliefs are the same kind of thing serving different purposes in different contexts, and so present different reifications, which regardless are not the real things that exist in humans against which AI needs to be aligned.

Comment by gworley on Multi-agent predictive minds and AI alignment · 2019-05-23T00:06:32.822Z · score: 2 (1 votes) · LW · GW
All other things being equal, it seem safer to try to align AI with humans which are self-aligned.

For what it's worth I've concluded something similar and it's part of why I'm spending a decent chunk of my time trying to become such a person, although thankfully the process has plenty of other reasons to recommend it so this is just one of many reasons why I'm doing that.

Comment by gworley on What is your personal experience with "having a meaningful life"? · 2019-05-22T21:45:53.101Z · score: 3 (6 votes) · LW · GW

I think most approaches to this question end up confused because seeing clearly where meaning lies is not easy: it's a subtle thing that, without training and considerable experience, is very hard to see directly. Even our term for this thing, "meaning", is wrapped up in a web of connotations that suggests it is something other than what it is.

Most people tend to approach meaning as if it were either a thing (or at least a reification) or an essence that things have. I think all of your quotes have interpretations that show something of this, though some more or less than others, and all contain a hint of what I propose is properly meaning without all the extra confusion.

So where is the confusion and how do we pull it back? The confusion arises from trying to understand meaning as something more ontologically complex than it is, i.e. to understand it at the level of words and, less often, feelings, rather than at the level of perceptions. If we pull back and ask not "where is meaning?" or "what is meaningful?" but "how do I see meaning?" we can begin to approach an answer. I'll skip over a lot of the explanation, because it would take longer than I care to spend in this answer, and just give you the conclusion I've drawn: it's reduction of confusion. Yes, I'm saying the meaning of life, as you experience meaning, is ultimately about becoming less confused.

If you've paid attention to things like Friston's free energy and perceptual control theory, this probably doesn't come as a surprise, and you already have some idea of how this adequately explains many observed behaviors we might classify as "seeking meaning", although often only after multiple layers of complex interactions. But that's no more complex than the way the simple mechanisms of evolution explain speciation, even though if you told someone "speciation is genes trying to maximize reproduction" you'd naively be dissatisfied without the gaps being filled in.

So I give this answer knowing full well I failed to fill in those gaps, but you may find it useful and, for what it's worth, I believe it's grounded in millennia of investigation into meaning.

Comment by gworley on Where are people thinking and talking about global coordination for AI safety? · 2019-05-22T21:17:40.297Z · score: 12 (6 votes) · LW · GW

Last year there was a prize for papers and the authors spoke on a panel about this subject at HLAI 2018.

Comment by gworley on Constraints & Slackness Reasoning Exercises · 2019-05-22T21:06:01.669Z · score: 8 (4 votes) · LW · GW

In my work as a software engineer on distributed systems, we tend to think about these things in terms of scaling limits, bottlenecks, limiting factors, tightest constraints, etc. For example, a system might be limited by a single resource hitting a scaling limit (maybe the CPU is maxed or the network between two servers is saturated or something else), and then that is the only thing that matters to fix at that moment, because nothing else will change the scalability of the system while that remains the limiting factor. Working on anything else is wasted effort in the short term.
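
As a toy sketch of that simple case (the stage names and numbers are made up for illustration): throughput is set by the tightest constraint, so raising capacity anywhere else changes nothing.

```python
# Toy pipeline: each stage has a capacity in requests per second.
capacities = {"load_balancer": 500, "app_server": 120, "database": 300}

def throughput(caps: dict) -> int:
    """The system can only go as fast as its tightest constraint."""
    return min(caps.values())

print(throughput(capacities))   # 120 -> app_server is the limiting factor
capacities["database"] = 600    # scale a non-bottleneck...
print(throughput(capacities))   # still 120 -> wasted effort
capacities["app_server"] = 400  # scale the limiting factor...
print(throughput(capacities))   # 300 -> now the database is the constraint
```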

I'll just add that the real world is messy, and often the constraints are not independent. For example, you might have two systems that send work back and forth to each other and that work because they are in equilibrium. Scaling one and not the other, even if it appears to be the limiting factor, may not help, because it will cause effects in the other service that make it seem like the limiting factor; and if you then change the second service, you end up in the same state. So really the issue is that the limiting factor exists as a result of an interaction between those two systems: it isn't naturally a single constraint but emerges in the running system.

FWIW, this is why my job is more like being a zookeeper or vet than being, say, a mechanic: the interactions are so complex I can't model or know them all, so I instead have to rely on fuzzier methods even if I can get very precise when I can narrow the complexity down enough to make that possible.

Comment by gworley on Discourse Norms: Justify or Retract Accusations · 2019-05-22T20:47:44.540Z · score: 4 (2 votes) · LW · GW
This will inevitably lead the most competent and busy people to not share their assessments of anything, since they will be met with the expectation of having to justify every assessment in detail, which is simply not workable in terms of time.

See some of the nuance Davis gives in other comment threads on this post, which I think addresses your concerns, but I think this is on net beneficial if we also cause them to not share their assessment either way without explanation. Just because someone is competent or busy doesn't make their assessments that much more useful unless they actually take the time to make a thoughtful assessment, and I expect their gut reactions to things will not be much better than anyone else's; the caveat being that there is probably some appropriate discount function on which their gut assessments are marginally more useful, but the discount rate is so high we might as well just ignore it. Put another way, being competent, busy, or an expert doesn't make you less likely to do the things humans tend to do that lead to low-value judgements (cf. literally everything written about decision under uncertainty in humans) outside of limited contexts with training and rapid feedback, so in most places at most times having this norm is useful.

I'll also point out that this is already the norm in in-person conversation, where the only way to express pleasure/displeasure with someone or what they are saying is to do so in a way that opens you up to being asked for justification, and failure to provide justification can result in dramatic discounting of your position. Or so the norm seems to me.

Comment by gworley on Figuring out what Alice wants: non-human Alice · 2019-05-22T20:32:06.392Z · score: 2 (1 votes) · LW · GW

So here's an alternative explanation on what proto-preferences and preferences are, which is to say what is the process that produces something we might meaningfully reify using the "preference" construct.

Preferences are a model for answering questions about "why do this and not that?". There's a lot going on in this model, though, because in order to choose what to do we have to even be able to form a this and a that to choose between. If we strip away the this and that (the ontological), we are left not with what is (the ontic), but instead with the liminal ontology naturally implied by sense contact and the production of phenomena and experience prior to understanding it (e.g. the way you perceive color already creates a separation between what is and what you perceive, by encoding interactions with what is in fewer bits than it would take to express an exact simulation of it). This process is mostly beyond conscious control in humans, so we tend to think of it as automatic, outside the locus-of-control, not part of the self, and thus not part of our felt sense of preference, but it's important because it's the first time we "make" a "choice", and choice is what preference is all about.

So how do these choices get made? There are many principles we might derive to explain why we perceive things one way or another, but the one that to me seems most parsimonious and maximally descriptive is minimization of uncertainty. To really cash that out at this level probably requires some additional effort to deconstruct what it means in a sensible way that doesn't fall apart the way "minimize description length" seems to, since that ignores the way minimizing uncertainty over the long term sometimes requires not minimizing uncertainty over the short term (avoiding local minima), and other caveats that make too simple an explanation incomplete. Although I mostly draw on philosophy I'm not explaining here to come to this point, see Friston's free energy, perceptual control theory, etc. for related notions and support.

This gives us a kind of low level operation then that can power preferences, which get built up at the next level of ontological abstraction (what we might call feeling or sensation), which is the encoding of a judgement about success or failure at minimizing uncertainty and could either be positive (below some threshold of minimization), negative (over some threshold), or neutral (within error bounds and unable to rule either way). From here we can build up to more complex sorts of preferences over additional levels of abstraction, but they will all be rooted in judgements about whether or not uncertainty was minimized at the perceptual level, keeping in mind that the brain senses itself through circular networks of neurons allowing it to perceive itself and thus apply this same process to perceptions we reify as "thoughts".

What does this suggest for this discussion? I think it offers a way to dissolve many of the confusions arising from trying to work with our normally reified notions of "preference" or even the simpler but less cleanly bounded notion of "proto-preference".

(This was a convenient opportunity to work out some of these ideas in writing since this conversation provided a nice germ to build around. I'll probably refine and expand on this idea elsewhere later.)

Comment by gworley on Huntington's Disease; power and duty of parents over offsprings · 2019-05-21T17:41:10.436Z · score: 3 (2 votes) · LW · GW
Separately, legal and social rights/responsibilities are more about tradition and common perception than about logic. Actual personal moral choices that you make can be much more nuanced. There's no puzzle there. Public discussion still has a very low sanity waterline,

Sure, but why is this relevant to bring up? It seems like the author of the post is attempting to work out a position on Huntington's Disease and its impact that is more consistent with their values than what is traditional or common. I get that you seem to disagree with the author's value assessment, but as you say this is a separate point although I'm confused as to what it's a point about (or maybe you just wanted to emphasize a fact? it's unclear to me).

Comment by gworley on The E-Coli Test for AI Alignment · 2019-05-21T01:46:45.178Z · score: 4 (2 votes) · LW · GW
Humans have easy-to-extract preferences over possible "wiser versions of ourselves." That is, you can give me a menu of slightly modified versions of myself, and I can try to figure out which of those best capture my real values (or over what kind of process should be used for picking which of those best capture my real values, or etc.). Those wiser versions of ourselves can in turn have preferences over even wiser/smarter versions of ourselves, and we can hope that the process might go on ad infinitum.

This seems a pretty bold claim to me. We might be tempted to construe our regular decision making process as doing this (I come up with what wiser-me might do in the next instant, and then do it), but this to me seems to be misunderstanding how decisions happen by confusing the abstraction of "decision" and "preferences" for the actual process that results in the world ending up in a causally subsequent state which I might later look back on and reify as myself having made some decision. Since I'm suspicious that something like this is going on when the inferential distance is very short, I'm even more suspicious when the inferential distance is longer, as you seem to be proposing.

I'm not sure if I'm arguing against your claim that the situations are not symmetrical, but I do think this reasoning for thinking the situations are not symmetrical is likely flawed, because it seems to be assuming something about humans being fundamentally different from e-coli in a way that they are not.

(There are of course many differences between the two, just not ones that seem relevant to this line of argument.)

Comment by gworley on Embedded Agency (full-text version) · 2019-05-21T01:06:43.073Z · score: 4 (2 votes) · LW · GW

I'm pretty impressed by this, and especially the content on embedded agents causes me to update in the direction of thinking MIRI researchers are less confused about certain issues of epistemology than I previously thought. I would have framed some of these issues differently, but overall I can complain far less than I have in the past based on what I've read here.

Comment by gworley on Boo votes, Yay NPS · 2019-05-17T17:42:58.852Z · score: 3 (2 votes) · LW · GW

An interesting feature would be to show the vote ratio, rather than only the vote count, in addition to the score.

Comment by gworley on Offer of collaboration and/or mentorship · 2019-05-16T17:41:33.639Z · score: 10 (5 votes) · LW · GW

I'm really excited that you are doing this. I recognize that it's time consuming and not often immediately profitable (in various senses of that word, depending on your conditions) to do this sort of thing, especially when you might be working with someone more junior who you have to spend time and effort on training or otherwise bringing-up-to-speed on skills and knowledge, but I expect the long-term benefits to the AI safety project may be significant in expectation, and hope more people find ways to do this in settings like this outside traditional mentoring and collaboration channels.

I hope it works out, and look forward to seeing what results it produces!

Comment by gworley on Boo votes, Yay NPS · 2019-05-15T18:36:04.358Z · score: 4 (2 votes) · LW · GW

Well, I do say "maybe"; this is a guess based on how the score evolved over time and how the total score compares to the number of votes.

Comment by gworley on Boo votes, Yay NPS · 2019-05-14T23:35:58.105Z · score: 2 (1 votes) · LW · GW
But I'm skeptical of whether people will actually cast explicit neutral votes, in most cases; that would require them to break out of skimming, slow down, and make a lot more explicit decisions than they currently do. A more promising direction might be to collect more granular data on scroll positions and timings, so that we can estimate the number of people who read a comment and skimmed a comment without voting, and use that as an input into scoring.

This is very much a problem in collecting NPS data in its original context, too: you get lots of data from upset customers and happy customers, while meh customers stay silent. You can do some interpolation about what missing votes mean, and coupled with scrolling behavior you could get some sense of read count that you could use to make adjustments, but that obviously makes things a bit more complicated.
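
As a toy illustration of the kind of adjustment I mean (the numbers are made up, and treating estimated silent readers as passives is just one possible assumption, not a real methodology):

```python
def nps(promoters: int, passives: int, detractors: int) -> float:
    """Classic Net Promoter Score: % promoters minus % detractors."""
    total = promoters + passives + detractors
    return 100 * (promoters - detractors) / total

# Raw responses: the delighted and the upset respond; the meh stay silent.
print(nps(promoters=40, passives=5, detractors=30))    # ~13.3

# Scroll/timing data suggests ~100 more people read without voting; if we
# count them as passives, the picture shifts noticeably.
print(nps(promoters=40, passives=105, detractors=30))  # ~5.7
```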

Comment by gworley on Boo votes, Yay NPS · 2019-05-14T23:22:13.207Z · score: 4 (2 votes) · LW · GW

Maybe that's useful, but we'd have to figure out what votes are supposed to mean in the first place; i.e., I'm not sure there is currently a well-defined notion of what votes are for, such that we could change the UI to encourage using them in the expected manner.

I have an idea of what problems I'd like solved, and voting is one way to solve some of them, and in that context we might have some sense of how we would like to ask users to use voting; but that only makes sense in that context. On its own, voting is just incrementing and decrementing counters in a database, and those counters inform some algorithms about the order in which content is displayed on the site; we have to decide what that means to us and what we would like it to do beyond what it naturally does on its own, such that providing instruction and shaping the UI to encourage particular behaviors is meaningful.

So that's a long way to say yes, but conditional on having explicit norms.

Comment by gworley on Boo votes, Yay NPS · 2019-05-14T23:18:04.818Z · score: 2 (1 votes) · LW · GW

I'm not sure how much it would help, but that's mainly because I am not troubled by either voting up or voting down things I disagree with, and because I have a history of leaving constructive feedback comments, especially in cases where I feel a comment or post is being treated overly harshly or where the author seems unfamiliar with community norms. I can imagine that others who are less willing to do that might be more willing to leave reacts that at least convey some of that information.

Comment by gworley on The Relationship Between the Village and the Mission · 2019-05-14T19:29:14.222Z · score: 5 (2 votes) · LW · GW
But my impression is this is not true for everyone. One clearcut thing is that there's a certain threshold of agency and self-efficacy that someone needs to have demonstrated before I feel comfortable inviting them to mission-centric spaces (over the long term), and I think I'm not alone in that. I think there are people who have "mixed competencies", where they've gotten good at some things but not others, and they want to be able to help the mission, and there are subtle and not-so-subtle social forces that push them away.
And I'm not sure there's anything wrong with that, but it seems important to acknowledge.

I think there's something proper in the function of a sangha (and by extension, our community) that it discourages those who don't have, as you put it, the "agency and self-efficacy" to properly engage in the mission, and also pushes out those who are only half in it. As a result, people with what I can imagine as "mixed competencies" end up not staying, even though they could have stayed if they had been more committed and willing to make space for themselves in a place that was willing to tolerate them but not usher them in.

Of course, it feels a bit weird, because in a sangha that filtering is directly tied to the purpose of the community and can be done skillfully as part of transmitting the dharma, whereas in our community it seems at cross-purposes with the mission and can feel to some like defecting on paying the cost to train and develop the people the mission needs. Probably this is part of what sets sangha apart from other forms of community: its shape is directly tied to its function and is a natural extension of the mission, whereas elsewhere other shapes could be adopted because the mission does not directly suggest one.

Boo votes, Yay NPS

2019-05-14T19:07:52.432Z · score: 34 (11 votes)
Comment by gworley on The Relationship Between the Village and the Mission · 2019-05-13T23:00:22.111Z · score: 17 (5 votes) · LW · GW

For a related notion, let me relate some things about sangha, as I tend to think it's a good model for the kind of community that is likely shaped to fit the present situation.

"Sangha" is a Sanskrit word usually translated as "community". It has a couple different meanings within Buddhism. One is mission focused: everyone is a member of the ideal sangha that transcends any particular space and time who has, in various definitions, taken the three refuges, taken the precepts, achieved stream entry, or is otherwise somehow on The Path. Another is location focused: "sangha" can refer to the specific community in a particular monastery, order, lineage, practice center, etc. of people who are "committed" or "serious" in one of the ways just enumerated. There are some others but those are the ones that seem relevant here.

Some things might look on the outside like sangha but might not be. For example, I facilitate a weekly meditation meetup at the REACH. It's not really a sangha, though, because lots of people casually drop in who may or may not have committed themselves to liberation from suffering; they just want a place to practice, or to hang out with some cool people, or something else. And that's fine; the point of a group like this is to be accessible in a way a sangha is not. Although a sangha may be welcoming (the local one to which I belong certainly tries to be), many people bounce off sanghas because, I theorize, they aren't ready to make the kind of commitment that really being part of one asks of you.*

*Because a sangha will ask for commitment, even if you just try to be a "casual"; there's not really a way to do the equivalent of hiding in the pews in the back. And it's not like the way you can't hide in an Evangelical Christian group, where you will be pointedly asked about your seriousness and shunned if you aren't committed. Rather, it's that the practice pervades everything around the sangha, and if you get too close to it while wanting to maintain distance, you'll feel very out of place.

And there's the opposite situation, where a sangha may be real but not look much like one to outsiders. Sometimes this is just two friends, fellow stream winners, who create sangha through their every interaction with each other; to an outsider they might just look like good friends, with no visible sign of the deeper connection to practice pervading their relationship.

The point being, sangha is something special, valuable, with real but somewhat fuzzy borders, and a strong commitment to a "mission".

Now, our community (the one Ray is talking about here) hasn't existed long enough for us to have come to agreement on just what the criteria for inclusion are (or, put another way, on exactly how we would phrase the mission, though I like what Ray says above), but whatever it is, we can use it as the foundation of our community. In my time in Berkeley I've seen that, in my estimation, the mission is strong and powerfully creates a center of gravity that pulls in people sufficiently aligned with it and pushes out people who are not, usually not by force but because they simply get pulled away by other interests; although they might care about the mission some, they don't care about it enough to make it a top priority in their lives. This to me makes it like a sangha, but rather than a community committed to enlightenment it's a community committed to long-term flourishing.

To me this suggests a couple things about how to build what Ray has called the village:

  • Keep the mission strong. The mission is the thing that holds the village together.
  • Leave the village open. The mission is both lighthouse and craggy shore, drawing some people into the metaphorical harbor of the village and keeping others out, because threading the currents to the harbor's mouth is just hard enough to keep out anyone who doesn't deeply care about the mission, but not so hard as to keep out anyone who is serious.
  • Fix up the village. Right now the village is like a shanty town built around a small fort. Things in the fort are okay, but it's a remote outpost far from home, requires frequent resupply from the outside, and the shanty town is better than living rough and more of a place than anything else near the fort, but that's about it. I believe most of the problems are not a consequence of keeping the mission strong or leaving the village open, but of not caring for the village.
Comment by gworley on alternative history: what if Bayes rule had never been discovered? · 2019-05-13T19:15:48.755Z · score: 3 (2 votes) · LW · GW

I'm tempted to speculate about a "harder" version of this question: what if we lived in a universe where Bayes' theorem not only hadn't been discovered but wasn't true? Like a universe with different physics of causality. But I digress.

I don't have a direct answer for you, but it might be constructive to reflect that Bayes' theorem is a particular mathematical statement of a pattern people understand and use implicitly, and it pops up all over the place because it is a view onto the mechanisms of causation. This suggests that even without Bayes' theorem being formally stated by anyone in any way, we'd still see the pattern pop up all over the place; it's just that no one would have identified it as a common pattern.
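For reference, the theorem in question is just the standard identity for conditional probabilities, with H a hypothesis and E the evidence:

$$P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E)}$$

Any implicit updating that weights hypotheses by how well they predict the evidence is instantiating this identity, whether or not anyone has named it.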

Comment by gworley on Disincentives for participating on LW/AF · 2019-05-13T18:13:47.773Z · score: 16 (5 votes) · LW · GW

An interesting pattern I see in the comments, and have picked up from other conversations, but that no one has called out, is that many people seem to prefer a style of communication that doesn't naturally fit "I sit alone and write up my thoughts clearly and then post them as a comment/post". My personal preference is very much to do exactly that: talking to me in person about a technical subject may be interesting, but it requires more of my time and energy than writing about it does. This suggests to me that the missing engagement comes from folks who don't prefer to write out their thoughts carefully, and the existing engagement is largely from people who do.

I have some kind of pet theory here about different internet cultures (I grew up with Usenet and listservs; younger/other folks grew up with chat and texting), but I think the cause of this difference in preferences is not especially relevant.

Comment by gworley on Disincentives for participating on LW/AF · 2019-05-10T23:38:48.339Z · score: 15 (6 votes) · LW · GW

I continue to be concerned with issues around downvotes and upvotes being used as "boos" and "yays" rather than saying something about the worthiness of a thing to be engaged with (I've been thinking about this for a while and just posted a comment about it over on the EA Forum). The result is that, to me, votes are very low in information value, which is unfortunate because they are the primary feedback mechanism on LW. I would love to see a move towards something that made voting costlier, although I realize that might impact engagement. There are probably other solutions that overcome these issues not by directly tweaking voting but by pulling sideways at it, coming up with something that works better for what I consider the important thing you want votes for: identifying the stuff worth engaging with.

Highlights from "Integral Spirituality"

2019-04-12T18:19:06.560Z · score: 21 (20 votes)

Parfit's Escape (Filk)

2019-03-29T02:31:42.981Z · score: 40 (15 votes)

[Old] Wayfinding series

2019-03-12T17:54:16.091Z · score: 9 (2 votes)

[Old] Mapmaking Series

2019-03-12T17:32:04.609Z · score: 9 (2 votes)

Is LessWrong a "classic style intellectual world"?

2019-02-26T21:33:37.736Z · score: 30 (7 votes)

Akrasia is confusion about what you want

2018-12-28T21:09:20.692Z · score: 18 (15 votes)

What self-help has helped you?

2018-12-20T03:31:52.497Z · score: 34 (11 votes)

Why should EA care about rationality (and vice-versa)?

2018-12-09T22:03:58.158Z · score: 16 (3 votes)

What precisely do we mean by AI alignment?

2018-12-09T02:23:28.809Z · score: 29 (8 votes)

Outline of Metarationality, or much less than you wanted to know about postrationality

2018-10-14T22:08:16.763Z · score: 19 (17 votes)

HLAI 2018 Talks

2018-09-17T18:13:19.421Z · score: 15 (5 votes)

HLAI 2018 Field Report

2018-08-29T00:11:26.106Z · score: 49 (20 votes)

A developmentally-situated approach to teaching normative behavior to AI

2018-08-17T18:44:53.515Z · score: 12 (5 votes)

Robustness to fundamental uncertainty in AGI alignment

2018-07-27T00:41:26.058Z · score: 7 (2 votes)

Solving the AI Race Finalists

2018-07-19T21:04:49.003Z · score: 27 (10 votes)

Look Under the Light Post

2018-07-16T22:19:03.435Z · score: 25 (11 votes)

RFC: Mental phenomena in AGI alignment

2018-07-05T20:52:00.267Z · score: 13 (4 votes)

Aligned AI May Depend on Moral Facts

2018-06-15T01:33:36.364Z · score: 9 (3 votes)

RFC: Meta-ethical uncertainty in AGI alignment

2018-06-08T20:56:26.527Z · score: 18 (5 votes)

The Incoherence of Honesty

2018-06-08T02:28:59.044Z · score: 22 (12 votes)

Safety in Machine Learning

2018-05-29T18:54:26.596Z · score: 17 (4 votes)

Epistemic Circularity

2018-05-23T21:00:51.822Z · score: 5 (1 votes)

RFC: Philosophical Conservatism in AI Alignment Research

2018-05-15T03:29:02.194Z · score: 27 (9 votes)

Thoughts on "AI safety via debate"

2018-05-10T00:44:09.335Z · score: 33 (7 votes)

The Leading and Trailing Edges of Development

2018-04-26T18:02:23.681Z · score: 24 (7 votes)

Suffering and Intractable Pain

2018-04-03T01:05:30.556Z · score: 13 (3 votes)

Evaluating Existing Approaches to AGI Alignment

2018-03-27T19:57:39.207Z · score: 22 (5 votes)

Evaluating Existing Approaches to AGI Alignment

2018-03-27T19:55:57.000Z · score: 0 (0 votes)

Idea: Open Access AI Safety Journal

2018-03-23T18:27:01.166Z · score: 64 (20 votes)

Computational Complexity of P-Zombies

2018-03-21T00:51:31.103Z · score: 3 (4 votes)

Avoiding AI Races Through Self-Regulation

2018-03-12T20:53:45.465Z · score: 6 (3 votes)

How safe "safe" AI development?

2018-02-28T23:21:50.307Z · score: 27 (10 votes)

Self-regulation of safety in AI research

2018-02-25T23:17:44.720Z · score: 33 (10 votes)

The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation

2018-02-23T21:42:20.604Z · score: 15 (4 votes)

AI Alignment and Phenomenal Consciousness

2018-02-23T01:21:36.808Z · score: 10 (2 votes)

Formally Stating the AI Alignment Problem

2018-02-19T19:07:14.000Z · score: 0 (0 votes)

Formally Stating the AI Alignment Problem

2018-02-19T19:06:04.086Z · score: 14 (6 votes)

Bayes Rule Applied

2018-02-16T18:30:16.470Z · score: 12 (3 votes)

Introduction to Noematology

2018-02-05T23:28:32.151Z · score: 11 (4 votes)

Form and Feedback in Phenomenology

2018-01-24T19:42:30.556Z · score: 29 (6 votes)

Book Review: Why Buddhism Is True

2018-01-15T20:54:37.431Z · score: 23 (9 votes)

Methods of Phenomenology

2017-12-30T18:42:03.513Z · score: 6 (2 votes)

The World as Phenomena

2017-12-06T02:35:20.681Z · score: 0 (2 votes)

Towards an Axiological Approach to AI Alignment

2017-11-15T02:07:47.607Z · score: 11 (6 votes)

Towards an Axiology Approach to AI Alignement

2017-11-15T02:04:14.000Z · score: 0 (0 votes)

Doxa, Episteme, and Gnosis

2017-10-31T23:23:38.094Z · score: 4 (4 votes)

React and Respond

2017-10-24T01:48:16.405Z · score: 5 (5 votes)

Regress Thyself to the Mean

2017-10-19T22:42:08.925Z · score: 9 (4 votes)