Posts

[Link] Sarah Constantin: "Why I am Not An AI Doomer" 2023-04-12T01:52:48.784Z
Is There a Valley of Bad Civilizational Adequacy? 2022-03-11T19:49:49.049Z

Comments

Comment by lbThingrb on Anxiety vs. Depression · 2024-03-17T18:49:46.151Z · LW · GW

What’s more interesting is that I just switched medications from one that successfully managed the depression but not the anxiety to one that successfully manages the anxiety but not the depression

May I ask which medications?

Comment by lbThingrb on re: Yudkowsky on biological materials · 2023-12-28T20:00:47.059Z · LW · GW

For macroscopic rotation:

  • Blood vessels cannot rotate continuously, so nutrients cannot be provided to the rotating element to grow it.
  • Without smooth surfaces to roll on, rolling is not better than walking.

There are other uses for macroscopic rotation besides rolling on wheels, e.g. propellers, gears, flywheels, drills, and turbines. Also, how to provide nutrients to detached components, or build smooth surfaces to roll on so your wheels will be useful, seem like problems that intelligence is better at solving than evolution.

Comment by lbThingrb on The other side of the tidal wave · 2023-11-03T17:34:54.621Z · LW · GW

I'm middle-aged now, and a pattern I've noticed as I get older is that I keep having to adapt my sense of what is valuable, because desirable things that used to be scarce for me keep becoming abundant. Some of this is just growing up, e.g. when I was a kid my candy consumption was regulated by my parents, but then I had to learn to regulate it myself. I think humans are pretty well-adapted to that sort of value drift over the life course. But then there's the value drift due to rapid technological change, which I think is more disorienting. E.g. I invested a lot of my youth into learning to use software which is now obsolete. It feels like my youthful enthusiasm for learning new software skills, and comparative lack thereof as I get older, was an adaptation to a world where valuable skills learned in childhood could be expected to mostly remain valuable throughout life. It felt like a bit of a rug-pull how much that turned out not to be the case w.r.t. software.

But the rise of generative AI has really accelerated this trend, and I'm starting to feel adrift and rudderless. One of the biggest changes from scarcity to abundance in my life was that of interesting information, enabled by the internet. I adapted to it by re-centering my values around learning skills and creating things. As I contemplate what AI can already do, and extrapolate that into the near future, I can feel my motivation to learn and create flagging.

If, and to the extent that, we get a "good" singularity, I expect that it will have been because the alignment problem turned out to be not that hard, the sort of thing we could muddle through improvisationally. But that sort of singularity seems unlikely to preserve something as delicately balanced as the way that (relatively well-off) humans get a sense of meaning and purpose from the scarcity of desirable things. I would still choose a world that is essentially a grand theme park full of delightful experience machines over the world as it is now, with all its sorrows, and certainly I would choose theme-park world over extinction. But still ... OP beautifully crystallizes the apprehension I feel about even the more optimistic end of the spectrum of possible futures for humanity that are coming into view.

Comment by lbThingrb on AI#28: Watching and Waiting · 2023-09-07T23:35:05.868Z · LW · GW

This all does seem like work better done than not done, who knows, usefulness could ensue in various ways and downsides seem relatively small.

I disagree about item #1, automating formal verification. From the paper:

9.1 Automate formal verification:

As described above, formal verification and automatic theorem proving more generally needs to be fully automated. The awe-inspiring potential of LLMs and other modern AI tools to help with this should be fully realized.

Training LLMs to do formal verification seems dangerous. In fact, I think I would extend that to any method of automating formal verification that would be competitive with human experts. Even if it didn't use ML at all, the publication of a superhuman theorem-proving AI, or even just the public knowledge that such a thing existed, seems likely to lead to the development of more general AIs with similar capabilities within a few years. Without a realistic plan for how to use such a system to solve the hard parts of AI alignment, I predict that it would just shorten the timeline to unaligned superintelligence, by enabling systems that are better at sustaining long chains of reasoning, which is one of the major advantages humans still have over AIs. I worry that vague talk of using formal verification for AI safety is in effect safety-washing a dangerous capabilities research program.

All that said, a superhuman formal-theorem-proving assistant would be a super-cool toy, so if anyone has a more detailed argument for why it would actually be a net win for safety in expectation, I'd be interested to hear it.

Comment by lbThingrb on Does decidability of a theory imply completeness of the theory? · 2023-07-30T17:23:20.589Z · LW · GW

Correct. Each iteration of the halting problem for oracle Turing machines (called the "Turing jump") takes you to a new level of relative undecidability, so in particular true arithmetic is strictly harder than the halting problem.

Comment by lbThingrb on Does decidability of a theory imply completeness of the theory? · 2023-07-30T17:03:39.165Z · LW · GW

The true first-order theory of the standard model of arithmetic has Turing degree 0^(ω), the ω-th iterate of the Turing jump. That is to say, with an oracle for true arithmetic, you could decide the halting problem, but also the halting problem for oracle Turing machines with a halting-problem-for-regular-Turing-machines oracle, and the halting problem for oracle Turing machines with a halting oracle for those oracle Turing machines, and so on for any finite number of iterations. Conversely, if you had a single oracle that solves the halting problem for all of these finitely-iterated-halting-problem-oracle Turing machines, you could decide true arithmetic.
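In symbols, writing 0^(n) for the n-th Turing jump of the empty set, the above amounts to (a compact restatement, nothing beyond what is said in words):

```latex
% Th(N) denotes true first-order arithmetic; \oplus is the effective join.
\[
  \mathrm{Th}(\mathbb{N}) \;\equiv_T\; 0^{(\omega)} \;=\; \bigoplus_{n \in \mathbb{N}} 0^{(n)},
  \qquad
  0^{(n)} \;<_T\; \mathrm{Th}(\mathbb{N}) \quad \text{for every finite } n.
\]
```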

Comment by lbThingrb on Distinguishing test from training · 2022-12-06T02:49:42.562Z · LW · GW

That HN comment you linked to is almost 10 years old, near the bottom of a thread on an unrelated story, and while it supports your point, I don't notice what other qualities it has that would make it especially memorable, so I'm kind of amazed that you surfaced it at an appropriate moment from such an obscure place and I'm curious how that happened.

Comment by lbThingrb on It’s Probably Not Lithium · 2022-07-06T21:51:37.118Z · LW · GW

All right, here's my crack at steelmanning the Sin of Gluttony theory of the obesity epidemic. Epistemic status: armchair speculation.

We want to explain how it could be that in the present, abundant hyperpalatable food is making us obese, but in the past that was not so to nearly the same extent, even though conditions of abundant hyperpalatable food were not unheard of, especially among the upper classes. Perhaps the difference is that, today, abundant hyperpalatable food is available to a greater extent than ever before to people in poor health.

In the past, food cultivation and preparation were much more labor intensive than in the present, so you either had to pay a much higher price for your hyperpalatable food, or put in the labor yourself. Furthermore, there were fewer opportunities to make the necessary income from sedentary work, and there wasn't much of a welfare state. Thus, if you were in poor health, you were much more likely in the past than today to be selected out of the class of people who had access to abundant hyperpalatable food. Obesity is known to be a downstream effect of various other health problems, but only if you are capable of consuming enough calories, and have access to food that you want to overeat.

Furthermore, it is plausible that some people, due to genetics or whatever, have a tendency to be in good health when they lack access to abundant hyperpalatable food, and to become obese and thus unhealthy when they have access to abundant hyperpalatable food. Thus there is a feedback loop where being healthier makes you more productive, which makes hyperpalatable food more available to you, which makes you less healthy, which makes you less productive, which makes hyperpalatable food less available to you. Plausibly, in the past, this process tended towards an equilibrium at a much lower level of obesity than it does today, because of today's greater availability of hyperpalatable food to people in poor health.

It is also plausible that our technological civilization has simply made considerable progress in the development of ever more potent gustatory superstimuli over the past century. This is a complex optimization problem, and it's not clear why we should have come close to a ceiling on it long before the present, or why just contemplating the subjective palatability of past versus present-day food would give us conscious awareness of why we are more prone to overeating the latter.

Both of these proposed causes are consistent with pre-obesity-epidemic overfeeding studies of metabolically healthy individuals failing to cause large, long-term weight gain: They suggest that the obesity epidemic is concentrated among metabolically unhealthy people who in the past simply couldn't afford to get fat, and that present-day food is importantly different.

Comment by lbThingrb on Ukraine Post #2: Options · 2022-03-11T20:01:33.994Z · LW · GW

This question is tangential to the main content of your post, so I have written it up in a separate post of my own, but I notice I am confused that you and many other rationalists are balls to the wall for cheap and abundant clean energy and other pro-growth, tech-promoting public policies, while also being alarmist about AI X-risk, and I am curious if you see any contradiction there:

Is There a Valley of Bad Civilizational Adequacy?

Comment by lbThingrb on Jimrandomh's Shortform · 2020-04-15T04:12:26.980Z · LW · GW

This Feb. 20th Twitter thread from Trevor Bedford argues against the lab-escape scenario. Do read the whole thing, but I'd say that the key points not addressed in parent comment are:

Data point #1 (virus group): #SARSCoV2 is an outgrowth of circulating diversity of SARS-like viruses in bats. A zoonosis is expected to be a random draw from this diversity. A lab escape is highly likely to be a common lab strain, either exactly 2002 SARS or WIV1.

But apparently SARSCoV2 isn't that. (See pic.)

Data point #2 (receptor binding domain): This point is rather technical, please see preprint by @K_G_Andersen, @arambaut, et al at http://virological.org/t/the-proximal-origin-of-sars-cov-2/398… for full details.
But, briefly, #SARSCoV2 has 6 mutations to its receptor binding domain that make it good at binding to ACE2 receptors from humans, non-human primates, ferrets, pigs, cats, pangolins (and others), but poor at binding to bat ACE2 receptors.
This pattern of mutation is most consistent with evolution in an animal intermediate, rather than lab escape. Additionally, the presence of these same 6 mutations in the pangolin virus argues strongly for an animal origin: https://biorxiv.org/content/10.1101/2020.02.13.945485v1…
...
Data point #3 (market cases): Many early infections in Wuhan were associated with the Huanan Seafood Market. A zoonosis fits with the presence of early cases in a large animal market selling diverse mammals. A lab escape is difficult to square with early market cases.
...
Data point #4 (environmental samples): 33 out of 585 environmental samples taken from the Huanan seafood market showed as #SARSCoV2 positive. 31 of these were collected from the western zone of the market, where wildlife booths are concentrated. http://xinhuanet.com/english/2020-01/27/c_138735677.htm…
Environmental samples could in general derive from human infections, but I don't see how you'd get this clustering within the market if these were human derived.

One scenario I recall seeing somewhere that would reconcile lab-escape with data points 3 & 4 above is that some low-level WIV employee or contractor might have sold some purloined lab animals to the wet market. No idea how plausible that is.

Comment by lbThingrb on Topological Fixed Point Exercises · 2018-11-24T04:50:09.985Z · LW · GW

Generalized to n dimensions in my reply to Adele Lopez's solution to #9 (without any unnecessary calculus :)

Comment by lbThingrb on Topological Fixed Point Exercises · 2018-11-24T03:36:38.063Z · LW · GW

Thanks! I find this approach more intuitive than the proof of Sperner's lemma that I found in Wikipedia. Along with nshepperd's comment, it also inspired me to work out an interesting extension that requires only minor modifications to your proof:

d-spheres are orientable manifolds, hence so is a decomposition of a d-sphere into a complex K of d-simplices. So we may arbitrarily choose one of the two possible orientations for K (e.g. by choosing a particular simplex P in K, ordering its vertices from 1 to d + 1, and declaring it to be the prototypical positively oriented simplex; when d = 2, P could be a triangle with the vertices going around counterclockwise when you count from 1 to 3; when d = 3, P could be a tetrahedron where, if you position your right hand in its center and point your thumb at the 1-vertex, your fingers curl around in the same direction in which you count the remaining vertices from 2 to 4). Then any ordering of the vertices of any d-simplex in K may be said to have positive or negative orientation (chirality). (E.g. it would be positive when there's an orientation-preserving map (e.g. a rotation) sending each of its vertices to the correspondingly numbered vertices of P.)
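To make the notion of orientation-as-chirality concrete, here is a minimal sketch (in Python, with names of my own invention, purely for illustration) that computes the orientation of a vertex ordering relative to the reference ordering of P as the parity of the permutation relating the two orderings:

```python
def permutation_parity(perm):
    """Return +1 if perm is an even permutation of range(len(perm)), -1 if odd."""
    perm = list(perm)
    parity = 1
    for i in range(len(perm)):
        while perm[i] != i:
            j = perm[i]
            perm[i], perm[j] = perm[j], perm[i]  # cycle-sort; each swap flips parity
            parity = -parity
    return parity

def orientation(simplex, reference):
    """Chirality of the vertex ordering `simplex` relative to `reference`.

    Both arguments are sequences of the same d+1 vertex labels; +1 means the
    orderings agree (positively oriented), -1 means they are mirror images.
    """
    index = {v: i for i, v in enumerate(reference)}
    return permutation_parity([index[v] for v in simplex])
```

For example, with the counterclockwise triangle (1, 2, 3) as reference, orientation((1, 3, 2), (1, 2, 3)) returns -1, i.e. the mirror orientation.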

So here's my extension of parent comment's theorem: Any coloring of the vertices of K with the colors {1, ..., d + 1} will contain an equal number of positively and negatively oriented chromatic d-simplices—that is, the reason the number of chromatic d-simplices in K must be even is that each one can be paired off with one of the opposite (mirror) orientation. (Does this theorem have a name? If so I couldn't find it.)

Following parent comment, the proof is by induction on the dimension d. When d = 1, K is just a cycle graph with vertices colored 1 or 2. As we go around clockwise (or counterclockwise), we must traverse an equal number of 1→2 edges and 2→1 edges (i.e. oppositely oriented 1-simplices), by the time we return to our starting point.
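A quick brute-force check of that base case, under the assumption that the cycle is traversed in a fixed direction and edges are read in traversal order (the helper name is made up):

```python
import random

def count_directed_chromatic_edges(coloring):
    """On a 2-colored cycle, count 1->2 edges and 2->1 edges in traversal order."""
    n = len(coloring)
    one_two = sum(1 for k in range(n)
                  if coloring[k] == 1 and coloring[(k + 1) % n] == 2)
    two_one = sum(1 for k in range(n)
                  if coloring[k] == 2 and coloring[(k + 1) % n] == 1)
    return one_two, two_one

random.seed(0)
for _ in range(1000):
    cycle = [random.choice([1, 2]) for _ in range(random.randint(3, 30))]
    forward, backward = count_directed_chromatic_edges(cycle)
    assert forward == backward  # oppositely oriented 1-simplices come in pairs
```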

Now let d > 1, and assume the theorem is true in the (d-1)-dimensional case. As in parent comment, we may choose any vertex v of K, and then the faces opposite to v in each d-simplex in K that contains v will, together, form a (d-1)-dimensional subcomplex K′ of K that is homeomorphic (topologically equivalent) to a (d-1)-sphere.

Suppose v has color i. We will show that changing v's color to some j ≠ i will add or remove the same number of positively oriented chromatic d-simplices as negatively oriented ones: Forget, for the moment, the distinction between colors i and j—say any i- or j-colored vertex of K′ has color "i-or-j." Then K′ is now d-colored, so, by inductive hypothesis, the chromatic (d-1)-simplices of K′ must occur in pairs of opposite orientation (if any exist—if none exist, v can't be part of any chromatic d-simplex regardless of its color). Consider such a pair, call them F₁ and F₂.

Now cease pretending that i and j are a single color. Since F₁ was chromatic in K′, it must have had an i-or-j–colored vertex. Suppose, WLOG, that that vertex was actually j-colored. Then, together with i-colored v, F₁ spans a chromatic d-simplex of K, call it S₁, which we may assume WLOG to be positively oriented. Once we change the color of v from i to j, however, S₁ will have two j-colored vertices, and so will no longer be chromatic. To see that balance is maintained, consider what happens with F₂:

If F₂'s i-or-j–colored vertex was, like F₁'s, actually j-colored, then the d-simplex spanned by F₂ and v, call it S₂, was chromatic and negatively oriented (because F₂ had opposite orientation to F₁ in K′), and thus S₂ also ceased to be chromatic when we changed v's color from i to j, balancing S₁'s loss of chromatic status. Otherwise, F₂'s i-or-j–colored vertex must have been i-colored, in which case S₂ wasn't chromatic when v was also i-colored, but changing v's color to j turned S₂ into a new chromatic d-simplex. But what is S₂'s orientation? Well, it was negative under the assumption that S₂'s i-or-j–colored vertex was j-colored and v was i-colored, and swapping the labels of a pair of vertices in an oriented simplex reverses its orientation, so, under the present assumption, S₂'s orientation must be positive! Thus the loss of S₁ as a positively oriented chromatic d-simplex is balanced by the addition of S₂ as a new positively oriented chromatic d-simplex.

If all of K's vertices are the same color, it has the same number (zero) of positively and negatively oriented chromatic d-simplices, and since this balance is preserved when we change the colors of K's vertices one at a time, it holds no matter how we (d+1)-color K. ☐

We can relate this theorem back to Sperner's lemma using the same trick as parent comment: Suppose we are given a triangulation K of a regular d-simplex S into smaller d-simplices, and a (d+1)-coloring of K's vertices that assigns a unique color to each vertex v of S, and doesn't use that color for any of K's vertices lying on the face of S opposite to v. We form a larger simplicial complex L containing K by adding d + 1 new vertices as follows: For i = 1, ..., d + 1, place a new i-colored vertex exterior to S, some distance from the center of S along the ray that goes through the i-colored vertex of S. Connect this new vertex to each vertex of K lying in the face of S opposite from the (i+1)-colored (or 1-colored, when i = d + 1) vertex of S. (Note that none of the d-simplices thereby created will be chromatic, because they won't have an (i+1)-colored vertex.) Then connect all of the new vertices to each other.

Having thus defined L, we can embed it in a d-sphere, of which it will constitute a triangulation, because the new vertices will form a d-simplex T whose "interior" is the complement of L in the sphere. Thus we can apply our above theorem to conclude that L has equally many positively and negatively oriented chromatic d-simplices. By construction, none of L's new vertices are included in any chromatic d-simplex other than T, so K must contain an equal number (possibly zero) of positively and negatively oriented chromatic d-simplices, plus one more, of opposite orientation to T. And what is the orientation of T? I claim that it is opposite to that of S: Consider T by itself, embedded in the sphere. T's boundary and exterior (the interior of L) then constitute another chromatic d-simplex, call it U, which is essentially just a magnification of S, with correspondingly colored vertices, and so shares S's orientation. Applying our theorem again, we see that T and U must have opposite orientations*, so we conclude that K must contain exactly one more chromatic d-simplex of the same orientation as S than of the opposite orientation. (As proved in nshepperd's comment for the case d = 2.)

*The observation that, on the surface of a sphere, the interior and exterior of a trichromatic triangle have opposite orientations is what sent me down this rabbit hole in the first place. :)

Comment by lbThingrb on Embedded World-Models · 2018-11-05T01:11:32.101Z · LW · GW

Thanks, this is a very clear framework for understanding your objection. Here's the first counterargument that comes to mind: Minimax search is a theoretically optimal algorithm for playing chess, but is too computationally costly to implement in practice. One could therefore argue that all that matters is computationally feasible heuristics, and modeling an ideal chess player as executing a minimax search adds nothing to our knowledge of chess. OTOH, doing a minimax search of the game tree for some bounded number of moves, then applying a simple board-evaluation heuristic at the leaf nodes, is a pretty decent algorithm in practice.
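For concreteness, a generic sketch of that "bounded search plus leaf heuristic" idea (the function and parameter names are mine, not taken from any particular chess engine):

```python
def minimax(state, depth, maximizing, successors, evaluate, is_terminal):
    """Depth-limited minimax: search `depth` plies of the game tree, then
    apply the board-evaluation heuristic `evaluate` at the leaf nodes.

    `successors(state)` yields the positions reachable in one move;
    `evaluate(state)` scores a position from the maximizing player's
    point of view; `is_terminal(state)` detects finished games.
    """
    if depth == 0 or is_terminal(state):
        return evaluate(state)
    scores = [minimax(child, depth - 1, not maximizing,
                      successors, evaluate, is_terminal)
              for child in successors(state)]
    if not scores:  # no legal moves but not flagged terminal; fall back on the heuristic
        return evaluate(state)
    return max(scores) if maximizing else min(scores)
```

Exact minimax is just the unbounded-depth limit of this; the practical algorithm's strength comes entirely from how deep you can afford to search and how good the leaf heuristic is.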

Furthermore, it seems like there's a pattern where, the more general the algorithmic problem you want to solve is, the more your solution is compelled to resemble some sort of brute-force search. There are all kinds of narrow abilities we'd like an AGI to have that depend on the detailed structure of the physical world, but it's not obvious that any such structure, beyond hypotheses about what is feasibly computable, could be usefully exploited to solve the kinds of problem laid out in this sequence. So it may well be that the best approach turns out to involve some sort of bounded search over simpler strategies, plus lots and lots of compute.

Comment by lbThingrb on Realism about rationality · 2018-10-04T21:12:44.094Z · LW · GW

I didn't mean to suggest that the possibility of hypercomputers should be taken seriously as a physical hypothesis, or at least, any more seriously than time machines, perpetual motion machines, faster-than-light travel, etc. And I think it's similarly irrelevant to the study of intelligence, machine or human. But in my thought experiment, the way I imagined it working was that, whenever the device's universal-Turing-machine emulator halted, you could then examine its internal state as thoroughly as you liked, to make sure everything was consistent with the hypothesis that it worked as specified (and the non-halting case could be ascertained by the presence of pixie dust 🙂). But since its memory contents upon halting could be arbitrarily large, in practice you wouldn't be able to examine it fully even for individual computations of sufficient complexity. Still, if you did enough consistency checks on enough different kinds of computations, and the cleverest scientists couldn't come up with a test that the machine didn't pass, I think believing that the machine was a true halting-problem oracle would be empirically justified.

It's true that a black box oracle could output a nonstandard "counterfeit" halting function which claimed that some actually non-halting TMs do halt, only for TMs that can't be proved to halt within ZFC or any other plausible axiomatic foundation humans ever come up with, in which case we would never know that it was lying to us. It would be trickier for the device I described to pull off such a deception, because it would have to actually halt and show us its output in such cases. For example, if it claimed that some actually non-halting TM M halted, we could feed it a program that emulated M and output the number of steps M took to halt. That program would also have to halt, and output some specific number n. In principle, we could then try emulating M for n steps on a regular computer, observe that M hadn't reached a halting state, and conclude that the device was lying to us. If n were large enough, that wouldn't be feasible, but it's a decisive test that a normal computer could execute in principle. I suppose my magical device could instead do something like leave an infinite output string in memory, that a normal computer would never know was infinite, because it could only ever examine finitely much of it. But finite resource bounds already prevent us from completely ruling out far-fetched hypotheses about even normal computers. We'll never be able to test, e.g., an arbitrary-precision integer comparison function on all inputs that could feasibly be written down. Can we be sure it always returns a Boolean value, and never returns the Warner Brothers dancing frog?
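A sketch of that decisive test, with an entirely hypothetical oracle interface (none of these functions exist; the point is just to make the procedure explicit):

```python
def check_halting_claim(oracle_halts, oracle_run, emulate_for, machine):
    """Test a purported halting oracle on one machine M (all interfaces hypothetical).

    oracle_halts(program) -> bool: the device's halt/no-halt verdict.
    oracle_run(program) -> value:  the device's claimed output of a halting program.
    emulate_for(program, steps) -> bool: run `program` on an ordinary computer
        for `steps` steps and report whether it halted within that budget.
    """
    if not oracle_halts(machine):
        # A false "does not halt" verdict can't be caught by this particular test.
        return "no testable claim"

    # Wrapper program: run `machine` to completion and output its step count.
    step_counter = ("run-and-count-steps", machine)
    n = oracle_run(step_counter)  # the device must commit to a concrete number n

    if emulate_for(machine, n):
        return "consistent"        # M really did halt within n steps
    return "device caught lying"   # M had not halted after n steps after all
```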

Actually, hypothesizing that my device "computed" a nonstandard version of the halting function would already be sort of self-defeating from a standpoint of skepticism about hypercomputation, because all nonstandard models of Peano arithmetic are known to be uncomputable. A better skeptical hypothesis would be that the device passed off some actually halting TMs as non-halting, but only in cases where the shortest proof that any of those TMs would have halted eventually was too long for humans to have discovered yet. I don't know enough about Solomonoff induction to say whether it would unduly privilege such hypotheses over the hypothesis that the device was a true hypercomputer (if it could even entertain such a hypothesis). Intuitively, though, it seems to me that, if you went long enough without finding proof that the device wasn't a true hypercomputer, continuing to insist that such proof would be found at some future time would start to sound like a God-of-the-gaps argument. I think this reasoning is valid even in a hypothetical universe in which human brains couldn't do anything Turing machines can't do, but other physical systems could. I admit that's a nontrivial, contestable conclusion. I'm just going on intuition here.

Comment by lbThingrb on Realism about rationality · 2018-10-03T01:43:17.370Z · LW · GW

This can’t be right ... Turing machines are assumed to be able to operate for unbounded time, using unbounded memory, without breaking down or making errors. Even finite automata can have any number of states and operate on inputs of unbounded size. By your logic, human minds shouldn’t be modeling physical systems using such automata, since they exceed the capabilities of our brains.

It's not that hard to imagine hypothetical experimental evidence that would make it reasonable to believe that hypercomputers could exist. For example, suppose someone demonstrated a physical system that somehow emulated a universal Turing machine with infinite tape, using only finite matter and energy, and that this system could somehow run the emulation at an accelerating rate, such that it computed the first n steps in 1 − 2^(-n) seconds. (Let's just say that it resets to its initial state in a poof of pixie dust if the TM doesn't halt after one second.)
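Spelling out the arithmetic of that accelerating schedule (assuming the k-th emulated step takes 2^(-k) seconds, which is what makes the one-second reset work):

```latex
\[
  t(n) \;=\; \sum_{k=1}^{n} 2^{-k} \;=\; 1 - 2^{-n} \;<\; 1,
  \qquad
  \lim_{n \to \infty} t(n) \;=\; 1 \text{ second},
\]
```

so even an emulation that never halts uses up only the one-second budget before the pixie-dust reset.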

You could try to reproduce this experiment and test it on various programs whose long-term behavior is predictable, but you could only test it on a finite (to say nothing of computable) set of such inputs. Still, if no one could come up with a test that stumped it, it would be reasonable to conclude that it worked as advertised. (Of course, some alternative explanation would be more plausible at first, given that the device as described would contradict well established physical principles, but eventually the weight of evidence would compel one to rewrite physics instead.)

One could hypothesize that the device only behaved as advertised on inputs for which human brains have the resources to verify the correctness of its answers, but did something else on other inputs, but you could just as well say that about a normal computer. There’d be no reason to believe such an alternative model, unless it was somehow more parsimonious. I don’t know any reason to think that theories that don’t posit uncomputable behavior can always be found which are at least as simple as a given theory that does.

Having said all that, I’m not sure any of it supports either side of the argument over whether there’s an ideal mathematical model of general intelligence, or whether there’s some sense in which intelligence is more fundamental than physics. I will say that I don’t think the Church-Turing thesis is some sort of metaphysical necessity baked into the concept of rationality. I’d characterize it as an empirical claim about (1) human intuition about what constitutes an algorithm, and (2) contingent limitations imposed on machines by the laws of physics.

Comment by lbThingrb on Who Wants The Job? · 2018-07-24T00:08:38.032Z · LW · GW
Apologies for the stark terms if it felt judgmental or degrading!

No worries! I mostly just wrote that comment for the lulz. And the rest was mostly so people wouldn't think I was using humor to obliquely endorse social Darwinism.

Comment by lbThingrb on Who Wants The Job? · 2018-07-22T18:59:37.974Z · LW · GW

I've never heard of anyone doing this directly. Has anyone else? If not, there's probably a reason. I suppose occupational certification programs serve a similar filtering function. Anyway, your suggestion might be more palatable if it were in the form of a deposit refundable if and only if the applicant shows up/answers the phone for scheduled interviews. You would also need a trusted intermediary to hold the deposits, or else we would see a flood of fake job-interview-deposit scams. And even if you had such a trusted intermediary, I suspect that, in a world in which job-interview deposits were the norm, scammers would find all sorts of creative ways to impersonate that intermediary convincingly enough to fool a lot of desperate, marginally employable marks.

Also, the deposit would have to be quite small for low-wage entry-level jobs whose applicant pool would include a lot of people who can't reliably scrounge up 20 bucks, and even then, some would be hindered by limited/expensive access to basic financial services like checking accounts and electronic payments. Maybe those are mostly people you're trying to filter out? Then again, who is the ideal applicant, from the minimum-wage employer's perspective? Someone reliable and competent, of course, but also someone who really needs the money, and so will be highly motivated. So, someone who wouldn't normally be desperately poor, but happens to be at the moment. Maybe access to those applicants is worth enough to some employers that they're willing to pay the price of similar-looking applicants flaking on their interviews and such.

Comment by lbThingrb on Who Wants The Job? · 2018-07-22T17:58:14.391Z · LW · GW
It's really unclear, as a society, how to get them into a position where they can provide as much value as the resources they take.

Harvest their reusable organs, and melt down what remains to make a nutrient paste suitable for livestock and the working poor?

(Kidding! But that sort of thing is always where my mind wanders when people put the question in such stark terms, perhaps because I myself am a chronically unemployed "taker" (mental illness). Anyway, one of the long-term goals of AI and automation research, as I understand it, is to turn everyone into takers ("full unemployment"). Meanwhile, one of the long-term goals of transhumanism is to be able to cure all of the various disorders and disabilities that render a large fraction of currently unemployed people unemployable. Until at least one of those goals is achieved, we will continue to have an unemployable underclass. I guess progress towards the transhumanist goal, or better public policy, could make that underclass smaller, but right now more people seem to be worrying about progress on the AI/automation front making it larger.)

Comment by lbThingrb on Who Wants The Job? · 2018-07-22T17:24:14.523Z · LW · GW

I'm curious if anyone has recollections of what it was like trying to hire for similar positions in recent years, when the unemployment rate was much higher. That is to say, how much of this is base-rate human flakiness, and how much is attributable to the tight labor market having already hoovered up almost all the well-functioning adults?

Comment by lbThingrb on Are ethical asymmetries from property rights? · 2018-07-03T02:21:53.845Z · LW · GW
I’m not actually sure what to make of that—should we write off some moral intuitions as clearly evolved for not-actually-moral reasons ... ?

All moral intuitions evolved for not-actually-moral reasons, because evolution is an amoral process. That is not a reason to write any of them off, though. Or perhaps I should say, it is only a reason to "write them off" to the extent that it feels like it is, and the fact that it sometimes does, to some people, is as fine an example as any of the inescapable irrationality of moral intuitions.

If we have the moral intuition, does that make the thing of moral value, regardless of its origins?

Why would one ever regard anything as having moral value, except as a consequence of some moral intuition? And if one has a second moral intuition to the effect that the first moral intuition is invalid on account of its "origins," what is one to do, except reflect on the matter, and heed whichever of these conflicting intuitions is stronger?

This actually gets at a deeper issue, which I might as well lay out now, having to do with my reasons for rejecting the idea that utilitarianism, consequentialism, or really any abstract principle or system of ethics, can be correct in a normative sense. (I think I would be called a moral noncognitivist, but my knowledge of the relevant literature is tissue-thin.) On a purely descriptive level, I agree with Kaj's take on the "parliamentary model" of ethics: I feel (as I assume most humans do) a lot of distinct and often conflicting sentiments about what is right and wrong, good and bad, just and unjust. (I could say the same about non-ethical value judgements, e.g. beautiful vs. ugly, yummy vs. yucky, etc.) I also have sentiments about what I want and don't want that I regard as being purely motivated by self-interest. It's not always easy, of course, to mark the boundary between selfishly and morally motivated sentiments, but to the extent that I can, I try to disregard the former when deciding what I endorse as morally correct, even though selfishness sometimes (often, tbh) prevails over morality in guiding my actions.

On a prescriptive level, on the other hand, I think it would be incoherent for me to endorse any abstract ethical principle, except as a rule of thumb which is liable to admit any number of exceptions, because, in my experience, trying to deduce ethical judgements from first principles invariably leads to conclusions that feel wrong to me. And whereas I can honestly say something like "Many of my current beliefs are incorrect, I just don't know which ones," because I believe that there is an objective physical reality to which my descriptive beliefs could be compared, I don't think there is any analogous objective moral reality against which my moral opinions could be held up and judged correct or incorrect. The best I can say is that, based on past experience, I anticipate that my future self is likely to regard my present self as morally misguided about some things.

Obviously, this isn't much help if one is looking to encode human preferences in a way that would be legible to AI systems. I do think it's useful, for that purpose, to study what moral intuitions humans tend to have, and how individuals resolve internal conflicts between them. So in that sense, it is useful to notice patterns like the resemblance between our intuitions about property rights and the act-omission distinction, and try to figure out why we think that way.

Comment by lbThingrb on Are ethical asymmetries from property rights? · 2018-07-02T23:58:51.782Z · LW · GW
• You are not required to create a happy person, but you are definitely not allowed to create a miserable one

Who's going around enforcing this rule? There's certainly a stigma attached to people having children when those children will predictably be unhappy, but most people aren't willing to resort to, e.g., nonconsensual sterilization to enforce it, and AFAIK we haven't passed laws to the effect that people can be convicted, under penalty of fine or imprisonment, of having children despite knowing that those children would be at high risk of inheriting a severe genetic disorder, for example. Maybe this is just because it's hard to predict who will have kids, when they will have them, and how happy those kids will be, thereby making enforcement efforts unreasonably costly and invasive? I don't know, just commenting because this supposed norm struck me as much weaker than the other ones you name. Very interesting post overall though, this isn't meant as criticism.

Comment by lbThingrb on Oops on Commodity Prices · 2018-06-11T04:51:46.644Z · LW · GW
A shy person cannot learn, an impatient person cannot teach

The translation you linked actually gives "A person prone to being ashamed cannot learn," which is even more on the nose. Does anyone have any advice on how to deal with this issue? I have a pretty severe case of it, especially because I tend to take a lot (a lot) longer than other people to do pretty much every kind of work I've ever tried, independently of how much intelligence I needed to apply to it. Aside from seeking medical advice for that problem (which hasn't helped much), the obvious thing to do was try to exploit my comparative advantage in intelligence, so at least I'd be focusing on the most valuable kind of work I was capable of. Trouble is, when you do smart-people stuff, you soon find yourself surrounded by, and often in competition with, other smart people, and being too slow to keep up looks an awful lot like being too dumb or lazy (except that people get confused to the extent that they notice I'm not actually dumb or lazy). It's hard enough trying to get over the fear of "learning in public" without having to explain that, due to an ill-defined, poorly understood disability that seems to afflict hardly anyone but me, I'm going to continue being surprisingly unproductive no matter how patient anyone who tries to teach me is willing to be.

I actually tried, for several years, to be less outspoken and convey less confidence in my written voice.  My impression is that the attempt didn’t work for me, and caused me some emotional and intellectual damage in the meanwhile.  I think verbally; if I try to verbalize less, I think less.

That's interesting. Lately I've been noticing a lot how most people on the internet write in a manner that conveys much more self-confidence than I feel when I write, even when they write about things they probably shouldn't feel so confident about. I think my writing style noticeably conveys my lack of self-confidence, but that might just be the illusion of transparency getting the better of me. Anyway, I've also noticed over the years that my internal monologue mostly consists of me talking out of my ass quite boldly about topics of which I am ignorant. I don't seem to have any difficulty squelching that voice when it comes time to share my thoughts with others, for better or worse.

Comment by lbThingrb on Global insect declines: Why aren't we all dead yet? · 2018-04-08T07:20:37.707Z · LW · GW

The Wikipedia link on amphibian decline mentioned the effects of artificial lighting on the behavior of insect prey species as a possible contributor. I suppose it’s possible that that’s a factor in the observations from the German study as well, particularly since they only looked at flying insects. But the observations were apparently made in nature preserves, so one would think that artificial lighting wouldn’t be that common in those habitats. There could still be indirect effects, though.

Comment by lbThingrb on Is Rhetoric Worth Learning? · 2018-04-07T19:50:17.344Z · LW · GW
• How to be aware of other people’s points of view without merging with them
...
• How to restrain yourself from anger or upset
• How to take unflattering comments or disagreement in stride
...
• How to resist impulses to evade the issue or make misleading points
...
• How to understand another person’s perspective

It wasn't the reason I got into it in the first place, but I have found mindfulness practice helpful for these things. I think that's because mindfulness involves a lot of introspection and metacognition, and those skills transfer pretty well to modeling other people, and are useful for restraining unhelpful emotional impulses. In particular, knowing how to recognize and restrain anger in oneself, I strongly suspect, makes one better at anticipating how one's words might set off other people's anger, and coming up with strategies to avert or de-escalate such confrontations. It's analogous to a phenomenon that I expect will sound familiar to LW readers, which is that getting better at noticing mistakes and biases in one's own thinking also makes one better at noticing them in other people's thinking.

Comment by lbThingrb on Is Rhetoric Worth Learning? · 2018-04-07T19:16:06.297Z · LW · GW

Toastmasters? Never tried them myself, but I get the impression that they aim to do pretty much the thing you're looking for.

Comment by lbThingrb on Becoming stronger together · 2017-07-14T03:18:34.773Z · LW · GW

Groups of friends often coalesce around common interests. This group of friends coalesced around a common interest in rationality and self-improvement. That this is possible to do is potentially useful information to other people who are interested in rationality and self-improvement and making new friends.