Posts

SMBC about Eliezer 2021-04-20T17:28:15.402Z
2021-03-01 National Library of Medicine Presentation: “Atlas of AI: Mapping the social and economic forces behind AI” 2021-02-17T18:23:47.320Z

Comments

Comment by IrenicTruth on [deleted post] 2023-06-27T04:51:39.244Z

Your argument boils down to:

  • Objectivity is X
  • Y is not X
  • (Because you want to be objective) Don't do Y

I want to Win. Being Pascal Mugged is not Winning. Therefore I will make choices to not be Pascal Mugged. If that requires not being "objective," according to your definition, I don't want to be objective.

However, I have my own use of "objective" that comports well with adapting to new information and using my predictive powers. But I don't want to argue that my usage is better or worse; it will be fruitless. I mention it so readers won't think I'm hypocritical if I say I'm attempting to be objective.

This is a drive-by comment. I write it with hopes for our mutual benefit. However, do not expect me to check back or reply to anything you say.

Comment by IrenicTruth on [deleted post] 2023-02-21T15:19:27.062Z

I haven't listened to the video yet. (It's very long, so I put it on my watch-later list.) Nor have I finished Eliezer's Sequences (I'm on "A Technical Explanation of Technical Explanation.") However, I looked at the above summaries to decide whether it would be worth listening to the video.

Potential Weaknesses

  • None of the alternative books say anything about statistics. A rough intro to Bayesian statistics is an essential part of the Sequences. Without this, you have not made them superfluous.
    • A rough understanding of Bayesian statistics is a valuable tool.
    • Anecdote: I took courses in informal logic when I was a teenager and was aware of cognitive biases. However, the a-ha moment that took me out of the religion of my childhood was asking whether a particular theodicy was probable. This opened the way to asking whether some of my other beliefs were probable (not merely possible, as I'd done before). Within an hour of asking the first question, I was an atheist. (Though it took me another year to "check my work" by meeting with the area pastors and elders.) I thought to ask it because I'd been studying statistics. So, for me, the statistical lens helped in a case where the other lenses failed to reveal my errors. I already knew a horde of problems with the Bible, but the non-probabilistic approaches allowed me to deal with the evidence piece by piece. I could propose a fix for each one. For example, following Origen, I could say that Genesis 1 was an allegory. Then it didn't count against the whole structure.
    • The above anecdote took place several years before I encountered LessWrong. I'm not saying that the Sequences/LessWrong helped me escape religion. I'm saying that Bayesian stats worked where other things failed, so it was useful to me, and you should not consider the Sequences replaced if you leave it out.
  • Handbook of the History of Logic: The Many Valued and Nonmonotonic Turn in Logic is on the reading list. I haven't read it, but the title gives me pause. Nonmonotonic logics are subtle and can be misapplied. I misapplied Zadeh's possibilistic logic to help justify my theism.
  • The promotion of the LSAT and legal reasoning seems out of place. Law is the opposite of truth-seeking: lawyers create whatever arguments they can to serve their clients. A quick Google couldn't dig up statistics, but I'd guess that more lawyers than scientists are theists.
  • For me, the LessWrong community is a place I can get better data and predictions than other news sources. I know only one person who is also on LessWrong. They live across an ocean from me, and we haven't talked in 8 months. I don't think hanging out and playing board games is a major draw. If this is the thesis, it is far from my personal experience.

Potential Strengths

  • The emphasis of the Sequences on epistemic over instrumental rationality.
    • Other people in the LessWrong community have pointed this out. (I remember a sequence with the word "Hammer" in it that talks about instrumental rationality.)
    • The alternative reading list does not seem to address instrumental rationality.
  • Treating suffering as interchangeable doesn't always produce good outcomes. (Though I don't know how to deal with this - if you can only take one course of action, you must reify everything into a space where you can compare options.)

Other

An alternative to piracy in the USA is to request books through the interlibrary loan system. It is free in most places. Also, academic libraries at public universities frequently offer membership to community members for a small fee ($10-$20 per month) or free - especially to students - so if you have a local university, you might ask.

Comment by IrenicTruth on [deleted post] 2023-02-21T13:52:52.165Z

Duplicating the description

TimePoints

  • 00:00 intro
  • 0:53 most of the sequences aren't about rationality; AI is not rationality
  • 3:43 lesswrong and IQ mysticism
  • 32:20 lesswrong and something-in-the-waterism
  • 36:49 overtrusting of ingroups
  • 39:35 vulnerability to believing people's BS self-claims
  • 47:35 norms aren't sharp enough
  • 54:41 weird cultlike privacy norms
  • 56:46 realnaming as "doxxing"
  • 58:28 no viable method for calling out rumors/misinformation if realnaming is 'doxxing'
  • 1:00:16 the strangeness and backwardness of LW-sphere privacy norms
  • 1:04:07 EA: disregard for the homeless and refusal to do politics because it's messy
  • 1:10:16 EA: largely socially inept, does not understand how truly bad the SBF situation is
  • 1:13:36 EA: treatment of utilitarianism and consciousness is simplistic
  • 1:20:20 EA rigor: vitamin A charity example
  • 1:23:39 extreme techno optimism and weak knowledge of human biology
  • 1:25:24 exclusionary white nerd millennial culture
  • 1:27:23 comfort class culture
  • 1:30:25 pragmatics-agnosticism
  • 1:33:13 shallow analysis of empirical topics
  • 1:34:18 idiosyncrasies of communication, e.g. being extremely obtuse at the thesis level
  • 1:39:50 epistemic rationality matters much more than instrumental rationality
  • 1:43:00 the scene isn't about rationality, it's about hanging out and board games (which is fine, just don't act like you're doing anything important)

References

  1. sample WAIS report https://www.pearsonassessments.com/co...
  2. what is g https://www.youtube.com/watch?v=jSo5v...
  3. childhood IQ vs. adult IQ https://pubmed.ncbi.nlm.nih.gov/12887...
  4. wonky attempts to measure IQ above 160 https://archive.vn/kFCY1
  5. computer-based verbal memory test https://humanbenchmark.com/tests/verb...
  6. typing speed / IQ https://eric.ed.gov/?id=ED022127
  7. simple choice reaction time https://www.psytoolkit.org/lessons/ex...
  8. severity of 83 IQ https://www.youtube.com/watch?v=5-Ur7...
  9. googleability of WAIS https://nda.nih.gov/data_structure.ht...
  10. uses of WAIS in clinical care https://www.ncbi.nlm.nih.gov/pmc/arti...
  11. drunk reaction time experiment https://imgur.com/a/IIZpTol
  12. how g correlates with WAIS https://archive.vn/gyDcM
  13. low murderer IQ https://archive.vn/SrenV
  14. tom segura bit about the first 48 https://www.youtube.com/watch?v=B0l2l...
  15. rarity of perfect LSAT scores (30 out of 100,000) https://archive.vn/KWAzf
  16. limits on human reading speed (1) https://archive.vn/IVU8x
  17. limits on human reading speed (2) https://psycnet.apa.org/record/1998-1...
  18. kinobody fitness callout by philion https://www.youtube.com/watch?v=WjytE...
  19. summary of lesswrong drama (Jan-Mar. 2022) https://alfredmacdonald.medium.com/su...
  20. leverage / geoff anders pseudo-cult https://archive.vn/BKvtM
  21. the questionability of michael vassar and related organizations https://archive.vn/8A8QO
  22. sharp vs soft culture https://archive.vn/VOpya
  23. something-in-the-waterism https://alfredmacdonald.medium.com/so...
  24. on the fakeness of many bayesian priors https://alfredmacdonald.substack.com/...
  25. criticism of the "postrationalist" subculture and the problems created by pseudonyms and hyper-privacy norms https://alfredmacdonald.substack.com/...
  26. proliferation of "technoyogi" woo in this culture due to lack of BS-calling norms https://alfredmacdonald.substack.com/...
  27. questionability of the vitamin A charity I mentioned https://archive.vn/2AxlK
  28. MIRI support from Open Philanthropy https://archive.vn/JW6WT
  29. MIRI publication record https://archive.vn/9hIhT
  30. MIRI staff https://archive.vn/hJeuT
  31. MIRI budget, 50% of which is spent on research personnel https://archive.vn/z6bvz
  32. benefits of sharp culture (or at least a mean robot boss) https://archive.vn/onIfM
  33. daniel dennett on, among other things, the problems with treating all suffering as interchangeable https://archive.vn/5SLEy
  34. on reading comprehension limits: https://catalog.shepherd.edu/mime/med... -- while a 50th percentile student reads (with retention) at 250wpm and a 75th at 500wpm for "general expository reading (e.g. news)", this same group reads at a 50th percentile of 149wpm and a 75th percentile of 170wpm for "advanced scientific and/or technical material". assuming a gaussian distribution, the distance between 50th percentile and 75th percentile is 2/3s an SD -- so with an SD of ~31.5, reading said material at 306.5WPM is 5SD from the mean, or about 1/3.5 million. the average audible narration rate is 155wpm, so this severely puts into question those who say they're 2xing or even 1.75xing advanced audiobooks/lectures.
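The back-of-the-envelope normal-distribution arithmetic in reference 34 can be sanity-checked in a few lines. This sketch is mine, not the video's; it adopts the note's own approximation that the gap between the 50th and 75th percentiles is 2/3 of a standard deviation:

```python
import math

def normal_tail(z):
    """P(Z > z) for a standard normal variable, via the complementary error function."""
    return 0.5 * math.erfc(z / math.sqrt(2))

# Figures quoted in reference 34 for advanced scientific/technical material
p50, p75 = 149.0, 170.0       # words per minute at the 50th and 75th percentiles
z75 = 2 / 3                   # the note's approximation of the 75th-percentile z-score
sd = (p75 - p50) / z75        # implied standard deviation, 31.5 wpm

z = (306.5 - p50) / sd        # z-score for reading such material at 306.5 wpm
odds = 1 / normal_tail(z)     # roughly 1 in 3.5 million, matching the note

print(round(sd, 1), round(z, 1), f"about 1 in {odds / 1e6:.1f} million")
```

The conclusion is sensitive to the Gaussian assumption, but the arithmetic itself holds up.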

Duplicating the first comment (@alfredmacdonald's proposed alternative)

A READING LIST FOR RATIONALITY THAT IS NOT LESSWRONG / RENDERS THE SEQUENCES SUPERFLUOUS

objection: "but I learned a lot about rationality through lesswrong"

response: maybe, but probably inadequately.

  1. while unorthodox, I usually suggest this above everything else: the PowerScore Logical Reasoning Bible, while meant as LSAT prep, is the best test of plain-language reasoning that I am aware of. the kinds of questions you are meant to do will humble many of you. https://www.amazon.com/PowerScore-LSAT-Logical-Reasoning-Bible/dp/0991299221 and you can take a 10-question section of practice questions at https://www.lsac.org/lsat/taking-lsat/test-format/logical-reasoning/logical-reasoning-sample-questions — many of you will not get every question right, in which case there is room to sharpen your ability and powerscore's book helps do that.
  2. https://www.amazon.com/Cengage-Advantage-Books-Understanding-Introduction/dp/1285197364 in my view, the best book on argumentation that exists; worth reading either alongside PowerScore's book, or directly after it.
  3. https://www.amazon.com/Rationality-What-Seems-Scarce-Matters/dp/B08X4X4SQ4 pinker's "rationality" is an excellent next step after learning how to reason through the previous two texts, since you will establish what rationality actually is.
  4. https://www.amazon.com/Cambridge-Handbook-Reasoning-Handbooks-Psychology/dp/0521531012 this is a reference text, meaning it's not meant to be read front-to-back. it's one of the most comprehensive of its kind.
  5. https://www.amazon.com/Handbook-History-Logic-Valued-Nonmonotonic/dp/044460359X — this is both prohibitively and ludicrously expensive, so you will probably need to pirate it. however, this history of logic covers many useful concepts.
  6. https://www.amazon.com/Thinking-Fast-Slow-Daniel-Kahneman/dp/0374533555 this is a standard text that established "irrationality" as a mainstream academic concept. despite being a psychologist, some of kahneman's work won him the nobel prize in economics in 2002, shared with vernon smith.
  7. https://www.amazon.com/Predictably-Irrational-audiobook/dp/B0014EAHNQ this is another widely-read text that expands on the mainstream concept of irrationality.
  8. https://www.amazon.com/BIASES-HEURISTICS-Collection-Heuristics-Everything/dp/1078432317 it is exactly what it says: a list of about 100 cognitive biases. many of these biases are worth rereading and/or flashcarding. there is also https://en.wikipedia.org/wiki/List_of_cognitive_biases
  9. https://www.amazon.com/Informal-Logical-Fallacies-Brief-Guide/dp/0761854339 also exactly what it says, but with logical fallacies rather than biases. (a bias is an error in weight or proportion or emphasis; a fallacy is a mistake in reasoning itself.) there is also https://en.wikipedia.org/wiki/List_of_fallacies
  10. here is another fantastic handbook of rationality, which is a wonderfully integrated work spanning psychology, philosophy, law, and other fields with 806 pages of content. https://www.amazon.com/Handbook-Rationality-Markus-Knauff/dp/0262045079 (it is quite expensive -- no one will blame you if you pirate it from libgen.)

you will learn more through these texts than through the LessWrong Sequences. as mentioned, many of these are expensive, and no one will blame you if you need to pirate/libgen them. many or maybe even most of these texts you will need to reread, perhaps multiple times.

"but I'd rather have a communi - " yes, exactly. hence the thesis of a video I made: lesswrong is primarily nerds who want a hangout group/subculture, rather than a means of learning rationality, and this disparity between claimed purpose and actual purpose produces most of the objections people have and many of my objections in my video, and why I have created this alternate reading list.

Comment by IrenicTruth on Why don't Rationalists use bidets? · 2023-01-05T18:41:55.602Z · LW · GW

Reading the comments here, I think I may halve my estimate of self-install time.

Comment by IrenicTruth on Why don't Rationalists use bidets? · 2023-01-02T20:02:37.538Z · LW · GW

I've wanted to install a bidet for 8+ years. However, I've always had higher-priority projects.

Costs that deter me:

  • What for you is a 20-minute project will be 4-8 hours for me because it involves plumbing (and I want it to not leak). The fastest plumbing project I've ever had (cleaning the p-trap beneath the bathroom sink) took 1.5 hours.
  • Hiring a contractor will be $100 because I live in a high-rent area, and they need to cover the expense of coming out. It will take me 1 hour to choose, schedule, and oversee a contractor.
  • I don't know how to choose a bidet. It'll take me 2-4 hours to research them.

The benefits are lower for me than for you:

  • I estimate it will save six rolls of toilet paper per year. That comes to about $20. If I value my hours at $50, hiring a contractor is $150, choosing a bidet is $100, and the bidet itself is at least $35. The sum is $285, a 14-year pay-off time.
  • I mainly want the bidet for comfort and because it will make me cleaner. Comfort and hygiene are lower-priority items for me. $20/year of extra comfort drops the pay-off time to 7 years.
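The payoff arithmetic above, written out as a sketch (the dollar figures are my rough estimates from the bullets, not measured costs):

```python
HOURLY = 50                     # $/hour valuation of my time

contractor = 100 + 1 * HOURLY   # $100 call-out fee + 1 hour to choose/schedule/oversee
research = 2 * HOURLY           # low end of the 2-4 hours picking a bidet
bidet = 35                      # cheapest acceptable unit

upfront = contractor + research + bidet   # $285 total up front
tp_savings = 20                           # $/year of toilet paper saved

print(upfront, upfront / tp_savings)      # ~14-year payoff on savings alone
print(upfront / (tp_savings + 20))        # ~7 years adding $20/year of comfort value
```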

BTW: Aella, a rationalist-adjacent Twitter user, mentioned that she uses a bidet.

Comment by IrenicTruth on Challenge to the notion that anything is (maybe) possible with AGI · 2023-01-01T23:54:43.198Z · LW · GW

Hint for those who want to read the text at the link: go to the bottom and click "view source" to get something that is not an SVG.

Comment by IrenicTruth on A Mystery About High Dimensional Concept Encoding · 2022-11-04T12:47:12.208Z · LW · GW

The best explanation I have found to explain this discrepancy is that ... RLACE ... finds ... a direction where there is a clear separation,

You could test this explanation using a support vector machine - it finds the direction that gives the maximum separation.
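A minimal numpy sketch of the test I have in mind: a soft-margin linear SVM fit by Pegasos-style subgradient descent on synthetic stand-in data. The cluster geometry and all names here are illustrative assumptions, not anything from the post:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for concept-labeled activations: two linearly separable clusters
X = np.vstack([rng.normal(-2.0, 1.0, (200, 16)), rng.normal(2.0, 1.0, (200, 16))])
y = np.array([-1.0] * 200 + [1.0] * 200)

# Soft-margin linear SVM via full-batch subgradient descent on the hinge loss;
# the learned weight vector approximates the maximum-margin separating direction.
w, b, lam = np.zeros(16), 0.0, 0.01
for t in range(1, 2001):
    mask = y * (X @ w + b) < 1                                  # margin violators
    grad_w = lam * w - (y[mask, None] * X[mask]).sum(axis=0) / len(X)
    grad_b = -y[mask].sum() / len(X)
    lr = 1.0 / (lam * t)
    w -= lr * grad_w
    b -= lr * grad_b

direction = w / np.linalg.norm(w)
proj = X @ direction
separated = proj[y == 1].min() > proj[y == -1].max()  # clean separation along this axis
print(separated)
```

On real activations one would compare the projections along this direction against those along the RLACE direction.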

(This is a drive-by comment. I'm trying to reduce my external obligations, so I probably won't be responding.)

Comment by IrenicTruth on What "The Message" Was For Me · 2022-10-12T13:25:33.263Z · LW · GW

A lot of the steps in your chain are tenuous. For example, if I were making replicators, I'd ensure they were faithful replicators (not that hard from an engineering standpoint). Making faithful replicators negates step 3.

(Note: I won't respond to anything you write here. I have too many things to respond to right now. But I saw the negative vote total and no comments, a situation I'd find frustrating if I were in it, so I wanted to give you some idea of what someone might disagree with/consider sloppy/wish they hadn't spent their time reading.)

Comment by IrenicTruth on Calibrate - New Chrome Extension for hiding numbers so you can guess · 2022-10-07T19:38:30.229Z · LW · GW

Feature request: some way to keep score. (Maybe a scoring mode that makes the black box an outline on hover and then clicking right=unscored, left-right=correct, and left-left-right=incorrect - or maybe a mouse-out could be unscored and left = incorrect and right = correct).

Comment by IrenicTruth on Unifying Bargaining Notions (2/2) · 2022-09-10T18:15:02.922Z · LW · GW

I haven't finished reading this; I read the first few paragraphs and scanned the rest of the article to see if it would be worth reading. But I want to point out that starting with Harsanyi's Utilitarianism Theorem (a.k.a. Harsanyi's Impartial Observer Theorem) implies that you assume "independence of irrelevant alternatives" because the theorem assumes that its agents obey [1] the Von Neumann–Morgenstern utility theorem. The fourth axiom of this theorem (as listed in Wikipedia) is the "independence of irrelevant alternatives." Since, from the previous article,

The Nash Bargaining Solution is the only one that fulfills the usual three desiderata, and the axiom of Independence of Irrelevant Alternatives.

I am not surprised that this results in the Nash Bargaining solution as the solution to Bargaining Games. The last article also points out that the independence of irrelevant alternatives is not an obvious axiom, so I do not find the Nash Bargaining solution more plausible because it is a generalization of the CoCo Equilibria.[2]


  1. From the abstract here: https://www.researchgate.net/publication/228049690_Harsanyi's_'Utilitarian_Theorem'_and_Utilitarianism and the introduction to Generalized Utilitarianism and Harsanyi's Impartial Observer Theorem ↩︎

  2. This is a bit inconsistent on my part because I usually make formal decisions according to Rule Utilitarianism, and most forms of utilitarianism assume Von Neumann–Morgenstern expected utility. However, in my defense, I'm not firmly attached to Rule Utilitarianism; it is just the current best I've found. ↩︎

Comment by IrenicTruth on Unifying Bargaining Notions (1/2) · 2022-09-10T17:20:06.867Z · LW · GW

You're trying to bake your personal values (like happy humans) into the rules.

My point is that this has already happened. The underlying assumptions bake in human values. The discussion so far did not convince me that an alien would share these values. I list instances where a human might object to these values. If a human may object to "a player which contributes absolutely nothing ... gets nothing," an alien may object too; if a human may object to "the only inputs are the set of players and a function from player subsets to utility," an alien may object too; and so forth. These are assumptions baked into the rules of how to divide the resources. So, I am not convinced that these rules allow all agents with conflicting goals to reach a compromise because I am not convinced all agents will accept these rules.[1]

I brought up the "happy humans term" as a way to point out that maybe aliens wouldn't object to the rule of "contribute nothing ... get nothing" because they could always define the value functions so that the set of participants who contribute nothing is empty.


  1. This sets up a meta-bargaining situation where we have to agree on which rules to accept to do bargaining before we can start bargaining. This situation seems to be a basic "Bargaining Game." I think we might derive the utilities of each rule set from the utilities the participants receive from a bargain made under those rules + a term for how much they like using that rule set[2]. Unfortunately, except for "Choose options on the Pareto frontier whose utilities exceed the BATNA," this game seems underdetermined, so we'll have trouble reaching a consensus. ↩︎

  2. To understand why I think there should be a term for how much they like using the rule set, imagine aliens who value self-determination and cooperative decision-making for all sentient beings and can wipe us out militarily. Imagine we want to split the resources in an asteroid both of us landed on. Consider the rule set of "might makes right." Under this set, they can unilaterally dictate how the asteroid is divided. So they get maximum utility from the asteroid's resources. However, they recognize that this is the opposite of self-determination and cooperative decision making; so getting all of the resources this way is of less utility to them than getting all the resources under another set of rules. ↩︎

Comment by IrenicTruth on Unifying Bargaining Notions (1/2) · 2022-09-10T05:00:07.686Z · LW · GW

There are quite a few assumptions to pin down solutions that seem to unnecessarily restrict the solution space for bargaining strategies. For example,

  1. "A player which contributes absolutely nothing to the project and just sits around, regardless of circumstances, should get 0 dollars."

    We might want solutions that benefit players who cannot contribute. For example, in an AGI world, a large number of organic humans may not be able to contribute because overhead swamps gains from trade in comparative advantage. We still want to give these people a slice of the pie. We want to value human life, not just production.

    Maybe you could reconceive the project as including a "has more happy humans" term. This makes all participants contributors.

  2. Relatedly, there is the implicit assumption that the players' input is what should determine the "chaa" result. I'd rather divide up the pie on consequentialist terms: what division brings the maximum utility for the worst-off person, the median person, or the maximum mean utility. A Marxist would want to distribute the gains according to the players' "needs." If our fellow humans come up with such different notions, an alien or AI can scarcely be expected to be more similar. Unfortunately, the inputs to the problem are missing terms for "need" and long-term population utility.

  3. The assumption that if the total pile is n times as big, everyone should get n times as much is also unwarranted. The utility arising from 500,000,000 pieces of candy is less than 100,000,000 times the utility of 5 pieces. We get more mean and median utility when the extra gains go disproportionately to those who would have been allotted less.

  4. The CoCo solution has its share of assumptions. For example: Payoff dominance. If player A gets more money than player B in all cells, then player A will leave the game with more money than player B.

    I don't see why this is the way we want to design an allocation method. We may need this to make an incentive structure for certain types of behavior, but for arbitrary situations, I don't think this is a requirement.
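The diminishing-returns claim in point 3 is the standard concavity argument; log utility is my illustrative choice here, not anything from the post:

```python
import math

def u(candy):
    # Log utility: a textbook example of diminishing marginal returns
    return math.log(1 + candy)

# 100,000,000x the candy yields nowhere near 100,000,000x the utility
ratio = u(500_000_000) / u(5)
print(round(ratio, 1))  # roughly 11x, not 100,000,000x
```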

Comment by IrenicTruth on Time-logging programs and/or spreadsheets (2022) · 2022-07-31T20:34:51.538Z · LW · GW

I use the "Bearable" app for very rough time logging. It has a system of toggles for "factors" where you can specify what factor was present in a 6-hour interval of your day. Since I am mainly interested in correlations with other things I measure, a primary purpose of "Bearable," this low resolution is a good compromise. It also makes it easy to log after the fact. "Did I do this activity in this 6-hour period?" is a much easier question than remembering down to an hour or quarter-hour granularity. The downside is I can't tell how much time I've invested in a particular category.

I do much more detailed time logging at work with Jira and the "Tempo" plugin. I can then look back when I create my monthly reports. And I can use the per-ticket data to estimate the effort required for future tickets.

Comment by IrenicTruth on All AGI safety questions welcome (especially basic ones) [July 2022] · 2022-07-17T19:27:43.241Z · LW · GW

I think learning is likely to be a hard problem in general (for example, the "learning with rounding problem" is the basis of some cryptographic schemes). I am much less sure whether learning the properties of the physical or social worlds is hard, but I think there's a good chance it is. If an individual AI cannot exceed human capabilities by much (e.g., we can get an AGI as brilliant as John von Neumann but not much more intelligent), is it still dangerous?

Comment by IrenicTruth on Train first VS prune first in neural networks. · 2022-07-09T20:51:48.835Z · LW · GW

You may want to look at what happens with test data never shown to the network or used to make decisions about its training. Pruning often improves generalization when data are abundant compared to the complexity of the problem space because you are reducing the number of parameters in the model.

Comment by IrenicTruth on My current take on Internal Family Systems “parts” · 2022-06-27T01:08:07.959Z · LW · GW

Going from "Parts" to "Self," you said the Self might be all the Parts processing together. (Capitalized "Self" means the IFS "Core Self.") How likely is the hypothesis that the Self is an artifact of the therapeutic procedure? When someone says they feel angry at a Part and claims that anger does not come from a Part but is their self, the therapist doesn't accept it. The therapist tells them they need to unblend. But when they describe the 8 C's and say that is their self, the therapist does not ask them to unblend, perceiving that as their Self.

Comment by IrenicTruth on [$20K in Prizes] AI Safety Arguments Competition · 2022-05-27T13:25:38.372Z · LW · GW

For lefties:

  • We put unaligned AIs in charge of choosing what news people see. Result: polarization resulting in millions of deaths. Let's not make the same mistake again.

For right-wingers:

  • We put unaligned AIs in charge of choosing what news people see. Result: people addicted to their phones, oblivious to their families, their morals, and their eroding freedoms. Let's not make the same mistake again.

Comment by IrenicTruth on [$20K in Prizes] AI Safety Arguments Competition · 2022-05-27T12:58:20.863Z · LW · GW

YouTubers live in constant fear of the mysterious, capricious Algorithm. There is no mercy or sense, just rituals of appeasement as it maximizes "engagement." Imagine that, but it runs your whole life.

<Optional continuation:> You don't shop at Hot Topic because you hear it can hurt your ranking, which could damage your next hiring opportunity. And you iron your clothes despite the starch making you itch because it should boost your conscientiousness score, giving you an edge in dating apps.

Comment by IrenicTruth on [$20K in Prizes] AI Safety Arguments Competition · 2022-05-27T04:35:33.212Z · LW · GW

Do we serve The Algorithm, or does it serve us? Choose before The Algorithm chooses for you.

Comment by IrenicTruth on [$20K in Prizes] AI Safety Arguments Competition · 2022-05-27T04:30:20.850Z · LW · GW

That kid who always found loopholes in whatever his parents asked him? He made an AI that's just like him.

Comment by IrenicTruth on [$20K in Prizes] AI Safety Arguments Competition · 2022-05-27T04:17:56.012Z · LW · GW

COVID and AI grow exponentially. In December 2019, COVID was a few people at a fish market. In January, it was just one city. In March, it was the world. In 2010, computers could beat humans at Chess. In 2016, at Go. In 2022, at art, writing, and truck driving. Are we ready for 2028?

Comment by IrenicTruth on [$20K in Prizes] AI Safety Arguments Competition · 2022-05-27T03:59:54.808Z · LW · GW

Someone who likes machines more than people creates a machine to improve the world. Will the "improved" world have more people?

Comment by IrenicTruth on [$20K in Prizes] AI Safety Arguments Competition · 2022-05-27T03:49:44.384Z · LW · GW

Humanity's incompetence has kept us from destroying ourselves. With AI, we will finally break that shackle.

Comment by IrenicTruth on [$20K in Prizes] AI Safety Arguments Competition · 2022-05-27T03:37:37.529Z · LW · GW

Orwell's boot stamped on a human face forever. The AI's boot will crush it first try.

Comment by IrenicTruth on [$20K in Prizes] AI Safety Arguments Competition · 2022-05-27T01:03:42.186Z · LW · GW

Hey Siri, "Is there a God?" "There is now."

 - Adapted from Fredric Brown, "The Answer" - for policymakers.

Comment by IrenicTruth on [deleted post] 2022-04-28T21:50:15.868Z

Rationalism requires stacktraces terminating in irrefutable observation

Like the previous two commenters, I find this statement odd. I don't fully trust my senses. I could be dreaming/hallucinating. I don't fully trust my knowledge of my thoughts. By this definition of a rationalist, I could never be one (and maybe I'm not) because I don't think there is such a thing as an irrefutable observation. I think there was a joke in that statement, but, unobserved by me, it took flight and now soars somewhere else.

Comment by IrenicTruth on Internet Literacy Atrophy · 2021-12-31T00:59:52.019Z · LW · GW

Like pjeby, I think you missed his point. He was not arguing from authority, he was presenting himself as evidence that someone tech-savvy could still see it as a trap. His actual reason for believing it is a trap is in his reply to GWS.

Comment by IrenicTruth on A fate worse than death? · 2021-12-20T20:55:52.258Z · LW · GW

If one must choose between a permanent loss of human life and some temporary discomfort, it doesn't make sense to prefer the permanent loss of life, regardless of the intensity of the discomfort. 


This choice doesn't exist; permanent death is inevitable under known physics. All lifespans are finite because the time the universe will support consciousness is most likely finite, whether because of heat death or the big rip. This finiteness makes your "you save one life, and 7 billion humans suffer for 100 billion years" question not at all obvious. Saving a life is not avoiding death; it is postponing it. Thus, you could rewrite your scenario as: "should I give someone 80 extra years of normal life before they die, if in exchange, instead of dying at their normal time, 7 billion humans are tortured for 100 billion years and then die." Under a Rawlsian veil of ignorance, I would not choose to "save the life" in this scenario. Even if that person survived until the end of the black hole farming era, I probably still wouldn't choose it. There is too much chance that I will end up being one of the tortured. (Though a  chance of  years of life against  years of torture is pretty tempting on an expected value basis, so I'm not sure.)

As others have commented, I also think the reversibility of suffering is a weak point. We do not know how hard it is. It may have the same difficulty level as resurrection. But, if you specify that the finite span of torture happens instead of normal death, you avoid this.

Comment by IrenicTruth on Competence/Confidence · 2021-11-22T15:09:09.050Z · LW · GW

I had a similar issue. I could not do the exercise because I could not figure out how to evaluate confidence and competence separately; I always end up on the x=y line. Reading this thread did not help. "Anticipated okayness of failure" doesn't change much with time for the same task, so that is a vertical line. "Confidence" = "self-rated ability to improve" is an interesting interpretation (working on "confidence" would be working on learning skills). Still, it intuitively feels off from what the graphs say (though I haven't been able to put the disconnect into words). Thinking about the improv/parachute graph, maybe "confidence" is "willingness to attempt a task despite being incompetent." I'm giving up for now.

Comment by IrenicTruth on Book summary: Unlocking the Emotional Brain · 2021-11-18T14:56:36.490Z · LW · GW

I found a review on Amazon (quoted at the bottom, since I cannot link to it) that says Ecker is injecting significant personal opinion and slanting his report of the science. I don't know if this is true, but the gushing praise from readers and psychology's history of jumping on things rather than evaluating evidence make it seem more likely than not. For me, this means that reading this book will involve getting familiar with the associated papers.

The Review

by "scholar"

Previously I posted a very positive review of this book. On further reflection and study of the relevant research papers, I have a very different view. The science of memory reconsolidation is complex and subtle. Its application to clinical work with real patients remains predominantly hypothetical. Ecker creates the impression that the conditions for memory reconsolidation and updating are now known and clear. They are not. His claims for their application to clinical practice (in my view) go rather beyond the evidence. Moreover, when I read his clinical examples later in the book, I completely fail to see how they relate specifically to the science he earlier quotes - they just seem to be examples of his therapeutic approach called Coherence Therapy (which predates his interest in memory reconsolidation) - and although these are certainly interesting, I cannot grasp how they illustrate the principles of memory reconsolidation. The positive outcome is that this book, which I eventually found confusing and infuriating, prompted me to study further this fascinating field of enquiry. There are undoubtedly potential clinical applications, but I feel Ecker's enthusiasm is a little premature.

Comment by IrenicTruth on An Emergency Fund for Effective Altruists (second version) · 2021-11-15T14:20:58.107Z · LW · GW

If recoupments occur sparingly, as I'd expect, where should the remaining funds go?

Keep them for "times of national emergency" etc. to hedge against correlated risk.

How big is the risk that the fund will be used in illicit ways, such as tax evasion, despite the fact that donors cannot claim more than they spent?

Modern society strongly incentivizes misusing anything that touches money, so without further evidence, I'd say the risk is very high (near certainty). If we haven't found a way to misuse it, it is more likely that we are not clever enough than that no such way exists.

First thought: I put $100 into an 80% fund. I wait a year and claim the tax break on the 80% donation, netting, say, 30% × $80 = $24 in reduced taxes. Then I take out $95. I've made $19 on the trade. Of course, a government would see this right away and not allow tax breaks for such contributions. But this sort of thing seems ripe for problems.

Another: I put in $100, $80 goes to a "charity" that gives me a 10% kickback. Then I take out $95 and I've made $3.
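A toy calculation of the two schemes above (the rates and amounts are the illustrative numbers from the comment, not real tax law or any real fund's rules):

```python
def tax_scheme_profit(contribution, donated_frac, tax_rate, withdrawal):
    """Profit from claiming a deduction on the donated share,
    then withdrawing most of the contribution."""
    tax_saving = tax_rate * donated_frac * contribution
    return tax_saving + withdrawal - contribution

def kickback_scheme_profit(contribution, donated_frac, kickback_rate, withdrawal):
    """Profit when the 'charity' returns a kickback on the donated share."""
    kickback = kickback_rate * donated_frac * contribution
    return kickback + withdrawal - contribution

print(round(tax_scheme_profit(100, 0.80, 0.30, 95), 2))       # 19.0
print(round(kickback_scheme_profit(100, 0.80, 0.10, 95), 2))  # 3.0
```

Both schemes are profitable only because the withdrawal is nearly the full contribution; lowering the refundable share shrinks the arbitrage.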

You might be able to fix this by requiring that contributors maintain almost all of their assets as property of the fund. Then if I make a withdrawal for "an emergency" I can't keep any profit or buy anything that doesn't go right back to the fund. But that sounds a lot like the "everything in common" schemes that have failed so often in the past. So, we'd need to modify it to make it viable.

Comment by IrenicTruth on On Humor · 2021-09-20T16:11:33.965Z · LW · GW

What experimental tests has clash theory survived?

Comment by IrenicTruth on Assigning probabilities to metaphysical ideas · 2021-09-08T13:49:05.964Z · LW · GW

Take all the metaphysical models of the universe that any human ever considers.

This N is huge. Approximate it with the number of strings in a certain formal language that could be generated over the lifetime of the human race. We're probably talking about billions even if the human race ceases to exist tomorrow: imagine that 1/7 of people have had a novel metaphysical idea, and you get 1B from just the people on earth today. If you think that's a high estimate, remember that people get into weird states of consciousness (through fever, drugs, exertion, meditation, and other triggers), so random strings in that language are likely.
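The Fermi arithmetic behind that billion (the population figure is the rough one assumed in the comment):

```python
world_population = 7_000_000_000   # rough 2021 world population
fraction_with_novel_idea = 1 / 7   # assumption from the comment
n_ideas = world_population * fraction_with_novel_idea
print(f"{n_ideas:.0e}")  # 1e+09
```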

You may want to define "metaphysical idea" (and thus that language) better. Some examples of what I mean by "metaphysical idea":

Comment by IrenicTruth on Tasks apps w/ time estimates to gauge how much you'll overshoot? · 2021-09-07T20:18:42.690Z · LW · GW

Yes. Fixed.

Comment by IrenicTruth on Tasks apps w/ time estimates to gauge how much you'll overshoot? · 2021-09-07T18:47:58.207Z · LW · GW

If you can model everything as tasks, FogBugz has a feature I used to help myself complete grad school: https://fogbugz.com/evidence-based-scheduling/, which gives you a probability distribution over finishing times. It was incredibly useful! You might want to start the free trial to see if they still have the "if you have too few users, you can use it for free until you get big enough" deal they used to have.

As of (X years ago) it was missing appointment scheduling.

My most recent solution for individual scheduling is Skedpal. It does not have the overshoot estimation you want. Instead, it will flag tasks that are risky because they have little flexibility for rescheduling. This serves the purpose of deciding whether you can or can't schedule something: add it to your list (or to your calendar), hit update schedule, and if something goes to your hotlist or gets marked as red/yellow for flexibility, say no. You can include overshoot by giving yourself lots of slack (and % slack is a parameter to Skedpal's scheduling algorithm). This makes a constraint that binds you for planning when real constraints don't yet bind you. For sufficiently important stuff, you can turn off the slack to schedule that one item (and know that you're going to pay hell doing it.)

To schedule my team, I do a manual poor man's version of FogBugz. Someday I'll turn it into a web app. (I've had this in my head for 2+ years, so don't hold your breath.)
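For reference, the core of evidence-based scheduling can be sketched as a small Monte Carlo simulation; this is my reconstruction from FogBugz's public description, not their actual code, and the numbers are made up:

```python
import random

def finish_time_distribution(estimates, past_ratios, trials=10_000, seed=0):
    """Monte Carlo sketch of evidence-based scheduling.

    estimates   -- time estimates for the remaining tasks (hours)
    past_ratios -- historical (actual / estimated) ratios for this developer
    Returns the simulated total finish times, sorted, so percentiles are easy.
    """
    rng = random.Random(seed)
    totals = []
    for _ in range(trials):
        # Scale each estimate by a randomly drawn historical ratio.
        totals.append(sum(e * rng.choice(past_ratios) for e in estimates))
    return sorted(totals)

# Example: three tasks, a developer who historically runs 1x-2x over estimate.
sims = finish_time_distribution([4, 8, 2], [1.0, 1.2, 1.5, 2.0])
p50, p95 = sims[len(sims) // 2], sims[int(len(sims) * 0.95)]
print(f"median {p50:.1f}h, 95th percentile {p95:.1f}h")
```

The output is a distribution over finishing times rather than a point estimate, which is exactly what makes the feature useful for deciding how much risk a schedule carries.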

Comment by IrenicTruth on How An Algorithm Feels From Inside · 2021-08-06T21:50:21.967Z · LW · GW

The earliest citation in Wikipedia is from 1883, and it is a question and answer: "If a tree were to fall on an island where there were no human beings would there be any sound?" [The asker] then went on to answer the query with, "No. Sound is the sensation excited in the ear when the air or other medium is set in motion."

So, if this is truly the origin, they knew the nature of sound when the question was first asked.

Comment by IrenicTruth on Covid 6/10: Somebody Else’s Problem · 2021-06-12T14:59:41.055Z · LW · GW

Re: dominant assurance contracts/crowdfunding

The article makes the bad assumption that the distribution of individual values of the public good is common knowledge. A good entrepreneur will do market research to try to determine this distribution. But better approximations cost more. Entrepreneurs will also be biased to think their ideas are good. So, it is likely that many entrepreneurs will have bad models. Most individuals will also not know the distribution. So, there is another way to profit for the small fraction of individuals who do have decent approximations of it: buy contracts likely to fail.

I'm not sure how this affects the whole scheme, but I'm pretty sure it limits the size of the failure payoffs to significantly less than in the case where the distribution really is common knowledge.

The assumption that each individual knows their own value for the good is also false. The individual's estimate has less variance than anyone else's, but they would still need to invest resources to learn the true value. I'm not sure what this error does to the contracts. I doubt it has as much effect as the common-knowledge assumption.
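A toy expected-value model of the "buy contracts likely to fail" strategy. The payoff rule (failed pledges are refunded plus a failure payoff) is my reading of how a dominant assurance contract works, and all numbers are illustrative assumptions:

```python
def expected_profit(p_fail, failure_payoff, pledge, value_if_funded):
    """EV for someone who pledges mainly to farm the failure payoff."""
    # On failure: pledge refunded, payoff collected -> profit = payoff.
    # On success: pledge spent, good delivered -> profit = value - pledge.
    return p_fail * failure_payoff + (1 - p_fail) * (value_if_funded - pledge)

# Someone who values the good at 0 but estimates a 90% failure chance:
print(round(expected_profit(0.9, 20, 100, 0), 2))  # 8.0
```

The strategy only pays when the pledger's failure-probability estimate is better than the entrepreneur's, which is why the entrepreneur's bad model of the value distribution matters.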

Comment by IrenicTruth on What topics are on Dath Ilan's civics exam? · 2021-04-27T11:56:09.806Z · LW · GW

> knowing about the MCU, no matter how cool, doesn't pay rent

Is not "enables socialization" a form of rent?

In favor of this particular point, I know about the MCU despite disliking superhero movies and comics (except Watchmen) precisely because it is helpful in my social circles.

Regarding @jaspax's main point, it is not obvious that formal education is necessary to generate a shared mythopoetic structure. OTOH I can't think of an example of a long-lasting one that does not have a group actively involved in educating people about it. So, it is not obvious that it is a poor candidate for formal education either.

Comment by IrenicTruth on What topics are on Dath Ilan's civics exam? · 2021-04-27T11:40:47.667Z · LW · GW
  • Reading and writing Dath Ilan's spoken language.
  • Arithmetic.
  • Individual and collective decision making and execution
    • How to make a good decision.
      • Individually
      • As part of a collective
    • How to recognize a good decision made by others
    • How to execute a good decision when you or others have made it - especially in the face of akrasia.

Comment by IrenicTruth on Is there a mu-like term that means something like "why is this any of my business?" · 2021-04-14T12:31:49.996Z · LW · GW

PocketCasts lets you play at high speed and cut pauses between words for podcasts. I haven't seen pause removal for video.

Comment by IrenicTruth on Covid 3/25: Own Goals · 2021-03-26T20:21:59.897Z · LW · GW

The Science article mentions some ways it may understate the benefit of increasing production, but it misses an important benefit altogether: the chance of creating vaccine-resistant strains increases with the number of unvaccinated people. Vaccine-resistant strains make current capacity less valuable. So all capacity becomes more valuable the faster you vaccinate, because there is less chance of having to start over.

Comment by IrenicTruth on How to Beat Procrastination · 2021-03-26T13:50:18.105Z · LW · GW

Here is the procrastination equation image (since it's currently broken in the main text).

Motivation = (Expectancy * Value) / (Impulsiveness * Delay)

And the same equation in math:

$$\text{Motivation} = \frac{\text{Expectancy} \times \text{Value}}{\text{Impulsiveness} \times \text{Delay}}$$

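As a sanity check, the equation as a tiny Python function (variable values are arbitrary):

```python
def motivation(expectancy, value, impulsiveness, delay):
    """Temporal-motivation ("procrastination") equation from the post."""
    return (expectancy * value) / (impulsiveness * delay)

# Doubling the delay halves motivation, all else equal:
print(motivation(0.9, 10, 1.0, 2) == motivation(0.9, 10, 1.0, 1) / 2)  # True
```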
Comment by IrenicTruth on Great minds might not think alike · 2021-01-02T20:38:33.644Z · LW · GW

The concept of a translator between conceptual frameworks reminds me of the narrator of Blindsight - a man who had special cybernetic enhancements to allow him to do this type of translation.