Posts

aleph_four's Shortform 2020-02-27T23:00:49.413Z

Comments

Comment by aleph_four on AGI systems & humans will both need to solve the alignment problem · 2023-02-24T09:19:17.183Z · LW · GW

If there is no solution to the alignment problem within reach of human-level intelligence, then the AGI can’t foom into an ASI without risking value drift…

A human augmented by strong narrow AIs could, in theory, detect deception by an AGI. Stronger interpretability tools…

What we want is a controlled intelligence explosion, where each increase in the AGI’s strength leads to an increase in our ability to align it: alignment as an iterative problem…

A kind of intelligence arms race; perhaps humans can find a way to compete indefinitely?

Comment by aleph_four on Ngo and Yudkowsky on alignment difficulty · 2021-11-18T16:51:47.647Z · LW · GW

I love being accused of being GPT-x on Discord by people who don't understand scaling laws and think I own a planet of A100s

There are some hard and mean limits to explainability, and there’s a real issue that a person who correctly sees how to align AGI, or who correctly perceives that an AGI design is catastrophically unsafe, will not be able to explain it. It requires superintelligence to cogently expose stupid designs that will kill us all. What are we going to do if there’s this kind of coordination failure?

Comment by aleph_four on Incorrect hypotheses point to correct observations · 2021-07-28T18:03:49.003Z · LW · GW

“People have poor introspective access to the reasons why they like or dislike something; when they are asked for an explanation, they often literally fabricate their reasons.”

omg, they literally work that way. I can't, let me off

Comment by aleph_four on Open & Welcome Thread - February 2020 · 2020-02-28T01:02:24.009Z · LW · GW

Let’s add another Scott to our coffers.

Comment by aleph_four on aleph_four's Shortform · 2020-02-27T23:00:49.593Z · LW · GW

Lately I’ve been setting a higher bar than the Turing Test. I propose: “Anything that can program, and can converse convincingly in natural language about what it is programming, must be thinking.”

Comment by aleph_four on Meta-Preference Utilitarianism · 2020-02-11T01:29:02.070Z · LW · GW

uh... I guess I cannot get around the regress involved in claiming my moral values are superior to competing systems in an objective sense? I hesitate to lump the kind of missteps involved in a mistaken conception of reality (a misapprehension of non-moral facts) together with whatever goes on internally when two people arrive at different values.

I think it’s possible to agree on all mind-independent facts without entailing perfect accord on all value propositions, and that moral reflection is fully possible without objective moral truth. Perhaps I do not get to point at a repulsive actor and say they are wrong in the strict sense of believing falsehoods, but I can deliver a verdict on their conduct all the same.

Comment by aleph_four on Meta-Preference Utilitarianism · 2020-02-11T00:57:40.379Z · LW · GW

Well, I struggle to articulate what exactly we disagree on, because I find no real issue with this comment. Maybe I would say “high philosophical ability/sophistication causes both intergalactic civilization and moral convergence”? I hesitate to call the result of that moral convergence “moral fact,” though I can conceive of that convergence.

Comment by aleph_four on Paper Trauma · 2020-02-07T03:10:39.811Z · LW · GW

uhh, it goes to sleep after a bit, but brings you back to what you were last doing.

The OCR doesn’t destroy the original.

“convert lines into appropriate geometric forms”

Nope on this

“convert text blocks to calendar entries, tickets, mails,...”

Nope

Comment by aleph_four on Meta-Preference Utilitarianism · 2020-02-07T03:05:26.873Z · LW · GW

I’m immensely skeptical that open individualism will ever be more than a minority position (among humans, at least). But at any rate, convergence on an ethic doesn’t demonstrate the objective correctness of that ethic from outside that ethic.

Comment by aleph_four on Meta-Preference Utilitarianism · 2020-02-07T02:54:58.872Z · LW · GW

“Most intelligent beings in the multiverse share similar preferences.”

I mean, this could very well be true, but at best it points to some truths about convergent psychological evolution.

“This came about because there are facts about what preferences one should have, just like there exist facts about what decision theory one should use or what prior one should have, and species that manage to build intergalactic civilizations…”

Sure, there are facts about what preferences would best enable the emergence of an intergalactic civilization. I struggle to see these as moral facts.

Also, there’s definitely a manifest-destiny-evoking, unquestioned moralizing of space exploration going on right now, almost as if morality’s importance is only as an instrument to our becoming hegemonic masters of the universe. The angle from which you approached this question is value-laden in an idiosyncratic way (not in a particularly foreign way, here on LessWrong, but value-laden nonetheless).

One can recognize that one would be “better off” with a different preference set without the alternate set being better in some objective sense.

“change them to better fit the relevant moral facts.”

I’m saying the self-reflective process that leads to increased parsimony among moral intuitions does not require the objective reality of moral facts, or even belief in moral realism. I guess this puts me somewhere between relativism and subjectivism, according to your linked post?

Comment by aleph_four on Paper Trauma · 2020-02-06T07:16:20.357Z · LW · GW

ReMarkable solves some of these issues. I am now at the point where I have so many notes written on traditional paper that I do not want to accumulate more, and I cannot efficiently consult the ones I have without OCR and search functionality.

Comment by aleph_four on Meta-Preference Utilitarianism · 2020-02-06T07:01:23.524Z · LW · GW

I’m not entirely sure what moral realism even gets you. Regardless of whether morality is “real,” I still have attitudes towards certain behaviors and outcomes, and attitudes towards other people’s attitudes. I suspect the moral realism debate is confused altogether.

Comment by aleph_four on Money isn't real. When you donate money to a charity, how does it actually help? · 2020-02-06T06:15:18.400Z · LW · GW

“money is the materialization of credit”

woah, a marvelous inversion

Comment by aleph_four on Chris_Leong's Shortform · 2020-02-06T06:03:39.701Z · LW · GW

Before I even got to your comment, I was thinking “You can pry my laptop out of my cold dead hands, Marx!”

Thank you for this clarification on personal vs private property.

Comment by aleph_four on Open & Welcome Thread - February 2020 · 2020-02-06T04:01:23.609Z · LW · GW

As of right now, I think that if business-as-usual continues in AI/ML, most unskilled labor in the transportation/warehousing of goods will be automatable by 2040.

Scott Anderson, Amazon’s director of robotics, puts it at over 10 years: https://www.theverge.com/2019/5/1/18526092/amazon-warehouse-robotics-automation-ai-10-years-away

I don’t think it requires any fundamental new insights to happen by 2040, only engineering effort and currently available techniques.

I believe the economic incentives will align with this automation once it becomes achievable.

Transportation and warehousing currently accounts for ~10% of US employment.

Comment by aleph_four on Money isn't real. When you donate money to a charity, how does it actually help? · 2020-02-06T03:38:56.299Z · LW · GW

Mathematics is certainly more real than money. If we collectively agree that money has no value, it has no value. If we collectively agree that mathematics has no use, it does not stop being an unreasonably effective abstraction for describing natural phenomena.

Comment by aleph_four on ike's Shortform · 2019-09-07T00:18:57.335Z · LW · GW

Well, if qualia aren’t epiphenomenal, then an accurate simulation must include them or deviate into error. Claiming that you could accurately simulate a human but leave out consciousness is just the p-zombie argument in different robes.