MakoYass's Shortform

post by MakoYass · 2020-04-19T00:12:46.448Z · LW · GW · 18 comments


comment by MakoYass · 2020-08-31T01:50:25.590Z · LW(p) · GW(p)

There's a lot of "Neuralink will make it easier to solve the alignment problem" stuff going around the mainstream internet right now in response to Neuralink's recent demo.

I'm inclined to agree with Eliezer that this seems wrong: either AGI will be aligned, in which case it will make its own neuralink and won't need ours, or it will be unaligned, in which case you really wouldn't want to connect with it. You can't make horses competitive with cars by giving them exoskeletons.

But, is there much of a reason to push back against this?

Providing humans with cognitive augmentation probably would help to solve the alignment problem, in a bunch of indirect ways.

It doesn't seem like a dangerous error at all. It feeds a public desire to understand how AGI might work. Neuralink itself is a great project for medical science. Generally, wrong beliefs cause bad consequences, but I'm having difficulty seeing what they'd be here.

comment by capybaralet · 2020-09-15T07:51:32.701Z · LW(p) · GW(p)

The obvious bad consequence is a false sense of security leading people to just get BCIs instead of trying harder to shape (e.g. delay) AI development.

" You can't make horses competitive with cars by giving them exoskeletons. " <-- this reads to me like a separate argument, rather than a restatement of the one that came before.

I agree that BCI seems unlikely to be a good permanent/long-term solution, unless it helps us solve alignment, which I think it could. It could also just defuse a conflict between AIs and humans, leading us to gracefully give up our control over the future light cone instead of fighting a (probably losing) battle to retain it.


...Your post made me think more about my own (and others') reasons for rejecting Neuralink as a bad idea... I think there's a sense of "we're the experts and Elon is a n00b". This coupled with feeling a bit burned by Elon first starting his own AI safety org and then ditching it for this... overall doesn't feel great.

comment by MakoYass · 2020-09-15T20:49:38.176Z · LW(p) · GW(p)

I've never been mad at Elon for not having decision-theoretic alignmentism. I wonder, should I be mad? Should I be mad about the fact that he has never talked to Eliezer (Eliezer said that in passing a year or two ago on Twitter), even though he totally could whenever he wanted?

Also, what happened at OpenAI? He appointed some people to solve the alignment problem; I think we can infer that they told him, "you've misunderstood something, and the approach you're advocating (proliferate the technology?) wouldn't really be all that helpful", and he responded badly to that? They never reached mutual understanding?

comment by MakoYass · 2020-04-19T00:12:47.679Z · LW(p) · GW(p)

Considering doing a post about how it's possible the Society for Cryobiology might be wrong about cryonics. It would have something to do with the fact that, at least until recently, no cryobiologist who was seriously interested in cryonics was allowed to be a member,

but I'm not sure... their current position statement is essentially "it is outside the purview of the Society for Cryobiology", which, if sincere, would have to mean that the beef is over?

(The statement: https://www.societyforcryobiology.org/assets/documents/Position_Statement_Cryonics_Nov_18.pdf)

comment by MakoYass · 2020-08-21T07:36:39.571Z · LW(p) · GW(p)

Hmm. It appears to me that Qualia are whatever observations affect indexical claims, and anything that affects indexical claims is a qualia, and this is probably significant

comment by G Gordon Worley III (gworley) · 2020-08-21T15:59:00.925Z · LW(p) · GW(p)

Yes, this seems straightforwardly true, although I don't think it's especially significant unless I'm failing to think of some relevant context about why you think indexical claims matter so much (but then I don't spend a lot of time thinking very hard about semantics in a formal context, so maybe I'm just failing to grasp what all is encompassed by "indexical").

comment by MakoYass · 2020-08-22T10:51:44.056Z · LW(p) · GW(p)

It's important because demystifying qualia would win esteem in a very large philosophical arena, heh. More seriously though, it seems like it would have to strike at something close to the heart of the meaning of agency.

comment by Max Kaye (max-kaye) · 2020-08-24T06:02:21.705Z · LW(p) · GW(p)

Hmm. It appears to me that Qualia are whatever observations affect indexical claims, and anything that affects indexical claims is a qualia

I don't think so, here is a counter-example:

Alice and Bob start talking in a room. Alice has an identical twin, Alex. Bob doesn't know about the twin and thinks he's talking to Alex. Bob asks: "How are you today?". Before Alice responds, Alex walks in.

Bob's observation of Alex will surprise him, and he'll quickly figure out that something's going on. But more importantly: Bob's observation of Alex alters the indexical 'you' in "How are you today?" (at least compared to Bob's intent, and it might change for Alice if she realises Bob was mistaken, too).

I don't think this is anything close to describing qualia. The experience of surprise can be a quale, the feeling of discovering something can be a quale (eureka moments), the experience of the colour blue is a quale, but the observation of Alex is not.

Do you agree with this? (It's from https://plato.stanford.edu/entries/indexicals/)

An indexical is, roughly speaking, a linguistic expression whose reference can shift from context to context. For example, the indexical ‘you’ may refer to one person in one context and to another person in another context.

Btw, 'qualia' is the plural form of 'quale'

comment by MakoYass · 2020-08-24T13:05:13.751Z · LW(p) · GW(p)

That's a well-constructed example, I think, but no, that seems to be a completely different sense of "indexical". The concept of indexical uncertainty we're interested in is... I think... uncertainty about which kind of body or position in the universe your seat of consciousness is in, given that there could be more than one. The Sleeping Beauty problem is the most widely known example. The mirror chamber was another example.

comment by Max Kaye (max-kaye) · 2020-08-25T00:54:58.616Z · LW(p) · GW(p)
The concept of indexical uncertainty we're interested in is... I think... uncertainty about which kind of body or position in the universe your seat of consciousness is in, given that there could be more than one.

I'm not sure I understand yet, but does the following line up with how you're using the word?

Indexical uncertainty is uncertainty around the exact matter (or temporal location of such matter) that is directly facilitating, and required by, a mind. (This could be your mind or another person's mind.)

Notes:

  • "exact" might be too strong a word
  • I added "or temporal location of such matter" to cover the Sleeping Beauty case (which, btw, I'm apparently a halfer or double halfer on according to Wikipedia's classifications, but I haven't thought much about it)

Edit/PS: I think my counter-example with Alice, Alex, and Bob still works with this definition.

comment by TAG · 2020-08-22T11:09:23.080Z · LW(p) · GW(p)

I can see how this might result from confusing consciousness qua phenomenality with consciousness qua personal identity.

comment by MakoYass · 2020-08-24T13:08:28.888Z · LW(p) · GW(p)

I think I'm saying those are going to turn out to be the same thing, though I'm not sure exactly where that intuition is coming from yet. Could be wrong.

comment by TAG · 2020-08-21T15:19:19.288Z · LW(p) · GW(p)

Qualia are whatever observations affect indexical claims

Why would that be the case?

comment by MakoYass · 2020-08-21T07:51:07.087Z · LW(p) · GW(p)

As I get closer to posting my proposal to build a social network that operates on curators recommended via webs of trust, it is becoming easier for me to question existing collaborative filtering processes.

And, damn, scores on posts are pretty much meaningless if you don't know how many people have seen the post, how many tried to read it, how many read all of it, and what the up/down ratio is. If you're missing one of those pieces of information, then there exists an explanation for a low score that has no relationship to the post's quality, and you can't use the score to make a decision as to whether to give it a chance.
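A minimal sketch of what I mean, with made-up field names and a made-up adjustment formula, just to show how the same raw score reads very differently once exposure is known:

```python
# Minimal sketch: all field names and the adjustment formula are invented
# for illustration, not how any existing site computes scores.
from dataclasses import dataclass

@dataclass
class PostStats:
    score: int        # net karma as displayed
    upvotes: int
    downvotes: int
    impressions: int  # people who saw the post listed somewhere
    opens: int        # people who clicked through
    full_reads: int   # people who read to the end

def exposure_adjusted_score(p: PostStats) -> float:
    """Net score per reader who actually finished the post,
    weighted by how one-sided the voting was."""
    votes = p.upvotes + p.downvotes
    if p.full_reads == 0 or votes == 0:
        return 0.0
    approval = p.upvotes / votes
    return (p.score / p.full_reads) * approval

# The same displayed score of +3 can mean "barely seen, well liked"
# or "widely seen, contested":
barely_seen = PostStats(3, 4, 1, impressions=60, opens=15, full_reads=6)
widely_seen = PostStats(3, 9, 6, impressions=6000, opens=900, full_reads=400)
print(exposure_adjusted_score(barely_seen))  # 0.4
print(exposure_adjusted_score(widely_seen))  # 0.0045
```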

comment by MakoYass · 2020-11-24T21:02:05.199Z · LW(p) · GW(p)

Idea: Screen burn correction app that figures out how to exactly negate your screen's issues by pretty much looking at itself in a mirror through the selfie cam, trying to display pure white, remembering the imperfections it sees, then tinting everything with the negation of that from then on.

Nobody seems to have made this yet. I think there might be apps for tinting your screen in general, but they don't know the specific quirks of your screen burn. Most of the apps for screen burn recommend that you just burn in every color over the parts of the screen that aren't damaged yet, so that everything gets to be equally damaged, which seems like a really bad thing to be recommending.
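A rough sketch of the correction step in Python/NumPy, assuming the selfie-cam photo of the screen has already been cropped and perspective-warped to screen coordinates (that registration step is the hard part, and is skipped here). Since a panel can only be dimmed, not boosted, the map pulls healthy pixels down toward the level of the most burned-in ones:

```python
import numpy as np

def compensation_map(white_capture: np.ndarray) -> np.ndarray:
    """white_capture: HxWx3 float array in [0, 1], a photo of the screen
    displaying pure white, taken via a mirror with the selfie cam and
    already warped/cropped to screen coordinates.

    Returns a per-pixel, per-channel gain in (0, 1]. Multiplying every
    displayed frame by this gain dims undamaged pixels down to the level
    of the most burned-in ones, so the output looks uniform again (at the
    cost of overall brightness)."""
    measured = np.clip(white_capture, 1e-3, 1.0)      # guard against zeros
    floor = measured.min(axis=(0, 1), keepdims=True)  # dimmest response per channel
    return floor / measured                           # gain == 1 at the worst pixel, < 1 elsewhere

def tint_frame(frame: np.ndarray, gain: np.ndarray) -> np.ndarray:
    """Apply the compensation tint to a frame (HxWx3 floats in [0, 1])."""
    return np.clip(frame * gain, 0.0, 1.0)
```

In practice you'd probably want to average several captures and smooth the result, since camera noise would otherwise get baked into the tint.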

comment by MakoYass · 2020-08-16T06:14:36.932Z · LW(p) · GW(p)

Wild Speculative Civics: What if we found ways of reliably detecting when tragedies of the commons have occurred, then artificially increased their cost to anyone who might have participated in creating them (by charging enormous fines), until it's no longer even individually rational to contribute to them?
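A toy payoff calculation with entirely made-up numbers, just to illustrate the threshold where contributing stops being individually rational:

```python
# Toy numbers, purely illustrative.
N = 1000                    # people sharing the commons
private_gain = 10.0         # my benefit from one more unit of exploitation
social_cost = 30.0          # total harm that unit causes, spread over all N
my_share_of_harm = social_cost / N          # 0.03 -- why defecting normally pays

# Without intervention: 10.0 > 0.03, so exploiting is individually rational.
# With the proposed scheme: once the tragedy is detected, fine each likely
# participant enough that the expected fine wipes out the private gain.
detection_prob = 0.5                        # chance the tragedy is detected and attributed
required_fine = private_gain / detection_prob   # 20.0 per unit
expected_payoff = private_gain - detection_prob * required_fine
print(expected_payoff <= 0)                 # True: no longer worth contributing
```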

comment by ChristianKl · 2020-08-16T11:02:23.007Z · LW(p) · GW(p)

That sounds like punishing any usage of common resources, which is likely undesirable.

Good policy for managing an individual commons requires thinking through how its usage is best managed. Elinor Ostrom did a lot of research into what works for setting up good systems.

comment by Viliam · 2020-08-17T21:25:01.444Z · LW(p) · GW(p)

Sounds like auctioning the usage of the common.

I can imagine a few technical problems, like determining what level of usage is optimal (you don't want people to overfish the lake, but you don't know exactly how many fish there are), or the costs of policing. But it would be possible to propose a few dozen situations where this strategy could be used, and address these issues individually; and then perhaps only use the strategy in some of them. Or perhaps, by examining individual specific cases, we would discover a common pattern for why this doesn't work.