Posts

AI-related honours projects? 2013-09-20T03:50:05.604Z

Comments

Comment by klkblake on Tulpa References/Discussion · 2014-01-04T14:16:46.915Z · LW · GW

Two people, if you count random lesswrongers, and ~300 if you count self-reporting in the last tulpa survey (although some of the reports in that survey are a bit questionable).

Comment by klkblake on Tulpa References/Discussion · 2014-01-04T14:09:22.759Z · LW · GW

Being able to reliably succeed on this task is one of the tests I've been using. Mostly, though, it's just a matter of trying to get to the point where we can both be focusing intently on something.

Comment by klkblake on Tulpa References/Discussion · 2014-01-04T14:02:49.144Z · LW · GW

I tried that last week. I lost. We were actively trying to not share our strategies with each other, although in our case abstract knowledge and skills are shared.

Comment by klkblake on Tulpa References/Discussion · 2014-01-03T12:00:23.150Z · LW · GW

Nope.

Comment by klkblake on Tulpa References/Discussion · 2014-01-03T04:04:10.832Z · LW · GW

In terms of form, she's an anthropomorphic fox. At the moment, looking at her is not noticeably different to normal visualisation, except that I don't have to put any effort into it. Explaining it in words is somewhat hard -- she's opaque without actually occluding anything, if that makes sense.

Comment by klkblake on Tulpa References/Discussion · 2014-01-02T14:43:27.001Z · LW · GW

So, I have a tulpa, and she is willing to answer any questions people might have for her. She's not properly independent yet, so we can't do the more interesting stuff like parallel processing, etc, unfortunately (damned akrasia).

Comment by klkblake on Tulpa References/Discussion · 2014-01-02T14:35:16.557Z · LW · GW

There have been a number of reports on the tulpa subreddit from people who have talked to their psychologist about their tulpa. The diagnosis seems to be split 50/50 between "unusual coping mechanism" and "Dissociative Identity Disorder not otherwise specified".

Comment by klkblake on AI-related honours projects? · 2013-09-27T12:22:37.696Z · LW · GW

Sounds like fun! I'll PM you my contact details.

Comment by klkblake on AI-related honours projects? · 2013-09-26T15:50:23.573Z · LW · GW

I might be interested in being your study partner; what would that involve?

Comment by klkblake on AI-related honours projects? · 2013-09-25T13:00:25.410Z · LW · GW

Sorry for the late reply; my mid-semester break just started, which of course meant I came down with a cold :). I've (re-)read the recent papers, and was rather surprised at how much of the maths I was able to understand. I'm feeling less confident about my mathematical ability after reading the papers, but that is probably a result of spending a few hours reading papers I don't fully understand rather than an accurate assessment of my ability. Concept learning seems to be a good backup option, especially since it sounds like something my supervisor would love (except for the part where it's a form of supervised learning, but that's unlikely to be a problem).

I vaguely remember EY mentioning something about there needing to be research into better operating systems and/or better programming languages (in terms of reliability/security/correctness), but this may have been a while ago. I have quite a bit of interest in this area, and some experience as well. Is this something that you think would be valuable (and if so, how valuable compared to work on the main open problems)?

Do you know which of the open problems MIRI is likely to attack first? I'd like to avoid duplication of effort, though I know with the unpredictability of mathematical insight that's not always feasible.

UPDATE: I just had a meeting with my supervisor, and he was pretty happy with all of the options I presented, so that won't be a problem. An idea I had this morning, which I'm pretty excited about, is potentially applying the method from the probabilistic reflection paper to the Halting Problem, since it seems to share the same self-referential structure.

Comment by klkblake on AI-related honours projects? · 2013-09-21T14:06:32.244Z · LW · GW

I'd heard of Idris. Parts of it sound really good (dependent typing, totality, a proper effects system, being usable from Vim), although I'm not a huge fan of tactic-based proofs (that's what the Curry-Howard Isomorphism is for!). It's definitely on the top of my list of languages to learn. I wasn't aware of the security focus, that is certainly interesting.

Proving safety in the face of malicious input sounds fascinating -- a dump would be much appreciated.

Comment by klkblake on AI-related honours projects? · 2013-09-21T13:48:38.398Z · LW · GW

Fairly technical would be good. IEM and the sociological work are somewhat outside my interests. Attending a workshop would unfortunately be problematic; anxiety issues make travelling difficult, especially air travel (I live in Australia). Writing up comments on the research papers is an excellent idea; I will certainly start doing that regardless of what project I do.

Of the subjects listed, I am familiar (in roughly decreasing order) with functional programming, efficient algorithms, parallel computing, discrete math, numerical analysis, linear algebra, and the basics of set theory and mathematical logic. I have "Naive Set Theory", "Introduction to Mathematical Logic", and "Gödel, Escher, Bach" sitting on my desk at the moment, and I am currently taking courses in theory of computation and intelligent systems (a combination AI/machine learning/data mining course). The areas I had planned to learn after the above are incompleteness/undecidability, model theory, and category theory.

In terms of how my prospective advisor could affect things, he's mostly interested in cognitive-science-based AI, with some side interest in theory of computation.

Comment by klkblake on AI-related honours projects? · 2013-09-20T06:37:12.478Z · LW · GW

I haven't heard the term CSE before (computer science & engineering?), but I'm doing a Bachelor of Science, majoring in Computer Science and minoring in Mathematics. I am taking an AI course at the moment (actually, it's a combined AI/data mining course, and it's a bit shallower than I would like, but it covers the basics).

Comment by klkblake on Public Service Announcement Collection · 2013-06-28T09:19:43.118Z · LW · GW

Do you know if this issue would show up on a standard vitamin panel?

Comment by klkblake on Why do theists, undergrads, and Less Wrongers favor one-boxing on Newcomb? · 2013-06-20T13:46:57.578Z · LW · GW

Ah, ok. In that case though, the other agent wins at this game at the expense of failing at some other game. Depending on what types of games the agent is likely to encounter, this agent's effectiveness may or may not actually be better than BestDecisionAgent's. So we could possibly have an optimal decision agent in the sense that no change to its algorithm could increase its expected lifetime utility, but not in the sense of never failing at any game.

Comment by klkblake on Why do theists, undergrads, and Less Wrongers favor one-boxing on Newcomb? · 2013-06-20T11:16:16.714Z · LW · GW

Let BestDecisionAgent choose the $1 with probability p. Then the various outcomes are:

Simulation's choice | Our Choice  | Payoff
$1                  | $1          | $1
$1                  | $2 or $100  | $100
$2 or $100          | $1          | $1
$2 or $100          | $2 or $100  | $2

And so p should be chosen to maximise p^2 + 100p(1-p) + p(1-p) + 2(1-p)^2. This is equal to the quadratic -98p^2 + 97p + 2, which Wolfram Alpha says is maximised by p = 97/196, for an expected payoff of ~$26.

If we are not BestDecisionAgent, and so are allowed to choose separately, we aim to maximise pq + 100p(1-q) + q(1-p) + 2(1-p)(1-q), which simplifies to -98pq + 98p - q + 2, which is maximised by q = 0, for a payoff of ~$50.5. This surprises me; I was expecting to get p = q.

So (3) and (4) are not quite right, but the result is similar. I suspect BestDecisionAgent should be able to pick p such that p = q is the best option for any agent, at the cost of reducing the value it gets.

ETA: Of course you can do this just by setting p = 0, which is what you assume. Which, actually, means that (3) and (4) contradict each other: if BestDecisionAgent always picks the $2 over the $1, then the best any agent can do is $2.
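
For anyone who wants to sanity-check the algebra, here's a quick brute-force sketch (assuming Python with numpy; the grid search is just the laziest way to confirm the two maxima, nothing clever):

    import numpy as np

    # Expected payoff when we ARE BestDecisionAgent: the simulation and
    # we both play $1 with the same probability p.
    def payoff_same(p):
        return p**2 + 100*p*(1-p) + 1*(1-p)*p + 2*(1-p)**2   # = -98p^2 + 97p + 2

    # Expected payoff when the simulation plays $1 with probability p
    # and we independently play $1 with probability q.
    def payoff_diff(p, q):
        return p*q + 100*p*(1-q) + 1*(1-p)*q + 2*(1-p)*(1-q)  # = -98pq + 98p - q + 2

    grid = np.linspace(0, 1, 10001)

    best_p = grid[np.argmax(payoff_same(grid))]
    print(best_p, payoff_same(best_p))       # ~0.4949 (i.e. 97/196), payoff ~$26

    # Fix p at 97/196 and see what q a free agent would pick.
    p = 97.0 / 196.0
    best_q = grid[np.argmax(payoff_diff(p, grid))]
    print(best_q, payoff_diff(p, best_q))    # q = 0, payoff ~$50.5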

(Incidentally, how do you format tables properly in comments?)

Comment by klkblake on Post ridiculous munchkin ideas! · 2013-05-25T13:09:22.771Z · LW · GW

Really? I had the impression that switching was relatively common among people who have had their tulpas for a while. But then, I have drawn this impression from a lot of browsing of r/Tulpa, and only a glance at tulpa.info, so there may be some selection bias there.

I heard about merging here. On the other hand, this commenter seems to think the danger comes from weird expectations about personal continuity.

Comment by klkblake on Post ridiculous munchkin ideas! · 2013-05-22T13:10:41.082Z · LW · GW

This article seems relevant (if someone can find a less terrible pdf, I would appreciate it). Abstract:

The illusion of independent agency (IIA) occurs when a fictional character is experienced by the person who created it as having independent thoughts, words, and/or actions. Children often report this sort of independence in their descriptions of imaginary companions. This study investigated the extent to which adult writers experience IIA with the characters they create for their works of fiction. Fifty fiction writers were interviewed about the development of their characters and their memories for childhood imaginary companions. Ninety-two percent of the writers reported at least some experience of IIA. The writers who had published their work had more frequent and detailed reports of IIA, suggesting that the illusion could be related to expertise. As a group, the writers scored higher than population norms in empathy, dissociation, and memories for childhood imaginary companions.

The range of intensities reported by the writers seems to match up with the reports in r/Tulpas, so I think it's safe to say that it is the same phenomenon, albeit achieved via slightly different means.

Some interesting parts from the paper regarding dissociative disorder:

The subjects completed the Dissociative Experiences Scale, which yields an overall score, as well as scores on three subscales:

  • Absorption and changeability: people's tendency to become highly engrossed in activities (items such as "Some people find that they become so involved in a fantasy or daydream that it feels as though it were really happening to them").
  • Amnestic experiences: the degree to which dissociation causes gaps in episodic memory ("Some people have the experience of finding things among their belongings that they do not remember buying").
  • Derealisation and depersonalisation: things like "Some people sometimes have the experience of feeling that their body does not belong to them".

The subjects had an overall mean score of 18.52 (SD 16.07), whereas the general population scores a mean of 7.8, and a group of schizophrenics scored 17.7. A score of 30 is a commonly used cutoff for "normal" scores; seven subjects exceeded this threshold. The mean scores for the subscales were:

  • Absorption and changeability: 26.22 (SD 14.65).
  • Amnestic experiences: 6.80 (SD 8.30).
  • Derealisation and depersonalisation: 7.84 (SD 7.39).

The latter two subscales are considered particularly diagnostic of dissociative disorders, and the subjects did not differ from the population norms on these. Each of these two subscales had only one subject scoring over 30 (and not the same subject in both cases).

What I draw from this: Tulpas are the same phenomenon as writers interacting with their characters. Creating tulpas doesn't cause other symptoms associated with dissociative disorders. There shouldn't be any harmful long-term effects (if there were, we should have noticed them in writers). That said, there are some interactions that some people have with their tulpas that are outside the range (to my knowledge) of what writers do:

  • Possession
  • Switching
  • Merging

The tulpa community generally endorses the first two as being safe, and claims the last is horribly dangerous, reliably ending in insanity and/or death. I suspect the first one would be safe, but would not recommend trying any of them without more information.

(Note: This is not my field, and I have little experience with interpreting research results. Grains of salt, etc.)

Comment by klkblake on Post ridiculous munchkin ideas! · 2013-05-13T11:52:22.586Z · LW · GW

I think the term is "reference class tennis".

Comment by klkblake on Post ridiculous munchkin ideas! · 2013-05-12T13:54:53.489Z · LW · GW

This is fascinating. I'm rather surprised that people seem to be able to actually see their tulpa after a while. I do worry about the ethical implications though -- given what we see in split-brain patients, it seems plausible that a tulpa may actually be a separate person. Indeed, if this is true, and the tulpa's memories aren't being confabulated on the spot, it would suggest that the host would lose the use of the part of their brain that is running the tulpa, decreasing their intelligence. Which is a pity, because I really want to try this, but I don't want to risk permanently decreasing my intelligence.

Comment by klkblake on The Beauty of Settled Science · 2012-03-06T13:25:22.610Z · LW · GW

This may just be a temporary glitch, but this post appears to have had its content replaced with that of Mundane Magic.

Comment by klkblake on Open Thread: September 2011 · 2011-09-05T22:52:19.067Z · LW · GW

I knew Kolmogorov complexity was used in Solomonoff induction, and I was under the impression that using Universal Turing Machines was an arbitrary choice.

Comment by klkblake on Open Thread: September 2011 · 2011-09-05T11:18:19.511Z · LW · GW

I'm confused about Kolmogorov complexity. From what I understand, it is usually expressed in terms of Universal Turing Machines, but can be expressed in any Turing-complete language, with no difference in the resulting ordering of programs. Why is this? Surely a language that had, say, natural language parsing as a primitive operation would have a very different complexity ordering than a Universal Turing Machine?
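
(The result I have in mind is the invariance theorem, which as far as I can tell only says that the complexities agree up to an additive constant:

    K_U(x) \le K_V(x) + c_{U,V}

where c_{U,V} depends on the two languages but not on x -- which seems weaker than literally preserving the ordering, hence my confusion.)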

Comment by klkblake on Rationality Quotes August 2011 · 2011-08-05T12:32:51.317Z · LW · GW

^W means Control-W, the control character that terminals and many editors interpret as "delete previous word".