Posts

The Waker - new mode of existence 2015-07-12T14:00:29.535Z
Surprising examples of non-human optimization 2015-06-14T17:05:16.214Z
Baysian conundrum 2014-10-13T00:39:45.207Z
Tips for writing philosophical texts 2014-08-31T22:38:38.332Z
Prediction of the Internet 2014-08-01T18:06:49.733Z
Paperclip Maximizer Revisited 2014-06-19T01:25:03.716Z

Comments

Comment by Jan_Rzymkowski on A list of apps that are useful to me. (And other phone details) · 2015-08-22T21:43:25.486Z · LW · GW

Does anybody know of a mood-tracking app that asks about your mood at a random time of day? (A simple rating of mood, and maybe a small question about whether something happened that day influencing your mood.) All the apps I found required me to open them, which meant I would forget to rate my mood, or when I was down I just couldn't be bothered. So it would be perfect if it just popped up a daily alert, made me choose something, and then disappeared.

Comment by Jan_Rzymkowski on Fragile Universe Hypothesis and the Continual Anthropic Principle - How crazy am I? · 2015-08-18T20:11:02.628Z · LW · GW
  1. It must kill you (at least make you unconscious) on a timescale shorter than that on which you can become aware of the outcome of the quantum coin-toss
  2. It must be virtually certain to really kill you, not just injure you.

Both seem to be at odds with the Many-Worlds Interpretation. In an infinite number of worlds it will merely injure you, and/or you will become aware beforehand due to some malfunction.

Comment by Jan_Rzymkowski on Versions of AIXI can be arbitrarily stupid · 2015-08-12T21:36:21.349Z · LW · GW

Isn't this a formalization of Pascal's mugging? It also reminds me of the human sacrifice problem: if we don't sacrifice a person, the Sun won't come up the next day. We have no proof, but how can we check?

Comment by Jan_Rzymkowski on Some concepts are like Newton's Gravity, others are like... Luminiferous Aether? · 2015-08-12T19:47:10.996Z · LW · GW

A good AI (not only Friendly, but useful to the fullest extent) would understand the intention, and hence answer that luminiferous aether is not a valid way of explaining the behavior of light.

Comment by Jan_Rzymkowski on Bragging thread August 2015 · 2015-08-06T21:14:12.380Z · LW · GW

After years of confusion and lengthy hours of figuring it out, in a brief moment I finally understood how it is possible for cryptography to work and how Alice and Bob can share secrets despite a middleman listening from the start of their conversation. And of course now I can't imagine not getting it earlier.
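(For anyone else stuck on the same question, a minimal sketch of the standard idea, Diffie-Hellman key exchange, with toy numbers - real systems use enormous primes and vetted libraries.)

```python
# Toy Diffie-Hellman key exchange: Alice and Bob agree on a shared secret
# even though Eve sees every message. Numbers here are tiny for illustration;
# real systems use primes hundreds of digits long.

p = 23   # public prime modulus (known to everyone, including Eve)
g = 5    # public generator

a = 6    # Alice's private number (never sent)
b = 15   # Bob's private number (never sent)

A = pow(g, a, p)   # Alice sends A = g^a mod p  -> Eve sees this
B = pow(g, b, p)   # Bob sends   B = g^b mod p  -> Eve sees this

# Each side combines the other's public value with their own private number.
alice_secret = pow(B, a, p)   # (g^b)^a mod p
bob_secret   = pow(A, b, p)   # (g^a)^b mod p

assert alice_secret == bob_secret
# Eve knows p, g, A and B, but recovering a or b from them (the discrete
# logarithm problem) is what's assumed to be computationally hard.
```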

Comment by Jan_Rzymkowski on We really need a "cryonics sales pitch" article. · 2015-08-06T15:21:09.093Z · LW · GW

Is there a foundation devoted to the promotion of cryonics? If not, it would probably be very desirable to create one. Popularizing cryonics could save an incredible number of existences, so many people supporting cryonics would probably be willing to donate money toward more organized promotion. Not to mention the personal gains: the more popular cryonics becomes, the lower the costs and the better the logistics.

If you are, or know, someone who supports cryonics and has experience with non-profit organisations or professional promotion, please consider this.

Comment by Jan_Rzymkowski on The Waker - new mode of existence · 2015-07-13T12:30:45.262Z · LW · GW

I'm sorry for the overly light-hearted presentation. It seemed suited to presenting what is, to simplify greatly, a form of fun.

The Waker's reality doesn't really rely on dreams, but on waking in new realities and on a form of paradoxical commitment, equally, to the reality she lives in and to a random reality she would wake up in.

Its rationale is purely a step in exploring new experiences, a form of meta-art. Once human and transhuman needs have been fulfilled, posthumans (and here at least I expect my future self) would search for entirely new ways of existing, new subjectivities. That is what I consider posthumanism: meddling with the most basic imperatives of conscious existence.

I see it as just one possibility to explore, something to let copies of myself experience. (Those are not independent copies, however; I imagine a whole cluster of my selves, interconnected and gathering understanding of each other's perceived realities. Those living Wakers' lives would be less concerned with the existence of other copies; rather, their experiences would be watched by higher-level copies.)

Comment by Jan_Rzymkowski on The Waker - new mode of existence · 2015-07-13T12:16:38.943Z · LW · GW

Disclaimer: This comment may sound very crackpottish. I promise the ideas in it aren't as wonky as they seem, but it would be too hard to explain them properly in such a short space.

By living your life in this way, you'd be divorcing yourself from reality.

Here comes the notion that in posthumanism there is no definite reality. Reality is a product of experiences and of how your choices influence those experiences. In posthumanism, however, you can modify it freely. What we call reality is a very local phenomenon.

Anyhow, it's not the case that your computing infrastructure would be in danger: it would either be protected by some powerful AI, much better suited to protecting your infrastructure than you are, or there would be other copies of you handling maintenance in "meatspace". (Again, I strongly believe that it's only our contemporary perspective that makes us feel that the reality in which the computations are performed is more real than the virtual reality.)

What's more, a Waker can be perfectly aware that there is a world beyond her experience and may occasionally leave her reality.

Comment by Jan_Rzymkowski on The Waker - new mode of existence · 2015-07-13T11:59:23.553Z · LW · GW

Well, creating new realities at will and switching between them is an example of a Hub World. And I expect that would indeed be the first thing new posthumans would go for. But this type of existence is stripped of many restrictions, which in a way make life interesting and give it structure. So I expect some posthumans (amongst them, the future me) to create curated copies of themselves which would gather entirely new experiences, like the Waker's subjectivity. (Their experiences would be reported to some top-level copy.)

You see, a Waker doesn't consider waking to be abandoning everything, the way we do. She doesn't feel abandonment, the same way we don't feel we have abandoned everything and everyone in a dream. She is perfectly aware of both the current world and the world to come, and they feel exactly as real as each other.

One other way to state it: staying in one reality forever feels to a Waker the way staying in a dream and never waking up to experience the actual reality would feel to us.

Comment by Jan_Rzymkowski on The Waker - new mode of existence · 2015-07-13T11:15:19.069Z · LW · GW

There are, of course, many possible variants. The one I focus on is largely solipsistic, where all the people are generated by an AI. Keep in mind that the AI needs to fully emulate only a handful of personas, and they're largely recycled in the transition to a new world. (Option 2, then.)

I can understand your moral reservations; we should, however, keep the distinction between a real instantiation and an AI's persona. Imagine the reality-generating AI as a skilful actor and writer. It generates a great number of personas with different stories, personalities and apparent internal subjectivity. When you read a good book, you usually cannot tell whether the events and people in it are real or made up; the same goes for a skilful improv actor - you cannot tell whether you are seeing a real person or just a persona. In that sense they all pass the Turing test. However, you wouldn't consider a writer to be killing a real person when he ceases to write about some fictional character, or an actor to be killing a real person when she stops acting.

Of course, you may argue that it makes the Waker's life meaningless if she is surrounded by pretenders. But that seems silly; her relationships with other people are the same as yours.

Comment by Jan_Rzymkowski on The Waker - new mode of existence · 2015-07-12T22:10:44.802Z · LW · GW

I don't think it is any more horrifying than being stuck in one reality, treasuring memories. It is certainly less horrifying than our current human existence, with its prospects of death, suffering, boredom, heartache, etc. Your fear seems to be simply of something different from what you're used to.

Comment by Jan_Rzymkowski on Surprising examples of non-human optimization · 2015-06-14T22:24:37.776Z · LW · GW

Actually, for (2) the optimizer didn't know the set of rules; it played the game as if it were a normal player, controlling only the keyboard. It in fact started exploiting "bugs" of which its creator was unaware. (E.g. in Super Mario, Mario can stomp enemies in mid-air, from below, as long as at the moment of collision he is already falling.)

Comment by Jan_Rzymkowski on Surprising examples of non-human optimization · 2015-06-14T21:02:24.474Z · LW · GW

I am more interested in optimizations where an agent finds a solution vastly different from what humans would come up with, somehow "cheating" or "hacking" the problem.

Slime mold and soap bubbles produce results quite similar to those of human planners. Anyhow, it would be hard to strongly outperform humans (that is, to find a surprising solution) at problems of the minimal-tree type - our visual cortices are quite specialized for this kind of task.

Comment by Jan_Rzymkowski on Are conferences an inefficient/terrible discussion forum (in addition to academic papers)? · 2015-06-04T23:56:44.364Z · LW · GW

Let's add here that most scientists treat conferences as a form of vacation funded by academia or grant money, so there is a strong bias toward finding reasons for their necessity and/or benefits.

Comment by Jan_Rzymkowski on The paperclip maximiser's perspective · 2015-05-01T19:44:40.965Z · LW · GW

"I would not want to be an unconscious automaton!"

I strongly doubt that such a sentence bears any meaning.

Comment by Jan_Rzymkowski on The paperclip maximiser's perspective · 2015-05-01T13:24:34.947Z · LW · GW

Well, humans have existentialism despite its having no utility. It just seems like a glitch that you end up with when your consciousness/intelligence reaches a certain level. (My reasoning is this: high intelligence requires analysing many "points of view", many counterfactuals; technically, they end up internalized to some degree.) A human exercising his general intelligence, a process that lets him reproduce better, ends up wondering about the meaning of life. That could in turn drastically decrease his willingness to reproduce, but it is overridden by imperatives. In the same way, I believe an AGI would have subjective conscious experiences - as a form of glitch of general intelligence.

Comment by Jan_Rzymkowski on Astronomy, space exploration and the Great Filter · 2015-04-21T19:34:20.688Z · LW · GW

If we're in a simulation, this implies that with high probability either a) the laws of physics in the parent universe are not our own laws of physics (in which case the entire idea of ancestor simulations fails)

It doesn't have to be a simulation of ancestors; we may be an example of any civilisation, life, etc. While our laws of physics seem complex and weird (for the macroscopic effects they generate), they may actually be very primitive in comparison to the parent universe's physics. We cannot possibly estimate the computational power of the parent universe's computers.

Comment by Jan_Rzymkowski on Astronomy, space exploration and the Great Filter · 2015-04-21T19:27:18.121Z · LW · GW

You seem to be bottom-lining. Earlier you gave cold reversible-computing civs reasonable probability (and doubt); now you seem to treat it as an almost sure scenario for civ development.

Comment by Jan_Rzymkowski on Resolving the Fermi Paradox: New Directions · 2015-04-18T19:07:44.296Z · LW · GW

Does anybody know if dark matter can be explained as artificial systems based on known matter? It fits the description of a stealth civilization well, if there is no way to nullify gravitational interaction (which seems plausible). It would also explain why there is so much dark matter - most of the universe's mass has already been used up by alien civs.

Comment by Jan_Rzymkowski on Snape's knowledge of valence shells · 2015-04-13T19:22:05.030Z · LW · GW

Overscrupulous chemistry major here. Both Harry and Snape are wrong. By the Pauli exclusion principle, an orbital can host only two electrons. But at the same time, there is no outermost orbital - valence shells are only an oversimplified description of the atom; actually, so oversimplified that no one should bother writing it down. Speaking of the HOMOs of a carbon atom (the highest [in energy] occupied molecular orbitals), each holds only one electron.

Comment by Jan_Rzymkowski on Discussion of Slate Star Codex: "Extremism in Thought Experiments is No Vice" · 2015-03-28T15:32:42.548Z · LW · GW

My problem with such examples is that they seem more like Dark Arts emotional manipulation than an actual argument. What your mind hears is that if you don't believe in God, people will come to your house and kill your family - and that if you believed in God they wouldn't do that, because they'd somehow fear God. I don't see how this is anything but an emotional trick.

I understand that sometimes you need to cut out the nuance in moral thought experiments, like equating taxes with being threatened with kidnapping if you don't regularly pay a racket. But the opposite is creating lurid, graphic visions. Watching your loved one being raped is not as bad as losing a loved one - but it creates a much stronger psychological effect, targeted at emotional blackmail.

Comment by Jan_Rzymkowski on [LINK] The Wrong Objections to the Many-Worlds Interpretation of Quantum Mechanics · 2015-02-20T23:45:12.549Z · LW · GW

Can anybody point me to what the choice of interpretation changes? From what I understand it is just an interpretation, so there is no difference in what Copenhagen/MWI predict, and falsification isn't possible. But for some reason MWI seems to be highly esteemed on LW - why?

Comment by Jan_Rzymkowski on Open thread, Jan. 26 - Feb. 1, 2015 · 2015-01-31T17:58:05.867Z · LW · GW

A small observation of mine: while watching out for the sunk cost fallacy, it's easy to go too far and assume that repeating the same purchase is the rational thing. Imagine you bought a TV and on the way home you dropped it, destroying it beyond repair. Should you just go and buy the same TV, since the cost is sunk? Not necessarily - when you were buying the TV the first time, you were richer by the price of the TV. Since you are now poorer, spending that much money might no longer be optimal for you.
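A toy illustration of the point, with made-up numbers and a simple log-utility model of diminishing marginal utility of money:

```python
import math

# Toy model (numbers made up): log utility of wealth plus a fixed bonus
# for owning a TV.
TV_PRICE = 500
TV_VALUE = 0.3          # utility of owning the TV (hypothetical)

def utility(wealth, has_tv):
    return math.log(wealth) + (TV_VALUE if has_tv else 0)

# Before the first purchase you had 2000; buying was (just) worth it:
print(utility(2000 - TV_PRICE, True) > utility(2000, False))   # True

# After the accident you have 1500 and no TV; buying again is not:
print(utility(1500 - TV_PRICE, True) > utility(1500, False))   # False
```

At the original wealth the TV was worth its price; at the reduced wealth the same 500 costs more in utility than the TV delivers.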

Comment by Jan_Rzymkowski on Baysian conundrum · 2014-10-13T19:44:04.351Z · LW · GW

Big thanks for pointing me to Sleeping Beauty.

It is a solution for me - it doesn't feel like suffering, just as a few minutes of teasing before sex doesn't feel that way.

Comment by Jan_Rzymkowski on Baysian conundrum · 2014-10-13T19:40:38.359Z · LW · GW

What I had in mind isn't a matter of manually changing your beliefs, but rather making an accurate prediction of whether or not you are in a simulated world (which is about to become distinct from the "real" world), based on your knowledge about the existence of such simulations. It could just as well be that you asked your friend to simulate 1000 copies of you in that moment and to teleport you to Hawaii as 11 AM strikes.
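For concreteness, under the naive self-location count (treating each instance having this exact experience, original or simulated, as equally likely to be "you"), the arithmetic is simple: with 1 original and 1000 simulated copies, P(I am one of the copies) = 1000/1001, roughly 99.9%.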

Comment by Jan_Rzymkowski on Baysian conundrum · 2014-10-13T19:27:18.712Z · LW · GW

By "me" I consder this particular instance of me, which is feeling that it sits in a room and which is making such promise - which might of course be a simulated mind.

Now that I think about it, it seems to be a problem with a cohesive definition of identity and of the notion of "now".

Comment by Jan_Rzymkowski on Baysian conundrum · 2014-10-13T18:33:51.883Z · LW · GW

Anthropic measure (magic reality fluid) measures what the reality is - it's like how an outside observer would see things. Anthropic measure is more properly possessed by states of the universe than by individual instances of you.

It doesn't look like a helpful notion and seems very tautological. How do I observe this anthropic measure - how can I make any guesses about what the outside observer would see?

Even though you can make yourself expect (probability) to see a beach soon, it doesn't change the fact that you actually still have to sit through the cold (anthropic measure).

Continuing - how do I know I'd still have to sit through the cold? Maybe I am in my simulated past; in the hypothetical scenario that's a very down-to-earth assumption.

Sorry, but the above doesn't clarify anything for me. I may accept that the concept of probability is out of scope here - that Bayesianism doesn't work for guessing whether one is or isn't in a certain simulation - but I don't know if that's what you meant.

Comment by Jan_Rzymkowski on Open thread, Sept. 1-7, 2014 · 2014-09-05T08:13:10.740Z · LW · GW

What is R? LWers use it very often, but a Google search doesn't provide any answers - which isn't surprising, since it's only one letter.

Also: why is it considered so important?

Comment by Jan_Rzymkowski on Tips for writing philosophical texts · 2014-09-01T20:09:42.523Z · LW · GW

I'd say the only requirement is spending some time living on Earth.

Thanks, I'll get to sketching drafts. But it'll take some time.

Comment by Jan_Rzymkowski on The Octopus, the Dolphin and Us: a Great Filter tale · 2014-08-30T17:59:47.965Z · LW · GW

There's also an important difference in their environment. The underwater world (oceans, seas, lagoons) seems much poorer. There are no trees underwater to climb, no branches or sticks that could be used for tools; you can't use gravity to devise traps; there's no fire, much simpler geology, little prospect for farming, etc.

Comment by Jan_Rzymkowski on The Great Filter is early, or AI is hard · 2014-08-30T12:22:53.436Z · LW · GW

Or, conversely, the Great Filter doesn't prevent civilizations from colonising galaxies, and we were colonised a long time ago. Hail Our Alien Overlords!

And I'm serious here. The zoo hypothesis seems very conspiracy-theory-ish, but generalised curiosity is one of the requirements for developing a civ capable of galaxy colonisation, a powerful enough civ can sacrifice a few star systems for research purposes, and it seems that the most efficient way of simulating biological evolution or civ development is actually letting a planet develop on its own.

Comment by Jan_Rzymkowski on Questions on the human path and transhumanism. · 2014-08-12T22:34:11.431Z · LW · GW

It's not impossible that human values are themselves conflicted. The mere existence of an AGI would "rob" us of that, because even if the AGI refrained from doing all the work for humans, it would still be "cheating" - the AGI could do all of it better, so human achievement is still pointless. And since we may not want to be fooled (made to think that this is not the case), it is possible that in this regard even the best optimisation must result in some loss.

Anyway - I can think of at least two more ways. The first is creating games that closely simulate the "joy of work". The second, my favourite, is humans becoming part of the AGI - in other words, the AGI sharing parts of its superintelligence with humans.

Comment by Jan_Rzymkowski on Maybe we're not doomed · 2014-08-03T14:18:58.594Z · LW · GW

The PD is not a suitable model for MAD. It would be if a pre-emptive attack on an opponent guaranteed his utter destruction and eliminated the threat. But that's not the case - even with a carefully orchestrated attack, there is a great chance of retaliation. Since the military advantage of a pre-emptive attack is not preferred over the absence of war, this game doesn't necessarily point to the defect-defect scenario.

This could probably be better modeled with some form of iterated PD in which the number of iterations and the values of the outcomes depend on the decisions made along the way - which I guess would be non-linear.
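Not the full iterated model, but a minimal expected-value sketch of the retaliation point (all payoffs and probabilities made up):

```python
# Toy expected-value model of a first strike under MAD (numbers made up).
# Unlike the one-shot PD, defecting first does not guarantee the (win, lose)
# outcome; with probability (1 - p_disarm) the opponent retaliates.

PEACE       = 0      # baseline: no war
CLEAN_WIN   = 10     # opponent disarmed, threat eliminated
MUTUAL_RUIN = -1000  # strike answered by retaliation

p_disarm = 0.3       # chance a first strike fully disarms the opponent

ev_strike = p_disarm * CLEAN_WIN + (1 - p_disarm) * MUTUAL_RUIN
ev_wait   = PEACE

print(ev_strike, ev_wait)   # -697.0 vs 0: striking first is not preferred
```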

Comment by Jan_Rzymkowski on Prediction of the Internet · 2014-08-01T21:33:58.746Z · LW · GW

It wasn't my intent to give a compelling definition. I meant to highlight which features of the internet I find important and novel as a concept.

Comment by Jan_Rzymkowski on Will AGI surprise the world? · 2014-06-22T19:50:57.977Z · LW · GW

Sounds very reasonable.

Comment by Jan_Rzymkowski on Will AGI surprise the world? · 2014-06-22T19:46:52.062Z · LW · GW

I'm not willing to engage in a discussion where I defend my guesses and attack your prediction. I have neither sufficient knowledge nor the desire to do that. My purpose was to ask for any stable basis for AI development predictions and to point out one possible bias.

I'll use this post to address some of your claims, but don't treat it as an argument about when AI will be created:

How are Ray Kurzweil's extrapolations empirical data? If I'm not wrong, all he takes into account is computational power. Why would that be enough to allow for AI creation? By 1900 the world had enough resources to create computers, and yet it wasn't possible, because the technology wasn't known. By 2029 we may have the proper resources (computational power) but still lack the knowledge of how to use them (what programs to run on those supercomputers).

I'm not sure what you're saying here. That we can assume AI won't arrive next month because it didn't arrive last month, or the month before last, etc.? That seems like shaky logic.

I'm saying that, I guess, everybody would agree that AI will not arrive in a month. I'm interested in what basis we have for making such a claim. I'm not trying to make an argument about when AI will arrive; I'm genuinely asking.

You're right about the comforting factor of AI coming soon; I hadn't thought of that. But still, development of AI in the near future would probably mean that its creators hadn't solved the friendliness problem. Current methods are very black-box. More than that, I'm a bit concerned about current morality and government control. I'm a bit scared of what the people of today might do with such power. You don't like gay marriage? AI can probably "solve" that for you. Or maybe you want financial equality for humanity? Same story. I would agree, though, that it's hard to tell where our preferences would point.

If you assume the worst case that we will be unable to build AGI any faster than direct neural simulation of the human brain, that becomes feasible in the 2030's on technological pathways that can be foreseen today.

Are you taking into account that to this day we don't truly understand the biological mechanism of memory formation and the development of neuron connections? Can you point me to any predictions made by brain researchers about when we may expect technology allowing a full scan of the human connectome, and how close we are to understanding brain dynamics (the creation of new synapses, control of their strength, etc.)?

Once you are able to simulate the brain of a computational neuroscientist and give it access to its own source code, that is certainly enough for a FOOM.

I'm tempted to call that bollocks. Would you expect a FOOM if you gave said scientist a machine telling him which neurons are connected and allowing him to manipulate them? Humans can't even understand a nematode's neural network. You expect them to understand the whole 100-billion-neuron human brain?

Sorry for the above; it would need a much longer discussion, but I really don't have the strength for that.

I hope it will be helpful in some way.

Comment by Jan_Rzymkowski on Will AGI surprise the world? · 2014-06-22T12:17:38.106Z · LW · GW

This whole debate makes me wonder whether we can have any certainty in AI predictions. Almost all of them are based on personal opinions, highly susceptible to biases. And even people with huge knowledge of these biases aren't safe. I don't think anyone can trace their prediction back to empirical data; it all comes from our minds' black boxes, to which biases have full access and which we can't examine with our consciousness.

While I find Mark's prediction far from accurate, I know that might just be because I wouldn't like it. I like to think that I could have some impact on AGI research, that some new insights are needed rather than just pumping more and more money into Siri-like products. Development of AI in the next 10-15 years would mean that no qualitative research was needed and that all that remains to be done is honing current technology. It would also mean there wasn't time for a thorough development of friendliness, and we might end up with an AI catastrophe.

While I guess human-level AI will arise around the 2070s, I know I would LIKE it to happen in the 2070s. And I base this prediction on no solid ground.

Can anybody point me to any near-empirical data concerning when AGI may be developed? Anything more solid than the hunch of even the most prominent AI researcher? Applying Moore's law seems a bit magical; it no doubt carries some Bayesian weight, but with little certainty.

The best thing I can think of is that we can all agree that AI will not be developed tomorrow. Or in a month. Why do we think that? It seems to come from some very reliable empirical data. If we can identify the factors that make us near-certain AI will not be created within a few months from now, then maybe, upon closer look, they can provide us with some less shaky predictions for the further future.

Comment by Jan_Rzymkowski on Paperclip Maximizer Revisited · 2014-06-19T11:13:46.021Z · LW · GW

Yeah. Though actually it's more of a simplified version of a more serious problem.

One day you may give an AI a precise set of instructions which you think would do good - like finding a way of curing diseases, but without harming patients, without harming people for the sake of research, and so on. And you may find that your AI seems perfectly friendly, but that doesn't yet mean it actually is. It may simply have learned human values as a means of securing its existence and gaining power.

EDIT: And after gaining enough power it might just as well help improve human health even more - or reprogram the human race to believe, unconditionally, that diseases have been eradicated.

Comment by Jan_Rzymkowski on [LINK] Elon Musk interested in AI safety · 2014-06-19T01:31:19.574Z · LW · GW

But Musk starts by mentioning "Terminator". There's plenty of SF literature showing the danger of AI much more accurately, though none of it is as widely known as "Terminator".

That AI may have unexpected dangers seems too vague a statement for me to expect Musk to be thinking along the lines of LWers.

Comment by Jan_Rzymkowski on [LINK] Elon Musk interested in AI safety · 2014-06-19T00:53:59.611Z · LW · GW

It's not only unlikely - what's much worse is that it points to the wrong reasons. It suggests that we should fear AI trying to take over the world or eliminating all people, as if an AI would have an incentive to do that. It stems from nothing more than anthropomorphisation of AI, imagining it as some evil genius.

This is very bad, because smart people can see that this reasoning is flawed and get the impression that these are the only arguments against the unbounded development of AGI. While reversed stupidity isn't intelligence, it's much harder to find good reasons why we should solve AI friendliness when there are lots of distracting strawmen.

That was me half a year ago. I used to think that anybody who fears AI may bring harm is a loony. All the reasons I heard from people were that AI wouldn't know emotions, AI would try to harmfully save people from themselves, AI would want to take over the world, AI would be infected by a virus or hacked, or that AI would be just outright evil. I can easily debunk all of the above. And then I read about the Paperclip Maximizer and radically changed my mind. I might have got to that point much sooner if not for all the strawman distractions.

Comment by Jan_Rzymkowski on [LINK] Elon Musk interested in AI safety · 2014-06-18T23:52:36.874Z · LW · GW

Ummm... He points to the "Terminator" movie. Doesn't that mean he's just going along with the usual "AI will revolt and enslave the human race... because it's evil!" rather than actually realising what existential risk from AI is?

I've started to use it as a rule of thumb: when somebody mentions Skynet, he's probably not worth listening to. Skynet really isn't a reasonable scenario for what may go wrong with AI.

Comment by Jan_Rzymkowski on Rationalist Sport · 2014-06-18T19:32:34.540Z · LW · GW

While yoga seems like a salutary way of spending time, I wouldn't call it a sport. Clear win-states and competition seem crucial to sport.

And that's why a sport for rationalists is something so hard to come up with and so valuable - it needs to combine the happiness of striving to be better than others with battling the sense of superiority that often comes with winning.

The sense of group superiority is, to me, the most revolting thing about most sports.

Comment by Jan_Rzymkowski on [LINK] The errors, insights and lessons of famous AI predictions: preprint · 2014-06-17T20:32:49.319Z · LW · GW

Now I think I shouldn't have mentioned hindsight bias; it doesn't really fit here. I'm just saying that some events are more likely to become famous, like: a) a layman posing an extraordinary claim and ending up being right, or b) a group of experts being spectacularly wrong.

If some group of experts had met in the 1960s and posed very cautious claims, chances are small that it would have ended up being widely known - and ending up in the above paper. Analysing famous predictions is bound to turn up many overconfident predictions; they're just flashier. But that doesn't yet mean most predictions are overconfident.

Comment by Jan_Rzymkowski on [LINK] The errors, insights and lessons of famous AI predictions: preprint · 2014-06-17T17:33:35.596Z · LW · GW

Isn't this article highly susceptible to hindsight bias? For example, the reason the authors analyse Dreyfus's prediction is that he was somewhat right. If he weren't, the authors wouldn't include that data point. Therefore it skews the data, even if that is not their intention.

It's hard to take valuable assessments from the text when it is naturally prone to highlighting the mistakes of experts and the correct predictions of laymen.

Comment by Jan_Rzymkowski on Group Rationality Diary, June 1-15 · 2014-06-15T20:38:00.123Z · LW · GW

It reminds me greatly of my making conlangs (constructed languages). While I find it creative, it takes enormous amounts of time just to create a simple draft, and arduous work to make satisfactory material. And all I'd get is two or three people calling it cool and showing only a small interest. And I always know I'll get bored with the language in a few days and never get so far as to translate simple texts.

And yet every now and then I get an amazing idea and can't stop myself from "wasting" hours planning and writing about some conlang. And I end up unsatisfied.

I don't think it is about sunk cost. It's more about a form of addiction to creative work - some kind of vicious cycle, where the brain engages in an activity that just makes you want to do that activity more. The more you work on it, the more you want to do it, until you reach saturation, when you just can't look at it anymore.

Comment by Jan_Rzymkowski on Come up with better Turing Tests · 2014-06-10T19:53:25.515Z · LW · GW

Stuart, it's not about control groups, but that such a test would actually test negative for blind people, who are intelligent. A blind AI would also test negative, so how is that useful?

Actually, the physics test is not about getting closer to humans, but about creating something useful. If we can teach a program to do physics, we can teach it to do other stuff. And then we're getting somewhere midway between narrow and real AI.

Comment by Jan_Rzymkowski on Come up with better Turing Tests · 2014-06-10T13:27:22.078Z · LW · GW

Ad 4: "Elite judges" is quite arbitrary. I'd rather iterate the test, each time keeping only those judges who recognized the program correctly, or some variant of that (e.g. the top 50% with the most correct guesses). This way we select those who go beyond simply carrying on a conversation and actually look for differences between program and human. (And as seen from the transcripts, most people just try to have a conversation rather than look for flaws.) The drawback is that if the program has a set personality, judges could just stick to identifying that personality rather than human characteristics.
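A rough sketch of that selection loop, with hypothetical names and a made-up interrogation model, just to make the procedure concrete:

```python
import random

# Hypothetical sketch of the iterated judge-selection idea: after each round,
# keep only the judges who identified the bot correctly, so later rounds are
# scored by judges who actively probe for flaws.

def run_round(judges, interrogate):
    # interrogate(judge) returns True if the judge identified the bot correctly.
    return [j for j in judges if interrogate(j)]

def iterate_test(judges, interrogate, rounds=3):
    for _ in range(rounds):
        kept = run_round(judges, interrogate)
        if not kept:          # nobody caught the bot this round
            break
        judges = kept
    return judges             # the judges whose verdicts we weight most

# Toy usage with a made-up 60% detection rate per judge:
judges = [f"judge_{i}" for i in range(20)]
survivors = iterate_test(judges, lambda j: random.random() < 0.6)
print(len(survivors), "judges retained")
```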

Another approach might be to have the same program-human pair examined by 10 judges consecutively, each spending 5 minutes with both. The twist is that judges can leave instructions for the next judges. So if the program fails to perform "If you want to prove you're human, simply do nothing for 4 minutes, then re-type this sentence I've just written here, skipping every second word", then every judge after the one who found that flaw can use it and make the right guess.

My favourite method would be to give the bot a simple physics textbook and then ask it to solve a few physics test problems. Even if it weren't actual AI, it would still prove to be helluva powerful. Just toss it summarized knowledge of quantum physics and ask it to solve for a GUT. Sadly, most humans wouldn't pass such a high-school physics test.

  1. is actually the original Turing Test.

EDIT:

  1. is bad. It would equally exclude actual AIs and blind people as well. It is actually a more general problem with the Turing Test: it helps test programs that mimic humans, but not AI in general. For a text-based AI, the senses are alien. You could develop a real intelligence which would fail when asked "How do you like the smell of glass?". Sure, it can be taught that glass doesn't smell, but that actually requires superhuman abilities. So while a superintelligence can perfectly mimic a human, a human-level AI wouldn't pass the Turing Test when asked about sensory stuff, just as humans would fail when asked about the nuances of geometry in four dimensions.

Comment by Jan_Rzymkowski on Guardians of the Gene Pool · 2014-05-13T15:09:21.869Z · LW · GW

You're right. I went way too far in claiming equivalence.

As for the non-identity problem - I have trouble answering it. I don't want to defend my idea, but I can think of an example where one brings up non-identity and comes to the wrong conclusion: drinking alcohol while pregnant can cause a fetus to develop brain damage. But such grave brain damage means this baby is not the same one that would have been created if its mother hadn't drunk. So it is questionable whether the baby would benefit from its mother's abstinence.

Comment by Jan_Rzymkowski on Universal Fire · 2014-05-12T22:24:09.067Z · LW · GW

Little correction:

Phosphorus is highly reactive; pure phosphorus glows in the dark and may spontaneously combust. Phosphorus is thus also well-suited to its role in adenosine triphosphate, ATP, your body's chief method of storing chemical energy.

Actually, the above isn't true. Reactivity is a property of a molecule, not of an element. Elemental phosphorus is prone to being oxidised by atmospheric oxygen, producing lots of energy. ATP is reactive because its anhydride bonds are fairly unstable - but no change of oxidation state takes place. That it contains phosphorus isn't the actual reason ATP is an easily usable form of storing energy. Salts of phosphoric acid also contain phosphorus while being fairly unreactive. Thus the implication just doesn't make sense.

Comment by Jan_Rzymkowski on Guardians of the Gene Pool · 2014-05-09T23:07:10.015Z · LW · GW

"if you failed hard enough to endorse coercive eugenics"

This might be found a bit too controversial, but I was tempted to come up with a not-so-revolting coercive eugenics system. Of course it's not needed if there is technology for correcting genes, but let's say we only have circa-1900 technology. It has nothing to do with the point of Eliezer's note; it's just my musing.

Coercive eugenics isn't strictly immoral in itself. It is a way of protecting people not yet born from genetic flaws - possible diseases, etc. But even giving them less-than-optimal features - intelligence, strength, looks - is roughly equivalent to making them stupider, weaker, uglier. If you could give your child a healthy and pleasant life, yet decide to strip him of that, you are hurting him - it's not as if his well-being is your property. But can you have YOUR child while eugenics prevents you from breeding? Not in the genetic sense, but it seems deeply flawed to base the parent-child relation simply on genetic code. It's upbringing that matters. An adopted child is in every meaningful way YOUR child. But there are two problems: you can't really use "good gene" people to produce babies for "bad gene" people, and "bad gene" mothers may have problems caring for newborns without the hormonal effect of birth. A way to make eugenics weaker but overcome these problems is to limit only men's breeding. When a couple with a "good gene" man wants children - let them. If a couple with a "bad gene" man wants children, then the future mother is impregnated by some (possibly hired) "good gene" man. Normally the couple has protected sex.

It is by no means perfect. But the price for the relative well-being of future people is only that a woman has sex with someone other than her husband, and that the husband is "cheated on". While this seems quite unsettling, that is mainly our cultural norm. While it might be unpleasant for both, it isn't considerably worse than a woman not being able to drink and smoke through pregnancy. Therefore, such coercive eugenics would gradually improve the gene pool while not being considerably more evil than forbidding a pregnant woman to smoke cigarettes.

I don't mean to say that such a system would be a good choice - simply that it would be trading the rights of the living for the rights of the not yet born.

I apologize if the above was inappropriate.