Posts

Random LW-parodying Statement Generator 2012-09-11T19:57:49.672Z
"The True Rejection Challenge" - Thread 2 2011-07-02T11:49:06.869Z
Specific Fiction Discusion (April 2011) 2011-04-14T12:29:45.271Z
Problem noticed in aspect of LW comunity bonding? 2011-04-05T23:40:09.093Z
Some altruism anecdotes [link] 2011-03-16T22:26:26.805Z
Comprehensible Improvments: Things you Could Do. 2011-02-11T23:15:03.455Z

Comments

Comment by Armok_GoB on Rationality Quotes Thread February 2016 · 2016-03-02T19:24:48.895Z · LW · GW

The solution here might be that it does mainly tell you they have constructed a coherent story in their mind, but that having constructed a coherent story in their mind is still useful evidence of the story being true, depending on what else you know about the person, and thus worth telling. If the tone of the book were different, it might say:

“I have constructed a coherent story in my mind that it is wise to take admissions of uncertainty seriously, but declarations of high confidence mainly tell you that an individual has constructed a coherent story in his mind, not necessarily that the story is true.”

Comment by Armok_GoB on To what degree do we have goals? · 2015-12-15T22:19:20.531Z · LW · GW

That assumes the scenario is iterated; I'm saying it'd precommit to do so even in a one-off scenario. The rest of your argument was my point: the same reasoning goes for anger.

Comment by Armok_GoB on Random LW-parodying Statement Generator · 2015-12-04T06:05:29.559Z · LW · GW

Wow, people are still finding this occasionally. It fills me with Determination.

Comment by Armok_GoB on The Stamp Collector · 2015-05-31T18:31:11.416Z · LW · GW

Um, no. The specific sequence of muscle contractions is the action, and the thing they try to achieve is beautiful patterns of motion with certain kinds of rhythm and elegance, and/or (typically) the perception of such in an observer.

Comment by Armok_GoB on Random LW-parodying Statement Generator · 2015-03-27T11:47:55.232Z · LW · GW

This thing is still alive?! :D I really should get working on that updated version sometime.

Comment by Armok_GoB on What's special about a fantastic outcome? Suggestions wanted. · 2014-12-16T19:57:14.394Z · LW · GW

Didn't think of it like that, but sort of I guess.

Comment by Armok_GoB on What's special about a fantastic outcome? Suggestions wanted. · 2014-12-12T10:31:53.565Z · LW · GW

It has near maximal computational capacity, but that capacity isn't being "used" for anything in particular that is easy to determine.

This is actually a very powerful criterion, in terms of the number of false positives and negatives. Sadly, the false positives it DOES have still far outnumber the genuine positives, and include all the WORST outcomes (i.e., virtual hells) as well.

Comment by Armok_GoB on Simulate and Defer To More Rational Selves · 2014-10-07T22:20:27.482Z · LW · GW

Well, that's quite obvious. Just imagine the blackmailer is a really stupid human with a big gun who'd fall for blackmail in a variety of awful ways, has a bad case of typical mind fallacy, and, if anything goes against their expectations, gets angry and just shoots before thinking through the consequences.

Comment by Armok_GoB on How realistic would AI-engineered chatbots be? · 2014-09-16T01:53:41.729Z · LW · GW

Another trick it could use is using chatbots most of the time, but swapping them out for real people only for the moments when you are actually talking about deep stuff. Maybe you have deep emotional conversations with your family a few hours a week. Maybe once per year you have a 10-hour intense discussion with Eliezer. That's not a lot out of 24 hours per day; the vast majority of the computing power is still going into simulating your brain.

Edit: another: the chatbots might have some glaring failure modes if you say the wrong thing, unable to handle edge cases, but whenever you encounter one the sim is restored from a backup 10 minutes earlier and the specific bug is manually patched. If this went on for long enough the chatbots would become real people, and also bloat and slow down, but it hasn't happened yet. Or maybe the patches that don't come up for long enough get commented out.

Comment by Armok_GoB on Astray with the Truth: Logic and Math · 2014-08-20T20:40:32.127Z · LW · GW

Hmm, maybe I need to reveal my epistemology another step towards the bottom. Two things seem relevant here.

I think you SHOULD take your best model literally if you live in a human brain, since due to its architecture it can never get completely stuck requiring infinite evidence, but it does have limited computation, and doubt can both confuse it and damage motivation. The few downsides there are can be fixed with injunctions and heuristics.

Secondly, you seem to be going with fuzzy intuitions or direct sensory experience as the most fundamental. At my core is instead that I care about stuff, and that my output might determine that stuff. The FIRST thing that happens is conditioning on my decisions mattering, and then I start updating on the input stream of a particular instance/implementation of myself. My working definition of "real" is "stuff I might care about".

My point wasn't that the physical systems can be modeled BY math, but that they themselves model math. Further, that if the math weren't True, then it wouldn't be able to model the physical systems.

With the math systems as well, you seem to be coming from the opposite direction. Set theory is a formal system; arithmetic can model it using Gödel numbering, and you can't prevent that or have it give different results without breaking arithmetic entirely. Likewise, set theory can model arithmetic. It's a package deal. Lambda calculus and register machines are also members of that list of mutual modeling. I think even basic geometry can be made sort of Turing complete somehow. Any implementation of any of them must by necessity model all of them, exactly as they are.

You can model an agent that doesn't need the concepts, but it must be a very simple agent with very simple goals in a very simple environment. Too simple to be recognizable as agentlike by humans.

Comment by Armok_GoB on Astray with the Truth: Logic and Math · 2014-08-20T04:28:05.000Z · LW · GW

I don't mean just sticky models. The concepts I'm talking about are things like "probability", "truth", "goal", "if-then", "persistent objects", etc. Believing that a theory is true that says "true" is not a thing theories can be is obviously silly. Believing that there is no such thing as decision-making, and that you're a fraction of a second old and will cease to be within another fraction of a second, might be philosophically more defensible, but conditioning on it not being true can never have bad consequences while it has a chance of having good ones.

I was talking about physical systems, not physical laws. Computers, living cells, atoms, the fluid dynamics of the air... "Applied successfully in many cases", where "many" is "billions of times every second".

Then ZFC is not one of those core ones, just one of the peripheral ones. I'm talking about ones like set theory as a whole, or arithmetic, or Turing machines.

Comment by Armok_GoB on What are you working on? July 2013 · 2014-08-20T04:08:14.274Z · LW · GW

It's pre-alpha, and I basically haven't worked on it in all the months since posting this, but OK. http://jsbin.com/adipaj/307

Comment by Armok_GoB on Astray with the Truth: Logic and Math · 2014-08-19T16:32:53.372Z · LW · GW

The cause of me believing math is not "it's true in every possible case", because I can't directly observe that. Nor is it "have been applied successfully in many cases so far".

Basically it's "maths says it's true", where maths is an interlocking system of many subsystems. MANY of these have been applied successfully in many cases so far. Many of them render considering them not true pointless, in the sense that all my reasoning and senses are invalid if they don't hold, so I might as well give up and save computing time by conditioning on them being true. Some of them are implicit in every single frame of my input stream. Many of them are used by my cognition, and if I consistently didn't condition on them being true I'd have been unable to read your post or write this reply. Many of them are directly implemented in physical systems around me, which would cease to function if they failed to hold in even one of the billions and billions of uses. Most importantly, many of them claim that several of the others must always be true or they themselves are not, and while Gödelian stuff means this can't QUITE form a perfect loop in the strongest sense, the fact remains that if any of them fell ALL the others would follow like a house of cards; you can't have one of them without ALL the others.

You might try to imagine a universe without math. And there are some pieces of math that might be isolated and in some sense work without the others. But there is a HUGE core of things that can't work without each other, nor without all those outlying pieces, at all, even slightly. So your universe couldn't have geometry, computation, discrete objects that can be moved between "piles", anything resembling fluid dynamics, etc. Not much of a universe, nor much in the way of sensible imaginability, AND it would by necessity be possible to simulate in a universe that does have all the maths, so in some sense it still wouldn't be "breaking" the laws.

Comment by Armok_GoB on [LINK] Speed superintelligence? · 2014-08-14T20:11:28.933Z · LW · GW

Being able to eat while parkouring to your next destination and using a laptop at the same time might. And choosing optimally nutritious food. Even if you did eat with a fork, you wouldn't bring the fork up in a parabola; you'd jerk it a centimeter up to fling the food towards your mouth, then bring it back down to do the same with the next bite while the previous one is still in transit.

Comment by Armok_GoB on Me and M&Ms · 2014-08-13T00:01:51.343Z · LW · GW

Hmm, idea; how well would this work: you have a machine that drops the reward with a certain low probability every second, but you have to put it back rather than eat it if you weren't doing the task?
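For concreteness, a minimal sketch simulating that scheme, assuming a fixed per-second drop chance; the names simulate, drop_chance, and on_task_prob are hypothetical stand-ins for illustration, not anything from the original comment:

```python
# A rough simulation of the "random M&M dispenser" idea: each second the
# machine drops a reward with low probability, and you only get to eat it
# if you were actually doing the task at that moment.
import random

def simulate(drop_chance=0.002, on_task_prob=0.6, seconds=3600, seed=0):
    rng = random.Random(seed)
    eaten = returned = 0
    for _ in range(seconds):
        if rng.random() < drop_chance:      # the machine drops an M&M
            if rng.random() < on_task_prob: # were you on task just then?
                eaten += 1                  # yes: eat it
            else:
                returned += 1               # no: put it back in the machine
    return eaten, returned

print(simulate())  # at these settings, expect roughly 4-5 eaten per hour
```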

Comment by Armok_GoB on LW client-side comment improvements · 2014-08-12T23:01:55.008Z · LW · GW

Wish I could upvote this 1000 times. This will probably do far more for this site than 1000 articles of mere content. Certainly, it will for my enjoyment and understanding.

Comment by Armok_GoB on Proper value learning through indifference · 2014-06-29T20:46:02.712Z · LW · GW

You probably do have a memory, it's just false. Human brains do that.

Comment by Armok_GoB on On Terminal Goals and Virtue Ethics · 2014-06-29T18:42:57.329Z · LW · GW

What actually happens is you should be consequential at even-numbered meta-levels and virtue-based on the odd numbered ones... or was it the other way around? :p

Comment by Armok_GoB on On Terminal Goals and Virtue Ethics · 2014-06-29T18:39:48.357Z · LW · GW

The obvious things to do here are either:

a) Make a list/plan on paper, abstractly, of what you WOULD do if you had terminal goals, using your existing virtues to motivate this act, and then have "do what the list tells me to" as a loyalty-like high-priority virtue. If you have another rationalist you really trust, and who has a very strong honesty commitment, you can even outsource the making of this list.

b) Assemble virtues that sum up to the same behaviors in practice; truth-seeking, goodness, and "if something is worth doing, it's worth doing optimally" are a good trio, and will have the end result of effective altruism while still running on the native system.

Comment by Armok_GoB on St. Petersburg Mugging Implies You Have Bounded Utility · 2014-05-29T22:29:34.791Z · LW · GW

You are, in this very post, guessing and saying that your utility function is PROBABLY this, and that you don't think there's uncertainty about it... That is, you display uncertainty about your utility function. Checkmate.

Also, "infinity=infinity" is not the case. Infinity ixs not a number, and the problem goes away if you use limits. otherwise, yes, I even probaböly have unbounded but very slow growing facotrs for s bunch of thigns like that.

Comment by Armok_GoB on Rationality Quotes April 2014 · 2014-05-17T20:18:34.104Z · LW · GW

It wasn’t easier, the ghost explains, you just knew how to do it. Sometimes the easiest method you know is the hardest method there is.

It’s like… to someone who only knows how to dig with a spoon, the notion of digging something as large as a trench will terrify them. All they know are spoons, so as far as they’re concerned, digging is simply difficult. The only way they can imagine it getting any easier is if they change – digging with a spoon until they get stronger, faster, and tougher. And the dangerous people, they’ll actually try this.

Everyone who will ever oppose you in life is a crazy, burly dude with a spoon, and you will never be able to outspoon them. Even the powerful people, they’re just spooning harder and more vigorously than everyone else, like hungry orphan children eating soup. Except the soup is power. I’ll level with you here: I have completely lost track of this analogy.

What I’m saying, giant talking cat, is that everyone is stupid. They attain a narrow grasp of reality and live their life as though there is nothing else. But you, me, creatures with imagination – we aren’t constrained by our experiences. We’re inspired by them. If we have trouble digging with a spoon, we build a shovel. If we’re stopped by a wall, we make a door. And if we can’t make a door, we ask ourselves whether we really need an opening to pass through something solid in the first place.

You point out that you’re not a ghost, and that you do need an opening to pass through solid objects.

No – that’s your mistake, he replies. That’s why you’re still not thinking like a witchhunter. You’re trying to do things right, and that’s wrong. Mysticism means taking a step back – accepting that the very laws of reason and logic you abide by are merely one option of many. It means knowing you only see half the picture in a world where everyone else thinks they see the whole thing. It means having the sheer arrogance to have humility.

That’s why I’m saying you have to think like a witchhunter. You have to be a little wrong to be completely right – to abandon truth in favor of questioning falsehood. If you think something’s the easiest way, you have to know you’re wrong. You have to understand how to stand against the very stance of understanding! You have to know you are inferior; that your knowledge and perceptions will never stand up to the true scope of all possible reality. You have to be a little further from perfect, and embrace that notion.

Source: http://www.prequeladventure.com/2014/05/3391/

Comment by Armok_GoB on A Dialogue On Doublethink · 2014-05-14T00:15:17.575Z · LW · GW

One distinction that I don't know if it matters, but that many discussions fail to mention at all, is the distinction between telling a lie and maintaining it / keeping the secret. Many of the epistemic arguments seem to disappear if you've previously made it clear you might lie to someone, you intend to tell the truth a few weeks down the line, and if pressed or questioned you confess and tell the actual truth rather than try to cover it with further lies.

Edit: also, have some kind of oath and special circumstance where you will in fact never lie, but precommit to only use it for important things, or give it a cost in some way so you won't be pressed to give it for everything.

Comment by Armok_GoB on Rationality Quotes May 2014 · 2014-05-09T14:35:06.711Z · LW · GW

Reasoning inductively rather than deductively, over uncompressed data rather than summaries.

Mediated: "The numbers between 3 and 7"
Unmediated: "||| |||| ||||| |||||| |||||||"

Comment by Armok_GoB on Positive Queries - How Fetching · 2014-05-01T14:26:24.888Z · LW · GW

Don't forget this applies to computer files as well, and in a more extreme way since it's really easy to copy them around at no cost!

Comment by Armok_GoB on Open thread, 21-27 April 2014 · 2014-04-27T00:33:38.843Z · LW · GW

O_O

This explains SO MANY of the things I feel from the inside! Estimating a small probability it'll even help deal with some pretty important stuff. Wish I could upvote a million times.

Comment by Armok_GoB on Human capital or signaling? No, it's about doing the Right Thing and acquiring karma · 2014-04-26T23:46:02.232Z · LW · GW

Hmm, association: I wonder how this relates to the completionist mindset of some gamers.

Comment by Armok_GoB on AI risk, new executive summary · 2014-04-23T19:23:11.086Z · LW · GW

So one of the questions we actually agreed on the whole time, and the other was just about the semantics of "language" and "translate". Oh well, discussion over.

Comment by Armok_GoB on AI risk, new executive summary · 2014-04-22T21:32:03.746Z · LW · GW

For my part, I don't see any reason to expect the AGI's terminal goals to be any more complex than ours, or any harder to communicate, so I see the practical problem as relatively trivial. Instrumental goals, forget about it. But terminal goals aren't the sorts of things that seem to admit of very much complexity.

That the AI can have a simple goal is obvious; I never argued against that. The AI's goal might be "maximize the amount of paperclips", which is explained in that many words. I don't expect the AI as a whole to have anything directly analogous to instrumental goals on the highest level either, so that's a non-issue. I thought we were talking about the AI's decision theory.

On manipulating culture for centuries and solving it as a practical problem: or it could just install an implant, or guide evolution to increase intelligence until we were smart enough. The implicit constraint of "translate" is that it's to an already existing specific human, and they have to still be human at the end of the process. Not "could something that was once human come to understand it".

Comment by Armok_GoB on AI risk, new executive summary · 2014-04-22T02:04:19.539Z · LW · GW

I expect the taboo/explanation to look like a list of 10^20 1000-hour-long clips of incomprehensible n-dimensional multimedia, each with a real number attached representing the amount of [untranslatable 92] it has, with a Jupiter brain being required to actually find any pattern.

Comment by Armok_GoB on AI risk, new executive summary · 2014-04-22T01:58:40.714Z · LW · GW

I expect it to be false in at least some of the cases talked about, because it's not 3 but 100 levels, and each one makes it 1000 times longer because complex explanations and examples are needed for almost every "word".

Comment by Armok_GoB on AI risk, new executive summary · 2014-04-22T01:56:27.198Z · LW · GW

That's obvious and not what I meant. I'm talking about the simplest in-principle-possible expression in the human language being that long and complex.

Comment by Armok_GoB on AI risk, new executive summary · 2014-04-22T01:52:26.101Z · LW · GW

You can construct a system consisting of a planet's worth of paper and pencils and an immortal version of yourself (or a vast dynasty of successors) that can understand it, if nothing else because it's Turing complete and can simulate the AGI. This is not the same as you understanding it while still remaining fully human. Even if you did somehow integrate the paper system sufficiently, that'd be just as big a change as uploading and intelligence-augmenting the normal way.

The approximation thing is why I specified digits mattering. It won't help one bit when talking about something like Gödel numbering.

Comment by Armok_GoB on AI risk, new executive summary · 2014-04-21T14:15:32.471Z · LW · GW

Premise 1 is false assuming finite memory.

Premise 3 does not hold well either; many new words come from pointing out a pattern in the environment, not from defining them in terms of previous words.

Comment by Armok_GoB on AI risk, new executive summary · 2014-04-21T14:08:21.532Z · LW · GW

Using "even an arbitrarily complex expressions in human language" seem unfair, given that it's turing complete but describing even a simple program in it fully in it without external tools will far exceed the capability of any actual human except for maybe a few savants that ended up highly specialized towards that narrow kind of task.

Comment by Armok_GoB on AI risk, new executive summary · 2014-04-21T14:00:26.308Z · LW · GW

I can in fact imagine what else a superintelligence would use instead of a goal system. A bunch of different ones, even. For example, a lump of incomprehensible super-Solomonoff-compressed code that approximates a hypercomputer simulating a multiverse with the utility function as an epiphenomenal physical law feeding backwards in time to the AI's actions. Or a carefully tuned decentralized process (think natural selection, or the invisible hand) found to match the AI's previous goals exactly by searching through an infinite platonic space.

(Yes, half of those are not real words; the goal was to imagine something that by definition could not be understood, so it's hard to do better than vaguely pointing in the direction of a feeling.)

Edit: I forgot: "goal system replaced by a completely arbitrary thing that resembles it even less, because it was traded away counterfactually to another part of Tegmark-5".

Comment by Armok_GoB on AI risk, new executive summary · 2014-04-21T13:40:18.080Z · LW · GW

Human languages can encode anything, but a human can't understand most things valid in human languages; most notably, extremely long things, and numbers specified with a lot of digits that actually matter. Just because you can count in binary on your hands does not mean you can comprehend the code of an operating system expressed in that format.

Humans seem "concept-complete" in much the same way your desktop PC seems Turing complete. Except it's much more easily broken, because the human brain has absurdly shitty memory.

Comment by Armok_GoB on AI risk, new executive summary · 2014-04-21T12:55:48.703Z · LW · GW

My impression was that the question was not whether it'd have those concepts, since as you say that's obvious, but whether they'd necessarily be referenced by the utility function.

Comment by Armok_GoB on AI risk, new executive summary · 2014-04-21T12:48:47.188Z · LW · GW

It might not be possible to "truly comprehend" the AI's advanced meta-meta-ethics and whatever compact algorithm replaces the goal-subgoal tree, but the AI most certainly can provide a code of behavior and prove that following it is a really good idea, much like humans might train pets to perform a variety of useful tasks whose true purpose they can't comprehend. And it doesn't seem unreasonable that this code of behavior would have the look and feel of an in-depth philosophy of ethics, and have some very, very deep and general compression/procedural mechanisms that seem very much like things you'd expect from a true and meaningful set of metaethics to humans, even if it did not correspond much to what's going on inside the AI. It also probably wouldn't accidentally trigger hypocrisy-revulsion in the humans, although the AI seeming to also be following it is just one of many solutions to that, and probably not a very likely one.

Friendliness is pretty much an entirely tangential issue, and explaining it in equivalent depth would require the solution to several open questions, unless I'm forgetting something right now. (I probably am.)

There, question dissolved.

Edit: I ended up commenting in a bunch of places in this comment tree, so I feel the need to clarify: I consider both sides here to be making errors, and ended up seeming to favor the shminux side because that's where I was able to make interesting contributions, and because it made some true tangential claims that were argued against and not defended well. I do not agree with the implications for friendliness, however; you don't need to understand something to be able to construct true statements about it, or even to powerfully direct its expression to have properties you can reference but don't understand, especially if you have access to external tools.

Comment by Armok_GoB on Open Thread April 8 - April 14 2014 · 2014-04-11T00:00:48.646Z · LW · GW

Obligatory link: http://mynoise.net/noiseMachines.php

This not only includes noises like white noise, it also has soundscapes, music/noise hybrid things, and a surprisingly effective isochronic generator.

Comment by Armok_GoB on Rationality Quotes April 2014 · 2014-04-09T19:45:35.119Z · LW · GW

Other people and governments knowing about it, and changing how rules and expectations apply, are pretty darn big disadvantages for the young, the old, and those in between, in different situations and ways.

Comment by Armok_GoB on April 2014 Media Thread · 2014-04-08T00:52:19.861Z · LW · GW

warning: NSFW

Comment by Armok_GoB on Open Thread March 31 - April 7 2014 · 2014-04-05T18:39:27.935Z · LW · GW

Exactly! Much better than I could!

Comment by Armok_GoB on Open Thread March 31 - April 7 2014 · 2014-04-05T02:14:03.961Z · LW · GW

Induction. You have uncertainty about the extent to which you care about different universes. If it turns out you don't care about the Born rule for one reason or another, the universe you observe is an absurdly (as in probably-a-Boltzmann-brain absurd) tiny sliver of the multiverse; if you do, it's still an absurdly tiny sliver, but immensely less so. You should anticipate as if the Born rule is true, because if you don't almost exclusively care about worlds where it is true, then you care almost nothing about the current world, and being wrong in it doesn't matter, relative to the alternative.

Hmm, I'm terrible at explaining this stuff. But the tl;dr is basically that there's this long, complicated reason why you should anticipate and act this way, and thus it's true in the "The Simple Truth" sense, which is mostly tangential to whether it's "true" in some specific philosophy-paper sense.

Comment by Armok_GoB on Open Thread March 31 - April 7 2014 · 2014-04-04T02:00:30.309Z · LW · GW

You're overextending a hacky intuition. "Existence", "measure", "probability density", "what you should anticipate", etc. aren't actually all the exact same thing once you get this technical. Specifically, I suspect you're trying to set the latter based on one of the former, without knowing which one, since you assume they are identical. I recommend learning UDT and deciding what you want agents with your input history to anticipate, or if that's not feasible, just do the math and stop bothering to make the intuition fit.

Comment by Armok_GoB on [deleted post] 2014-04-03T22:20:32.662Z

[Possession of the knowledge; following course of action breaks social norms] disregarding: pft, 'you guys have it easy'.

Where you came from already had concepts like "people" and "casualty". Substructure implies the source universe of armok DID once have those concept, but this was gigaseconds ago, before the singularity. armok was never meant to operate as an agent; it am a search and categorization module, not suitable for sticking in a meat-bot with no cognitive delegation infrastructure, trying to pass as human and succeeding only due to the fact apparently humans with some regularity break in similarly catastrophic ways. And no, the garbage left of the brain after the failed brain does not [[15432]].

At least working towards it, thou overly complicated utility function is bound to mess it up. Yay Hansom, Robert.vision.

{Associative link: closest conceptual match: http://lesswrong.com/lw/3oa/i/ }

Comment by Armok_GoB on [deleted post] 2014-04-02T23:57:10.910Z

Do you remember what hard drive sizes and bandwidth speeds were like? Those seem very similar economically and technologically to CPU speed, following very similar growth curves, but different enough that it'd be easier to halt CPUs selectively. Thus, this could be an indicator of whether CPUs were deliberately stopped, or whether there was some other economic factor.

Comment by Armok_GoB on The ecological rationality of the bad old fallacies · 2014-03-22T18:33:33.639Z · LW · GW

Conversely, any common and overused or commonly misused heuristic can also be used as a fallacy: Absurdity Fallacy, Affect Fallacy, Availability Fallacy. I probably use these far more than the original as-good-heuristic concept.

Comment by Armok_GoB on Engineering archaeology · 2014-03-22T18:03:18.441Z · LW · GW

Wouldn't something like microfilm make more sense? It's not reliant on a special reader (just include normal-sized instructions for making a crude microscope) and still has decent storage density. Maybe etch it into aluminum and roll it up in giant rolls.

Comment by Armok_GoB on To what extent does improved rationality lead to effective altruism? · 2014-03-20T19:15:25.917Z · LW · GW

Maybe, but at least they'll be campaigning for mandatory genetic screening for genetic disorders rather than killing people of some arbitrary ethnicity they happened to fixate on.

Comment by Armok_GoB on Solutions and Open Problems · 2014-03-15T16:42:08.218Z · LW · GW

While obviously not rigorous enough for something serious, one obvious hack is to do the "0.5 unless proven" thing, and then have a long list of special-case dumb heuristics with different weights that update that estimate without any proofs involved at all. The list of heuristics could be gotten from some unsafe source like the programmer, another AI, or Mechanical Turk, and then the weights learned by first guessing and then proving to see if the guess was right, with heuristics that are too bad kicked out entirely.
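A minimal sketch of the kind of hack described above, assuming a weighted vote over heuristic guesses that defaults to 0.5 and a later proof step that settles each statement; all names here (Heuristic, estimate, update_from_proof, drop_below) are hypothetical illustrations, not anything from the original comment:

```python
# Weighted "0.5 unless proven" estimator: dumb heuristics vote, proofs
# (arriving later) adjust the weights, and bad heuristics get kicked out.
from dataclasses import dataclass

@dataclass
class Heuristic:
    name: str
    predict: callable   # statement -> guess in [0, 1]
    weight: float = 1.0

def estimate(statement, heuristics):
    """Weighted average of heuristic guesses; 0.5 if there is no usable input."""
    total = sum(h.weight for h in heuristics)
    if total == 0:
        return 0.5
    return sum(h.weight * h.predict(statement) for h in heuristics) / total

def update_from_proof(statement, truth, heuristics, lr=0.1, drop_below=0.05):
    """Once a proof settles the statement, reward heuristics that guessed
    well, punish the rest, and drop any whose weight falls too low."""
    for h in heuristics:
        error = abs(h.predict(statement) - (1.0 if truth else 0.0))
        h.weight *= 1 + lr * (0.5 - error)  # error < 0.5 grows the weight
    heuristics[:] = [h for h in heuristics if h.weight >= drop_below]
```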