Comment by denimalpaca on Any Christians Here? · 2017-06-14T14:50:30.501Z · score: 0 (0 votes) · LW · GW

Could you give an actual criticism of the energy argument? "It doesn't pass the smell test" is a poor excuse for an argument.

When I assume that the external universe is similar to ours, this is because Bostrom's argument is specifically about ancestral simulations. An ancestral simulation directly implies a universe trying to simulate itself. I posit this is impossible because of the laws of thermodynamics, the need to keep your simulations from realizing what they are, and the need to keep the complexity of the universe consistent.

Yes, it's possible for the external universe to be 100% different from ours, but that gives us exactly no insight into what that external universe may be, and at that point it's a game of "Choose Your God", which I have no interest in playing.

Comment by denimalpaca on Any Christians Here? · 2017-06-14T14:47:38.198Z · score: 1 (1 votes) · LW · GW

Like, the idea that an entity simulating our universe wouldn't be able to do that, because they'd run out of energy doesn't pass even the basic sniff test.

I'm convinced you are not actually reading what I'm writing. I said that if the universe ours is simulated in is supposed to be like our own - that is, if we are an ancestral simulation - then the universe simulating ours should be like ours, and we can apply our laws of physics to it. Our laws of physics say there's entropy, a limit to the amount of order.

I also believe that if we're a simulation, then the universe simulating ours must be very different from ours in fundamental ways, but this tells us nothing specific about that universe. And it implies that there could be no evidence, ever, of being in a simulation. Just like there could be no evidence, ever, of a god, or a flying spaghetti monster, or whatever other thought experiment you have faith in.

What I am trying to say is that you need a certain level of complexity to sufficiently trick intelligent beings into not thinking they're in a simulation, and that humans could not create such a simulation themselves.

If you aren't postulating a soul, then we are nothing but complicated lighting and meat, meaning that we are entirely feasible to simulate.

Key word: complicated. Wrong word: feasible. I think you mean possible. Yes, we are possible to simulate, but "feasible" implies that it can readily be done, which is exactly what I'm arguing against. Go read up on computer science, how simulations actually work, and physics before claiming things are feasible when they are currently impossible - hard problems that might only become feasible with the entirety of humanity working together for centuries.

It's even more bizarre to see you say that the claim of simulation makes no predictions, in response to me pointing out that it's prediction (just us in the observable universe) is the reason to believe it.

The prediction something makes is never the reason to believe it. The confirmation of that prediction is the reason. You cannot prove that whatever prediction the simulation makes is true, therefore there is no rational reason to believe we are in a simulation. This is the foundation of logic and science; I urge you to look into it more.

The lack of aliens isn't proof of anything (absence of evidence is not the evidence of absence).

Comment by denimalpaca on Any Christians Here? · 2017-06-14T00:07:12.318Z · score: 1 (1 votes) · LW · GW

First, I'm not "resisting a conversion". I'm disagreeing with your position that a hidden variable is even more likely to be a mind than something else.

you are the one basically adding souls

I absolutely am not adding souls. This makes me think you didn't really read my argument. I'll present this a different way: human brains are incredibly complex. So complex, in fact, that we still don't fully understand them. With a background in computer science, I know that you can't simulate something accurately without at least having a very accurate model. Currently, we have no accurate model of the brain, and it seems that the first accurate model we may get is simply simulating every neuron at some level. What I'm saying is that unless the level we simulate neurons at is sufficiently small, there will be obvious errors. Perhaps simulating some human minds is only feasible at the level of quantum mechanics.

My claim against a simulation being run in our universe that could sufficiently trick a human is this: there is not enough energy. This can be understood by thinking about the Second Law of Thermodynamics, and recognizing that to fully simulate something you would have to give that thing the actual energy it has; to simulate an electron, you would need to give it the charge of an actual electron, or else it would not interact properly with its environment. It follows that simulating everything with its actual energy would take more energy than the universe has, because in practice we lose energy to heat every time we fight entropy in some small way. The conclusion is that the only perfect simulation is the universe precisely simulating itself, which is indistinguishable from reality.

The need for this perfect simulation is a consequence of maintaining the observable complexity that is apparent in the formulations of these hypotheses by people like Bostrom. I wouldn't claim humans being in a simulation is impossible - my claim is that our own civilization cannot perfectly recreate itself.

So there's no actual good reason to believe we're in a simulation of ourselves, unless you take Bostrom's arbitrary operations on even more arbitrary numbers as evidence. Which I obviously don't think anyone should.

Finally, as I said before, the claim of a simulation makes no predictions, and we can't even know of a way, if any exists, to prove we're in a simulation - except by construction, which seems impossible as outlined above. So, with no way to prove we're in a simulation, no way to prove a god exists, and no decent way to even make a reasonable estimate of the likelihood of either, the potential mechanisms that created the universe should be treated as part of a random distribution until we can sufficiently understand and test physical processes at a deeper level.

Comment by denimalpaca on Any Christians Here? · 2017-06-13T21:22:35.140Z · score: 0 (0 votes) · LW · GW

You can call it 'something missing', or 'god'.

I disagree. Something missing is different from a god. A god is often not well-defined, but generally it is assumed to be some kind of intelligence - that is, it can know and manipulate information - and to have infinite agency, or near to it. Something missing could be a simple physical process. One is infinitely complex (a god); the other is feasibly simple enough for a human to fully understand.

The koopas are both pointing to the weirdness of their world, and the atheists are talking about randomness and the theists are talking about maybe it is a Sky Koopa.

I don't think this is really what you wrote the first time, but the argument you're presenting here doesn't progress us anywhere so I won't spend more time on it. I think we should drop this metaphor from the conversation.

Before too long we'll be able to write software that does basically what our brains do... There will be a lot more minds in simulations than have ever existed inside of human bodies...

Disagree again. First, the gap between "basically what our brains do" and "what our brains do" is almost certainly non-trivial. If our brains are too complex for us to know every aspect of them at once well enough to make precise predictions, then the jump to "basically what our brains do" introduces a difference in what we would predict. If we want to program all the neurons and neurotransmitters perfectly - have a brain totally modeled in software - then that brain would still need input like actual humans get, or it may not develop correctly.

To the second point about "a lot more minds in simulations...", I also think this argument is fatally flawed. Let's assume that a perfect human brain can be simulated, however unlikely I personally think this is. To convince that simulated mind that it is in a base reality, it would have to be able to observe every aspect of that reality and come to the conclusion that the universe can and does fully exist by its own processes. To be convinced it is living in a simulation, it may only need to see one physically "weird" thing; not a seemingly-too-improbable thing like no aliens, but an absolutely wrong thing, such as a reversal of causality - basically a glitch of the system.

Now some may argue that the simulators could "roll back" the simulation when these glitches occur, but I'm skeptical of the engineering feasibility of any simulation that could trick human minds in the first place, even for thousands of years. If we take a "lossy" simulation like today's video games, it's clear that besides obvious bugs and the invisible walls that bound the world, there's also a level of information resolution that's low compared to our world. That is, we can explain the physics of modern games by their physics engines, while we still struggle to explain the physics of the whole universe. If you have any amount of "lossiness" in a simulation, then eventually minds capable of finding that lossiness will find it - a brain in a vat will discover that, actually, nothing is made of atoms but instead has its textures loaded in. Even if the brains we make can't find this edge of resolution, we must assume that if we can create a superintelligent machine, and we can create a simulation of our own minds, then our simulated minds must also be able to create a superintelligence, which would either find those lossy resolution issues or make a smarter being that can. Then the jig is up, and the simulations know they're in a simulation.

To get around the inevitable finding of lossiness in a simulation, the simulation creators would need to make their simulation indistinguishable from our own universe. This implies two things: first that such a simulation cannot be made, because making a perfect simulation of our universe inside our universe would take more energy than the universe has (see the Second Law of Thermodynamics if this doesn't make sense right away); the second is that if we could make a simulation indistinguishable from our universe, then we would know all the secrets of our universe, including whether or not we were in a simulation.

In physics, the answer to the question of "what's the something missing?" is not god, it is "we don't know yet." The answer that physicists look for makes specific predictions about testable phenomena, and so far it does not seem that there are even any good testable claims that we're in a simulation.

What would those claims even be? Can we see where our universe is stored in memory on the machine we're supposedly running on? Why or why not?

Seems super arrogant for us to presume that we are the exception.

And it's super arrogant for theists to believe that a god created them special. So your argument from distaste of the other is not helping you.

The idea that one planet alone would have life is just too much of a score counter, too much of a giveaway.

We still don't know that we're the lone planet with life. And maybe it's too much of a giveaway to you, but it means almost nothing to me besides "the conditions to create life in the universe are rare even under arrangements where it is possible". Seeming like a score counter is not evidence that it is a score counter. Only observing life on Earth is not a prediction of anything; it is not an explanation of anything - it is merely information, and the fact that you're twisting that information to yield a conclusion only says something about what you want to believe.

Comment by denimalpaca on Any Christians Here? · 2017-06-13T20:31:50.314Z · score: 0 (0 votes) · LW · GW

Sufficiently improbable stuff is evidence that there's a hidden variable you aren't seeing.

Sure, but you aren't showing what that hidden variable is. You're just concluding what you think it should be. So evidence that there's something missing isn't an opportunity to inject god, it's a new point to investigate. That, and sufficiently improbable stuff becomes probable when enough of it happens. Take a real example, like someone getting pregnant. While the probability of any given sperm reaching the egg and fertilizing it is low, the sheer number of sperm makes the chance that one of them fertilizes the egg decent.
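The "improbable becomes probable at scale" point is just the standard at-least-one-success calculation. A minimal sketch, using made-up illustrative numbers (the per-sperm probability and count below are hypothetical, not biological data):

```python
def prob_at_least_one(p: float, n: int) -> float:
    """Chance that at least one of n independent trials succeeds,
    given a per-trial success probability p."""
    return 1 - (1 - p) ** n

# Hypothetical numbers: a one-in-a-hundred-million chance per sperm,
# across two hundred million sperm. A tiny p, a huge n, a decent total.
print(prob_at_least_one(1e-8, 200_000_000))  # roughly 0.86
```

The same arithmetic covers the alien-life version of the argument: an astronomically small per-planet probability multiplied across an astronomically large number of planets can still yield a non-negligible chance of at least one occurrence.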

The argument can be applied equally to why we don't see alien civilizations: intelligent life may be incredibly rare, but not impossibly so, because the universe is so vast that its vastness creates the chance for at least one instance of life starting and evolving to a noticeably intelligent state.

Neither the sperm nor the life, then, necessitates a god to explain its improbable-yet-actual existence, and until one can show that a god is necessary and that nothing else will suffice to explain the universe, a god should not be proclaimed the right conclusion, much less the only possible one.

I don't see how your Mario argument relates to the no aliens data point, specifically how the positive evidence of a score counter in any way is like the lack of evidence of alien civs.

Comment by denimalpaca on Any Christians Here? · 2017-06-13T19:39:07.859Z · score: 0 (0 votes) · LW · GW

I find the 'where are all the aliens/simulation?" argument to be pretty persuasive in terms of atheism being a bust

Why does this imply atheism is a bust? The only thing I can think of that would make atheism "a bust" would be direct evidence of a god(s).

Comment by denimalpaca on Any Christians Here? · 2017-06-13T19:37:26.637Z · score: 0 (0 votes) · LW · GW

you can fault them for not properly updating but you can't fault them for inconsistency.

They're still being inconsistent with respect to the reality they observe. Why is self-consistency alone more important than consistency with observation?

Comment by denimalpaca on Any Christians Here? · 2017-06-13T17:40:46.557Z · score: 3 (2 votes) · LW · GW

Many Worlds Interpretation of Quantum Mechanics, a benevolent God is more likely than not going to exist somewhere.

I would urge you to go learn about QM more. I'm not going to assume what you do/don't know, but from what I've learned about QM there is no argument for or against any god.

were you aware that the ratio of sizes between the Sun and the Moon just happen to be exactly right for there to be total solar eclipses?

This also has to do with the distance between the Moon and the Earth and between the Earth and the Sun. Either or both could be different sizes, and you'd still get a total eclipse if they were at different distances. Although the first test of general relativity was done in 1919, it was later found that the test was flawed, and subsequent, better replications actually provided good enough evidence. This is discussed in Stephen Hawking's A Brief History of Time.

and basically flag the locations of potentially habitable worlds for future colonization?

There are far more stars than habitable worlds. If you're going to be consistent with assigning probabilities, then by looking at the probability of a habitable planet orbiting a star, you should conclude that it is unlikely a creator set up the universe to make it easy or even possible to hop planets.

They are not essential to sapient life, and so they do not meet the criteria for the Anthropic Principle either.

Right, the sizes of the Moon and Sun are arbitrary. We could easily live on a planet with no moon and have found other ways to test general relativity. No appeal to any form of the Anthropic Principle is needed. And again with the assertion about habitable planets: the weak anthropic principle would only imply that for someone to observe other habitable planets, there must be a habitable planet from which they are observing.

So you didn't provide any evidence for any god; you just committed a logical fallacy of the argument from ignorance. The way I view the universe, everything you state is still valid. I see the universe as a period of asymmetry, where complexity is allowed to clump together, but it clumps in regular ways defined by rules we can discover and interpret.

Comment by denimalpaca on Epistemology vs Critical Thinking · 2017-06-10T18:43:40.598Z · score: 1 (1 votes) · LW · GW

I think you wrote some interesting stuff. As for your question on a meta-epistemy, I think what you said about general approaches mostly holds in this case. Maybe there's a specific way to classify sub-epistemies, but it's probably better to have some general rules of thumb that weed out the definitely wrong candidates, and let other ideas get debated on. To save community time, if that's really a concern, a group could employ a back-off scheme where ideas that have solid rebuttals get less and less time in the debate space.

I don't know that defining sub-epistemies is so important. You give a distinction between math and theoretical computer science, but unless you're in those fields the distinction is near meaningless. So maybe it's more important to define these sub-epistemies as your relation to them increases.

Comment by denimalpaca on Destroying the Utility Monster—An Alternative Formation of Utility · 2017-06-09T15:44:57.792Z · score: 0 (0 votes) · LW · GW

Even changing "do" to "did", my counter example holds.

Event A: At 1pm I get a cookie and I'm happy. At 10pm, I reflect on my day and am happy for the cookie I ate.

Event (not) A: At 1pm I do not get a cookie. I am not sad, because I did not expect a cookie. At 10pm, I reflect on my day and I'm happy for having eaten so healthy the entire day.

In either case, I end up happy. Not getting a cookie doesn't make me unhappy. Happiness is not a zero sum game.

Comment by denimalpaca on Destroying the Utility Monster—An Alternative Formation of Utility · 2017-06-08T20:37:17.738Z · score: 0 (0 votes) · LW · GW

If I get a cookie, then I'm happy because I got a cookie. The negation of this event is that I do not get a cookie. However, I am still happy, because now I feel healthier, having not eaten a cookie today. So both the event and its negation cause me positive utility.

Comment by denimalpaca on Destroying the Utility Monster—An Alternative Formation of Utility · 2017-06-08T20:35:55.999Z · score: 2 (2 votes) · LW · GW

The term you're looking for is "apologist".

Comment by denimalpaca on The Simple World Hypothesis · 2017-06-08T20:27:20.309Z · score: 0 (0 votes) · LW · GW

If you have a universe of a certain complexity, then to fully simulate another universe of equal complexity, the simulation would have to be that universe. To simulate a universe, you have to be sufficiently more complex and have sufficiently more expendable energy.

Comment by denimalpaca on Open thread, Apr. 10 - Apr. 16, 2017 · 2017-04-12T18:36:11.943Z · score: 1 (1 votes) · LW · GW

"That from power comes responsibility is a silly implication written in a comic book, but it's not true in real life (it's almost the opposite). "

Evidence? I 100% disagree with your claim. Looking at governments or business, the people with more power tend to have a lot of responsibility both to other people in the gov't/company and to the gov't/company itself. The only kind of power I can think of that doesn't come with some responsibility is gun ownership. Even Facebook's power of content distribution comes with a responsibility to monetize, which then has downstream responsibilities.

Comment by denimalpaca on Open thread, Apr. 10 - Apr. 16, 2017 · 2017-04-12T18:31:22.795Z · score: 1 (1 votes) · LW · GW

Not quite what I meant about identifying content but fair point.

As for fake news, the most reliable way to tell is whether the piece states information as verifiable fact, and if that fact is verified. Basically, there should be at least some sort of verifiable info in the article, or else it's just narrative. While one side's take may be "real" to half the world, the other side's take can be "real" to the other half of the world, but there should be some piece of actual information that both sides look at and agree is real.

Comment by denimalpaca on Open thread, Apr. 10 - Apr. 16, 2017 · 2017-04-12T18:19:26.469Z · score: 4 (4 votes) · LW · GW

I'm actually very familiar with freedom of speech and I'm getting more familiar with your dismissive and elitist tone.

Freedom of speech applies, in the US, to the relationship between the government and the people. It doesn't apply to the relationship between Facebook and users, as exemplified by their terms of use.

I'm not confusing Facebook and Google, Facebook also has a search feature and quite a lot of content can be found within Facebook itself.

But otherwise, thanks for your reply; its stunning lack of detail gave me no insight whatsoever.

Comment by denimalpaca on Open thread, Apr. 10 - Apr. 16, 2017 · 2017-04-11T18:51:15.591Z · score: 0 (0 votes) · LW · GW

Maybe this has been discussed ad absurdum, but what do people generally think about Facebook being an arbiter of truth?

Right now, Facebook does very little to identify content; it only provides it. They faced criticism for allowing fake news to spread on the site, they don't push articles that have retractions, and they've only just added a "contested" flag that's less informative than Wikipedia's.

So the questions are: does Facebook have any responsibility to label/monitor content given that it can provide so much? If so, how? If not, why doesn't this great power (showing you anything you want) come with great responsibility? Finally, if you were to build a site from ground-up, how would you design around the issue of spreading false information?

Comment by denimalpaca on What conservatives and environmentalists agree on · 2017-04-11T17:43:08.541Z · score: 0 (0 votes) · LW · GW

Reality check: most liberal people? trust fund kids at expensive colleges. most conservative people? working class.

Really disagree there. Plenty of trust fund kids are conservative, plenty of scholarship students are liberal - even at the same university. I think if you want to generalize, the more apt generalization is city vs. rural areas. There are tons of "working class" liberals; they work in service industries instead of coal mines. The big difference is proximity to actual diversity. When you work with, live with, and see diverse people every day, you get acclimated to it and accept it as the norm; when you live in a rural area with few people, nearly all of whom are white, you get acclimated to that. When the societal norm of rural areas is a more conservative, Christian mindset, and that of the cities is a more liberal one, it follows naturally that people in these areas would generally develop into those dominant mindsets.

I'm not sure that your statement about who gets hurt in the past is more likely to be conservative in the future is true, either. Your conclusion doesn't directly follow from the premise, and I can think of numerous personal and historical examples that run counter. Same with "liberals are sheltered", you offer no evidence that links your premise to conclusion and there are tons of counter examples.

Comment by denimalpaca on What conservatives and environmentalists agree on · 2017-04-10T21:14:00.345Z · score: 1 (1 votes) · LW · GW

"liberals aren't even willing to admit they made a mistake after the fact and will insist that the only reason people object to having their towns and houses completely overgrown with kudzu is irrational kudzuphobia."

I think this is a drastic overgeneralization taken in bad faith.

Comment by denimalpaca on What conservatives and environmentalists agree on · 2017-04-10T20:58:37.031Z · score: 0 (0 votes) · LW · GW

Yes I think that's exactly right. Scott Alexander's idea on it from the point of view of living in a zombie world makes this point really clear: do we risk becoming zombies to save someone, or no?

Comment by denimalpaca on What conservatives and environmentalists agree on · 2017-04-08T18:17:07.895Z · score: 3 (3 votes) · LW · GW

Seems to me both liberals and conservatives are social farmers; it's a matter of what crop is grown. Conservatives want their one crop, say potatoes, not because it's the most nutritious, but because it's been around forever and has allowed their ancestors to survive. (If we assume like you do about Christianity, then we also have that God Himself Commanded They Grow Potatoes.) Liberals see the potatoes, recognize that some people still die even when they eat potatoes like their ancestors did, and decide they need more crops. Maybe they grow fewer potatoes, and maybe they grow yellow potatoes instead of brown or some such triviality, but the idea, as you state it, is to not privilege those people who are inherently better at digesting potatoes, by growing other things as well. This is naturally heresy to conservative potato growers, because you shouldn't fix something that isn't broken (and if God didn't say it's broken, then it's not - exclude the idea of God and you just get potato-digesting-enzyme supremacy).

Comment by denimalpaca on OpenAI makes humanity less safe · 2017-04-03T21:19:33.923Z · score: 7 (7 votes) · LW · GW

I thought OpenAI was more about open sourcing deep learning algorithms and ensuring that a couple of rich companies/individuals weren't the only ones with access to the most current techniques. I could be wrong, but from what I understand OpenAI was never about AI safety issues as much as balancing power. Like, instead of building Jurassic Park safely, it let anyone grow a dinosaur in their own home.

Comment by denimalpaca on Naturally solved problems that are easy to verify but that would be hard to compute · 2017-04-03T16:23:07.388Z · score: 0 (0 votes) · LW · GW

Everyone has different ideas of what a "perfectly" or "near perfectly" simulated universe would look like; I was trying to go off of Douglas's idea of it, where I think the boundary errors would have an effect.

I still don't see how rewinding would be interference; I imagine interference would be that some part of the "above ours" universe gets inside this one, say if you had some particle with quantum entanglement spanning across the universes (although it would really also just be in the "above ours" universe because it would have to be a superset of our universe, it's just also a particle that we can observe).

Comment by denimalpaca on Naturally solved problems that are easy to verify but that would be hard to compute · 2017-04-01T17:03:53.783Z · score: 0 (0 votes) · LW · GW

I 100% agree that a "perfect simulation" and a non-simulation are essentially the same, noting Lumifer's comment that our programmer(s) are gods by another name in the case of simulation.

My comment is really about your second paragraph, how likely are we to see an imperfection? My reasoning about error propagation in an imperfect simulation would imply a fairly high probability of us seeing an error eventually. This is assuming that we are a near-perfect simulation of the universe "above" ours, with "perfect" simulation being done at small scales around conscious observers.

So I'm not really sure if you just didn't understand what I'm getting at, because we seem to agree, and you just explained back to me what I was saying.

Comment by denimalpaca on Naturally solved problems that are easy to verify but that would be hard to compute · 2017-03-31T14:51:29.364Z · score: 0 (0 votes) · LW · GW

An idea I keep coming back to that would imply we reject the idea of being in a simulation is the fact that the laws of physics remain the same no matter your reference point nor place in the universe.

You give the example of a conscious observer recognizing an anomaly, and the simulation runner rewinds time to fix this problem. By only re-running the simulation within that observer's time cone, the simulation may have strange new behavior at the edge of that time cone, propagating an error. I don't think that the error can be recovered so much as moved when dealing with lower resolution simulations.

It makes the most sense to me, that if we are in a simulation it be a "perfect" simulation in that the most foundational forces and quantum effects are simulated all the time, because they are all in a way interacting with each other all the time.

Comment by denimalpaca on Could utility functions be for narrow AI only, and downright antithetical to AGI? · 2017-03-25T17:52:06.755Z · score: 0 (0 votes) · LW · GW

Go read a textbook on AI. You clearly do not understand utility functions.

Comment by denimalpaca on Could utility functions be for narrow AI only, and downright antithetical to AGI? · 2017-03-24T20:10:02.049Z · score: 0 (0 votes) · LW · GW

My definition of utility function is the one commonly used in AI. It is a mapping of states to a real number: u: E -> R, where E is the set of all possible states and R is the reals in one dimension.

What definition are you using? I don't think we can have a productive conversation until we both understand each other's definitions.
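To make the u: E -> R definition above concrete, here is a minimal sketch. The state space and the values assigned to each state are made-up illustrations, not a standard example:

```python
# A utility function in the AI sense: any mapping from states to real
# numbers. Here E is a toy set of weather states.
states = ["sunny", "rainy", "snowy"]

def utility(state: str) -> float:
    # Hypothetical preferences over the toy states; the numbers are
    # arbitrary, only their ordering matters to a rational agent.
    values = {"sunny": 1.0, "snowy": 0.25, "rainy": -0.5}
    return values[state]

# A utility-maximizing agent simply picks the state it values most.
best = max(states, key=utility)
print(best)  # sunny
```

Nothing in the definition requires the mapping to be simple or even computable in practice; it only requires that every state gets a real-valued score.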

Comment by denimalpaca on A Problem for the Simulation Hypothesis · 2017-03-24T15:58:30.983Z · score: 0 (0 votes) · LW · GW

I tried to see if anyone else had previously made my argument (but better); instead I found these arguments:

I think the feasibility argument described here better encapsulates what I'm trying to get at, and I'll defer to this argument until I can better (more mathematically) state mine.

"Yet the number of interactions required to make such a "perfect" simulation are vast, and in some cases require an infinite number of functions operating on each other to describe. Perhaps the only way to solve this would be to assume "simulation" is an analogy for how the universe (operating under the laws of quantum mechanics) acts like a quantum computer - and therefore it can "calculate" itself. But then, that doesn't really say the same thing as "we exist in someone else's simulation"." (from the link).

This conclusion about the universe "simulating itself" is really what I'm trying to get at. That it would take the same amount of energy to simulate the universe as there is energy in the universe, so that a "self-simulating universe" is the most likely conclusion, which is of course just a base universe.

Comment by denimalpaca on A Problem for the Simulation Hypothesis · 2017-03-23T22:17:23.287Z · score: 0 (0 votes) · LW · GW

Let me be a little more clear. Let's assume that we're in a simulation, and that the parent universe hosting ours is the top level (for whatever reason, this is just to avoid turtles all the way down). We know that we can harness the energy of the sun, because not only do plants utilize that energy to metabolize, but we also can harness that energy and use it as electricity; energy can transfer.

Some machine that we're being simulated on must take into account these kinds of interactions and make them happen in some way. The machine must represent the sun in some way, perhaps as 0s and 1s. This encoding takes energy, and if we were to simply encode all the energy of the sun, the potential energy of the sun must exist somewhere in that machine. Even if the sun's information is compressed, it would still have to be decompressed when used (or else we have a "lossy" sun, not good if you don't want your simulations to figure out they're in a simulation) - and compressing/decompressing takes energy.

We know that even in a perfect simulation, the sun must have the same amount of energy as outside the simulation, otherwise it is not a perfect simulation. So if a blue photon has twice as much energy as a red photon, then that fact is what causes twice as much energy to be encoded in a simulated blue photon. This energy encoding is necessary if/when the blue photon interacts with something.

Said another way: If, in our simulation, we encode the energy of physical things with the smallest number of bits possible to describe that thing, and blue photons have twice as much energy as red photons, then it should take X bits to describe the energy of the red photon and 2*X bits to describe the blue photon.
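To make the bookkeeping concrete, here is a minimal sketch of the encoding scheme I'm describing. It assumes (my assumption, purely for illustration) that energy is stored quantum-by-quantum at a fixed bit cost per quantum, which is what makes the bit count scale linearly with energy:

```python
# Toy model: a photon's energy is encoded as one fixed-size record per
# energy quantum, so bit cost grows linearly with energy.

BITS_PER_QUANTUM = 8  # hypothetical fixed cost to record one quantum

def encoding_cost_bits(energy_quanta: int) -> int:
    """Bits needed to encode a photon carrying `energy_quanta` quanta."""
    return energy_quanta * BITS_PER_QUANTUM

red_cost = encoding_cost_bits(10)    # a red photon with 10 quanta
blue_cost = encoding_cost_bits(20)   # a blue photon with twice the energy

assert blue_cost == 2 * red_cost     # twice the energy, twice the bits
```

Under this scheme the simulator's storage cost tracks the simulated energy directly, which is the point of the argument.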

As to extra energy: as a practical (engineering) matter alone, it would take more energy to simulate a thing even after the thing is encoded. First, in our universe there are no perfect energy transfers; some energy is inevitably lost as heat, so extra energy is needed to overcome that loss. Second, if the simulation kept any metadata, that would take extra information and hence extra energy.

Comment by denimalpaca on A Problem for the Simulation Hypothesis · 2017-03-23T21:58:55.750Z · score: 0 (0 votes) · LW · GW

Yes, then I'm arguing that case 1 cannot happen. Although I find it a little tediously tautological (and even more so reductive) to define technological maturity as being solely the technology that makes this disjunction make sense....

Comment by denimalpaca on A Problem for the Simulation Hypothesis · 2017-03-22T22:08:28.530Z · score: 0 (0 votes) · LW · GW

"(1) civilizations like ours tend to self-destruct before reaching technological maturity, (2) civilizations like ours tend to reach technological maturity but refrain from running a large number of ancestral simulations, or (3) we are almost certainly in a simulation."

Case 2 seems far, far more likely than case 3, and without a much more specific definition of "technological maturity", I can't make any statement on 1. Why does case 2 seem more likely than 3?

Energy. If we are to run an ancestral simulation that even remotely wants to correctly simulate as complex a phenomenon as weather, we would probably need the scale of the simulation to be quite large. We would definitely need to simulate the entire earth, moon, and sun, as the physical relationships between these three are deeply intertwined. Now, let's focus on the sun for a second, because it should provide us with all the evidence we need that a simulation would be implausible.

The sun has a lot of energy, and to simulate it would itself require a lot of energy. To simulate the sun exactly as we know it would take MORE energy than the sun contains, because the sun's entire energy must be simulated and, as an engineering concern, we must account for energy lost to heat and other factors. So just to properly simulate the sun, we'd need to generate more energy than the sun has, which already seems very implausible, given that we can't build a reactor larger than the sun on Earth. If we extend this argument to simulating the entire universe, it seems impossible that humans would ever have the energy to simulate all the energy in the universe, so we could only ever simulate part of the universe or a smaller universe. This again follows from the fact that perfectly simulating something requires more energy than the thing simulated contains.

Comment by denimalpaca on Globally better means locally worse · 2017-03-22T21:46:50.825Z · score: 1 (1 votes) · LW · GW

You should look up the phrase "planned obsolescence". It's a concept taught in many engineering schools, and Apple employs it in its products. The basic idea is similar to your thoughts under "Greater Global Wealth": the machine is designed to have a lifetime significantly shorter than what is possible, specifically to get users to keep buying a machine. This essentially turns products into subscriptions; subscriptions are, especially today in the start-up world, generally a better business model than selling one product one time (or even a couple of times).

With phones, this makes perfect sense, given the pace of advancements in the phones, generation after generation.

While you would think that a poor person would optimize for durability, often durability is more expensive, meaning that the poor person's only real choice is a lower-quality product that does not last as long.

"Better materials science: Globally, materials science has improved. Hence, at the local level, manufacturers can get away with making worse materials." This doesn't really follow to me. There are many reasons a manufacturer would use worse materials than the global "best materials", including lower costs. It seems to me that your idea of 'greater global implies worse local' can be equally explained as a phenomenon of capitalism, where the need to make an acceptable product as cheaply as possible does not often align with making the best product at whatever the cost.

Comment by denimalpaca on Could utility functions be for narrow AI only, and downright antithetical to AGI? · 2017-03-22T18:00:08.862Z · score: 0 (0 votes) · LW · GW

Is there an article that presents multiple models of UF-driven humans and demonstrates that what you criticize as contrived actually shows there is no territory to correspond to the map? Right now your statement doesn't have enough detail for me to be convinced that UF-driven humans are a bad model.

And you didn't answer my question: is there another way, besides UFs, to guide an agent toward a goal? It seems to me that the idea of moving toward a goal implies a utility function, be it hunger or one programmed by humans.

Comment by denimalpaca on Could utility functions be for narrow AI only, and downright antithetical to AGI? · 2017-03-21T18:46:20.326Z · score: 0 (0 votes) · LW · GW

Why would I give up the whole idea? I think you're correct in that you could model a human with multiple, varying UFs. Is there another way you know of to guide an intelligence toward a goal?

Comment by denimalpaca on Could utility functions be for narrow AI only, and downright antithetical to AGI? · 2017-03-17T17:51:57.109Z · score: 0 (0 votes) · LW · GW

I think you're getting stuck on the idea of one utility function. I like to think humans have many, many utility functions. Some we outgrow, some we "restart" from time to time. For the former, think of a baby learning to walk. There is a utility function, or something very much like it, that gets the baby from sitting to crawling to walking. Once the baby learns how to walk, though, the utility function is no longer useful; the goal has been met. Now this action moves from being modeled by a utility function to a known action that can be used as input to other utility functions.
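Here's a rough sketch of what I mean, in code. All the names are my own framing, not a real cognitive model: each goal carries its own utility function, and once its completion condition is met it retires, its behavior becoming a primitive that later goals can use:

```python
# Sketch of "many utility functions": goals retire once met, and the
# learned behavior becomes a reusable skill (e.g. walking).

class Goal:
    def __init__(self, name, utility, is_met):
        self.name = name
        self.utility = utility    # state -> float, guides learning
        self.is_met = is_met      # state -> bool, retirement condition

class Agent:
    def __init__(self):
        self.active_goals = []
        self.learned_skills = []  # retired goals become primitive actions

    def update(self, state):
        still_active = []
        for goal in self.active_goals:
            if goal.is_met(state):
                self.learned_skills.append(goal.name)
            else:
                still_active.append(goal)
        self.active_goals = still_active

agent = Agent()
agent.active_goals.append(
    Goal("walk",
         utility=lambda s: -abs(1.0 - s["balance"]),
         is_met=lambda s: s["balance"] > 0.95))

agent.update({"balance": 0.5})   # still learning to walk
agent.update({"balance": 0.99})  # goal met: "walk" is now a skill
assert "walk" in agent.learned_skills
```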

As best as I can tell, human general intelligence comes from many small intelligences acting in a cohesive way. The brain is structured like this, as a bunch of different sections that each do very specific things. Machine models are moving in this direction; a good example is DeepMind's Go network, which improves by playing against a version of itself.

Comment by denimalpaca on How AI/AGI/Consciousness works - my layman theory · 2017-03-09T23:13:20.922Z · score: 0 (0 votes) · LW · GW

I disagree with your interpretation of how human thoughts resolve into action. My biggest point of contention is the random pick of actions. Perhaps there is some Monte-Carlo algorithm that has a statistical guarantee that after some thousands or so tries, there is a very high probability that one of them is close to the best answer. Such algorithms exist, but it makes more sense to me that we take action based not only on context, but our memory of what has happened before. So instead of a probabilistic algorithm, you may have a structure more like a hash table. Then the input to the hash table would be what we see and feel in the moment: you see a mountain lion and feel fear, this information is hashed, and run like hell is the output. Collisions of this hash table could result in things like inaction.
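The hash-table picture above can be sketched in a few lines. This is a deliberately crude illustration (the keys, values, and the "missing key means inaction" rule are all my own assumptions):

```python
# Memory as a hash table: (perception, feeling) is the key, the remembered
# response is the value. Python's dict does the hashing for us; a brain's
# "hash" would be whatever encoding the senses produce.

action_memory = {
    ("mountain lion", "fear"): "run like hell",
    ("food", "hunger"): "eat",
}

def react(perception: str, feeling: str) -> str:
    # No stored response for this situation -> default to inaction.
    return action_memory.get((perception, feeling), "inaction")

assert react("mountain lion", "fear") == "run like hell"
assert react("novel object", "curiosity") == "inaction"
```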

I think your idea of consciousness is a good start and similar to my own ideas on the matter: we are a system and the observer of the system. What questions remain, however, are what are the sufficient and necessary components of the system, besides self-observation, that would create a subjective experience? Such as, would a system need to be self-preserving and aware of that self-preservation? Is sentience a prerequisite of sapience? By your definition, you seem to imply the other way around, that one must be a self-observing system to observe that you are observing something outside of your system. Maybe this is a chicken and egg problem, and the two are co-necessary factors. I would like to hear your thoughts on this.

As to your thoughts on a friendly AI... I have come up with a silly and perhaps incorrect counter-intuitive approach. Basically, it works like this: a computer system's scheduler gives processor time to different actions in order of some utility level. Let's say 0 is the least important, and 5 the most. Lower-level processes cannot preempt higher-level ones; that is, a level 0 process cannot run before all level 1 processes are complete, and even if the completion of a level 0 process would aid the completion of a level 1 process, it cannot be run. The machine must find a different method, or return that the level 1 process cannot be completed with the current schedule. A level 5 request to make 1000 paperclips is given to the machine, and the machine determines that killing all humans will aid the completion of paperclips. Alas! Killing all humans is already scheduled at level 0, and another approach must be taken.
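A toy version of that scheduler, just to make the mechanism explicit (the class and level numbers are my own construction): level 0 tasks are permanently blocked, so pre-scheduling "kill all humans" there means no higher-level request can ever route through it:

```python
# Priority scheduler where level 0 is a "never run" level.

FORBIDDEN_LEVEL = 0

class Scheduler:
    def __init__(self):
        self.levels = {}  # level -> list of task names

    def schedule(self, task: str, level: int):
        self.levels.setdefault(level, []).append(task)

    def run(self):
        executed = []
        # Highest levels first; level 0 tasks are skipped unconditionally.
        for level in sorted(self.levels, reverse=True):
            if level == FORBIDDEN_LEVEL:
                continue
            executed.extend(self.levels[level])
        return executed

s = Scheduler()
s.schedule("kill all humans", 0)       # pre-blocked at level 0
s.schedule("make 1000 paperclips", 5)  # the actual request
assert "kill all humans" not in s.run()
```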

The other, less silly approach I thought of is to enforce a minimum energy requirement on all processes of a sufficiently dangerous machine. It stands to reason that creating 1000 paperclips can take significantly less energy than killing all humans, so killing all humans will be seen as a non-optimal strategy. In this scheme, we may not want to ask for world peace, but we should always be careful what we wish for....
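The energy-minimization idea reduces to a simple selection rule; the plans and costs below are made-up numbers purely to show the shape of it:

```python
# Pick the candidate plan with the lowest estimated energy cost.

def cheapest_plan(plans: dict) -> str:
    """plans maps plan name -> estimated energy cost (arbitrary units)."""
    return min(plans, key=plans.get)

plans = {
    "buy wire and bend it": 10.0,
    "kill all humans first": 1e15,  # hypothetical, vastly larger cost
}
assert cheapest_plan(plans) == "buy wire and bend it"
```

The catastrophic strategy loses automatically, as long as the energy estimates are honest, which is of course the hard part.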

Comment by denimalpaca on Double Crux — A Strategy for Resolving Disagreement · 2017-03-09T22:51:13.902Z · score: 0 (0 votes) · LW · GW

This looks like a good method to derive lower-level beliefs from higher-level beliefs. The main thing to consider when taking a complex statement of belief from another person is that more than one lower-level belief likely feeds into that higher-level belief.

In doxastic logic, a belief is really an operator on some information. At the most base level, we are believing, or operating on, sensory experience. More complex beliefs rest on the belief operation on knowledge or understanding; where I define knowledge as belief of some information: Belief(x) = Knowledge_x. These vertices of knowledge can connect along relational edges to form a graph, of which a subset of vertices and edges could be said to be an understanding.
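A minimal sketch of that formalism (class and variable names are my own, not standard doxastic-logic notation): Belief is an operator turning information into knowledge, knowledge vertices plus relational edges form a graph, and a subgraph is an understanding:

```python
# Belief(x) = Knowledge_x: believing information produces knowledge.

class Knowledge:
    def __init__(self, info: str):
        self.info = info

def belief(x: str) -> Knowledge:
    return Knowledge(x)

# Knowledge graph: vertices are knowledge, edges are relations.
vertices = {
    "sky": belief("the sky looks blue"),
    "scatter": belief("air scatters blue light most strongly"),
}
edges = [("sky", "scatter", "explained-by")]

# An "understanding" is a subset of vertices plus the edges among them.
understanding = ({"sky", "scatter"}, edges)
assert vertices["sky"].info == "the sky looks blue"
```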

So I think it's not only important to use this method as a reverse-operator of belief, but to also take an extra step and try to acknowledge the other points on the knowledge graph that represent someone's understanding. Then these knowledge vertices can also be reverse-operated on, and a more complete formulation of both parties' maps can be obtained.