Comments

Comment by SodaPopinski on Open thread, July 31 - August 6, 2017 · 2017-08-02T05:47:54.463Z · LW · GW

I believe Dyson is saying there could indeed be an infinite amount. Here is a Wikipedia article about it https://en.wikipedia.org/wiki/Dyson%27s_eternal_intelligence and the paper itself http://www.aleph.se/Trans/Global/Omega/dyson.txt
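As I understand the argument, the key move is that the energy cost per operation can be made to shrink geometrically, so an unbounded number of operations fits in a finite energy budget. A toy illustration (my numbers, not Dyson's):

```python
# If each successive computational step uses a fixed fraction r of the
# previous step's energy, infinitely many steps fit inside the finite
# budget E0 / (1 - r).
E0, r = 1.0, 0.5
bound = E0 / (1 - r)                          # closed form of the geometric series
partial = sum(E0 * r**n for n in range(100))  # energy used by the first 100 steps
print(f"total budget: {bound}, used after 100 steps: {partial:.12f}")
```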

Comment by SodaPopinski on Open thread, July 31 - August 6, 2017 · 2017-08-01T17:05:04.072Z · LW · GW

This is a very interesting part of an interview with Freeman Dyson where he talks about how computation could go on forever even if the universe faces a heat death scenario. https://www.youtube.com/watch?v=3qo4n2ZYP7Y

Comment by SodaPopinski on Open thread, Dec. 12 - Dec. 18, 2016 · 2016-12-13T17:49:12.444Z · LW · GW

In the same vein, I would highly recommend John Maynard Smith's "Evolution and the Theory of Games". It has many well-motivated examples of game theory in biology, written by a real biologist. The later chapters get dense, but the first half is readable with a basic knowledge of calculus (which was in fact my background when I first picked up the book).
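To give a flavor, here is a minimal sketch of the book's classic hawk-dove game (the payoff values V and C below are illustrative): when the cost of losing a fight C exceeds the resource value V, the evolutionarily stable strategy is to play hawk with probability V/C.

```python
# Hawk-dove payoffs: V is the value of the contested resource, C the cost
# of losing a fight. Entries are my payoff given (my move, opponent's move).
def payoffs(V, C):
    return {("H", "H"): (V - C) / 2, ("H", "D"): V,
            ("D", "H"): 0.0,         ("D", "D"): V / 2}

V, C = 2.0, 10.0
p = V / C  # ESS fraction of hawks when C > V
pay = payoffs(V, C)
# At the ESS, hawks and doves earn the same expected payoff:
hawk = p * pay[("H", "H")] + (1 - p) * pay[("H", "D")]
dove = p * pay[("D", "H")] + (1 - p) * pay[("D", "D")]
print(f"ESS hawk fraction: {p}, payoffs: hawk={hawk}, dove={dove}")
```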

Comment by SodaPopinski on Astrobiology III: Why Earth? · 2016-10-06T11:52:27.628Z · LW · GW

CellBioGuy, all your astrobiology posts are great; I'd be happy to read any of them. This may be off the astrobiology topic, but I would love to see a post with your opinion on the foom question. For example, do you agree with Gwern's post that there are no complexity limitations preventing runaway self-improving agents?

Comment by SodaPopinski on Open thread, Jul. 25 - Jul. 31, 2016 · 2016-07-29T18:22:55.423Z · LW · GW

Still reading; a minor nitpick: for point 2 you don't want to say NP (since P is contained in NP). It is the NP-hard problems that people would say can't be solved except for small instances (which, as you point out, is not a reasonable assumption).

Comment by SodaPopinski on Stupid Questions, 2nd half of December · 2015-12-30T01:25:17.355Z · LW · GW

Your first and second points make sense to me; together they make up the nominal interest rate. What I don't understand is your point about growth. The price of a stock should be determined by the (risk-adjusted) future returns of the company, right? The growth you speak of should already be accounted for in our models of those future returns. So if the price keeps going up, doesn't that mean the models are underestimating future returns?
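Trying to make the accounting concrete for myself, here is a toy one-period sketch (illustrative numbers) of how a risk premium in the discount rate produces an upward drift even when the model of future returns is exactly right:

```python
# An efficient market prices a risky cash flow by discounting at the
# risk-free rate plus a risk premium, so the expected return exceeds
# the risk-free rate without any forecasting error.
expected_cash_flow = 105.0            # expected payoff one year out
risk_free, risk_premium = 0.02, 0.05
price = expected_cash_flow / (1 + risk_free + risk_premium)
expected_return = expected_cash_flow / price - 1
print(f"price today: {price:.2f}, expected return: {expected_return:.1%}")
# expected return comes out to 7.0% == risk_free + risk_premium
```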

Comment by SodaPopinski on Stupid Questions, 2nd half of December · 2015-12-29T23:00:22.058Z · LW · GW

People in finance tend to believe (reasonably, I think) that the stock market trends upward. I believe they mean it trends upward even after you account for the value of the risk you take on by buying stock in a company (i.e., being in the stock market is not just selling insurance). So how does this mesh with the general belief that the market is at least pretty efficient? Why are we systematically underestimating the future returns of companies?

Comment by SodaPopinski on Open thread, Dec. 14 - Dec. 20, 2015 · 2015-12-16T20:49:40.709Z · LW · GW

About 20/50; I don't know if that can be unambiguously converted to diopters. I measure my performance by sitting a constant 20 feet away, and when I am over 80% correct I shrink the font on the chart a little bit. I can currently read a slightly smaller font than the one corresponding to 20/50 on an eye chart.

Comment by SodaPopinski on Open thread, Dec. 14 - Dec. 20, 2015 · 2015-12-16T20:09:20.454Z · LW · GW

Does anyone know of a good program for eye training? I would like to try to become a little less near-sighted by straining to make out things at the edge of my range of good vision. I know being near-sighted means my eyeball is elongated, but I am hoping my brain can fix a bit of the distortion in software. Currently I am using randomly generated printed-out eye charts, and I have gotten a bit better over time, but printing out the charts is tedious.
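In the meantime, here is a minimal sketch of a throwaway script I could use instead of printing (the letter set and shrink factor are my own arbitrary choices, not optometric standards):

```python
# Generate a random letter chart as an HTML file so a fresh chart can be
# re-rendered on screen instead of reprinted.
import random

SLOAN = "CDHKNORSVZ"  # letters commonly used on acuity charts

def make_chart(rows=8, letters_per_row=5, start_px=60, shrink=0.8):
    size, lines = float(start_px), []
    for _ in range(rows):
        row = " ".join(random.choice(SLOAN) for _ in range(letters_per_row))
        lines.append(f'<div style="font-size:{size:.0f}px;'
                     f'font-family:monospace">{row}</div>')
        size *= shrink
    return "<html><body>" + "\n".join(lines) + "</body></html>"

with open("chart.html", "w") as f:
    f.write(make_chart())
print("Wrote chart.html; view it from a fixed distance.")
```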

Comment by SodaPopinski on Newcomb, Bostrom, Calvin: Credence and the strange path to a finite afterlife · 2015-11-06T17:30:42.860Z · LW · GW

This is a really fascinating idea, particularly the aspect that we can influence the likelihood that we are in a simulation by making it more likely that simulations happen.

To boil it down to a simple thought experiment: suppose I am in the future, where we have a ton of computing power, and I know something bad will happen tomorrow (say, I'll be fired) barring some 1/1000-likelihood quantum event. No problem: I'll just make millions of simulations of the world with me in my current state, arranged so that tomorrow the 1/1000 event happens, and I'm saved, since I'm almost certainly in one of these simulations I'm about to make!
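Under a naive equal weighting over subjectively identical copies (itself a contested assumption), the arithmetic comes out like this:

```python
# One physical copy sees the rare event with probability 1/1000, while
# all N simulated copies are arranged to see it for sure.
N = 1_000_000          # simulations I'm about to run
p_physical = 1 / 1000  # chance of the event in the base world
p_saved = (N * 1.0 + 1 * p_physical) / (N + 1)
print(f"P(I experience the event): {p_saved:.6f}")  # ~0.999999
```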

Comment by SodaPopinski on Open thread, Nov. 02 - Nov. 08, 2015 · 2015-11-06T01:42:37.462Z · LW · GW

I agree with your sentiment. I am hoping, though, that one can formally define what a computation is, given a physical system. Perhaps you are on to something with the causal requirement, but I think this is hard to pin down precisely. The noise is still being caused by the previous state of the system, so how can we sensibly talk about cause in a physical system? It seems like we are more interested in 'causes' associated with agent-like objects, like an engine, than with formless things, like the previous state of a cloud of gas. Actually, I think Caspar's article was trying to formalize something like this, but I don't understand it that well: http://lesswrong.com/r/discussion/lw/msg/publication_on_formalizing_preference/

Comment by SodaPopinski on Open thread, Nov. 02 - Nov. 08, 2015 · 2015-11-05T22:52:13.788Z · LW · GW

Take the thermal noise generated in part of the circuit. By setting a threshold we can interpret it as a sequence 110101011, etc. Now, if this sequence were enormous, we would eventually have a pixel-by-pixel description of any picture, a letter-by-letter description of every book, a state-after-state description of the tape of any Turing machine, and so on (basically a Library of Babel situation). Of course we would need a crazy long sequence for this, but there is similar noise associated with the motion of every atom in the circuit; likewise, the noise is far more complex if we don't truncate it to 0's and 1's; and finally, there are many, many possible encodings of our resulting strings (does 110 represent the letter A, 0101 a blue pixel, and so on).
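The first part of this is easy to check directly; a tiny sketch (the seed and lengths are arbitrary):

```python
# A fixed short pattern reliably appears in a long random bit sequence;
# for a 9-bit target you expect a first hit within roughly 2**9 = 512 bits.
import random

random.seed(0)
noise = "".join(random.choice("01") for _ in range(100_000))
target = "110101011"
print(f"'{target}' first appears at index {noise.find(target)}")
```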

If I choose ahead of time the procedure by which the thermal noise fluctuates, seed in two instances of noise I think of as representing 2 and 3, and after a while it outputs a thermal noise I think of as 5, then I am OK calling that a computation. But why should my naming of the noise, and my dictating how the system develops, be required for computation to occur?

Comment by SodaPopinski on Open thread, Nov. 02 - Nov. 08, 2015 · 2015-11-05T19:17:27.142Z · LW · GW

It is interesting to compare the LessWrong and Wikipedia articles on recursive self-improvement: http://wiki.lesswrong.com/wiki/Recursive_self-improvement https://en.wikipedia.org/wiki/Recursive_self-improvement I still find the anti-foom arguments based on diminishing returns in the Wikipedia article compelling. Has there been any progress on modelling recursively self-improving systems beyond what we can find in the foom debate?

Comment by SodaPopinski on Open thread, Nov. 02 - Nov. 08, 2015 · 2015-11-04T19:41:12.271Z · LW · GW

If there are really infinite instances of conscious computations, then I don't think it is unreasonable to believe that there is no measure assigning more or less weight to any of them, and that we simply have no more reason to be surprised to be living in one type of simulation than in another. I guess my interest in the question was whether there is any way to avoid throwing the baby out with the bathwater, by having a reasonable, more restrictive notion of what a computation is.

Comment by SodaPopinski on Open thread, Oct. 19 - Oct. 25, 2015 · 2015-11-04T19:30:57.507Z · LW · GW

My question is simply: do we have any reason to believe that the uncertainty introduced by quantum mechanics will preclude the level of precision with which two agents have to model each other in order to engage in acausal trade?

Comment by SodaPopinski on Open thread, Nov. 02 - Nov. 08, 2015 · 2015-11-04T01:58:02.017Z · LW · GW

What is a computation? Intuitively, some (say binary) states of the physical world are changed: voltage gates switched, rocks moved around (https://xkcd.com/505/), whatever.
Now, in general, if these physical changes were done with some intention, as in my CPU or by the guy moving the rocks in the xkcd comic, then I think of this as a computation, and consequently I would care, for example, about whether the computation I performed simulated a conscious entity.

However, surely my or my computer's intention can't be what makes the physical state changes count as a computation. But then how do we get around the slippery slope where everything is computing everything imaginable? There are billions of states I can interpret as 1's and 0's, which get transformed in countless different ways every time I stir my coffee. Even worse, in quantum mechanics the state of a point is given by a potentially infinitely wiggly function. What stops me from interpreting all of this as computation which, under some encoding, gives rise to countless Boltzmann-brain-type conscious entities and simulated worlds?

Comment by SodaPopinski on Open thread, Oct. 19 - Oct. 25, 2015 · 2015-10-24T22:51:41.247Z · LW · GW

Yes, I understand the point of acausal trading. The point of my question was to speculate on how likely it is that quantum mechanics prohibits modeling accurate enough to make acausal trading actually work. My intuition is based on the fact that, in general, faster-than-light transmission of information is prohibited. For example, even though entangled particles update on each other's states when they are outside each other's light cones, it is known that it is not possible to transmit information faster than light using this fact.
Now, does mutually enhancing each other's utility count as information? I don't think so. But my instinct is that acausal trade protocols will not be possible, due to the level of modelling required and the noise introduced by quantum mechanics.

Comment by SodaPopinski on Open thread, Oct. 19 - Oct. 25, 2015 · 2015-10-21T22:57:26.547Z · LW · GW

Do we know whether quantum mechanics could rule out acausal trade between partners outside each other's light cones? Perhaps it is impossible to model someone so far away precisely enough to get a utility gain out of an acausal trade? I started thinking about this after reading the wiki article on the 'free will theorem': https://en.wikipedia.org/wiki/Free_will_theorem .

Comment by SodaPopinski on Stupid questions thread, October 2015 · 2015-10-16T21:05:07.363Z · LW · GW

Where can I find the most coherent anti-FOOM argument (outside of the FOOM debate)? [That is, I'm looking for arguments for the possibility of not having an intelligence explosion if we reach near-human-level AI; the other side is pretty well covered on LW.]

Comment by SodaPopinski on Stupid questions thread, October 2015 · 2015-10-16T20:59:41.314Z · LW · GW

If we obtained a good understanding of the beginning of life and found that the odds of life occurring at some point in our universe were one in a million, then what exactly would follow from that? Sure, the Fermi paradox would be settled, but would this give credence to multiverse/big-world theories, or does the fact that the information is anthropically biased tell us nothing at all? Finally, if we don't have to suppose a multiverse to account for a vanishingly small probability of life, then wouldn't it be surprising if there were not a lot of hugely improbable jumps in the formation of intelligent life?

Comment by SodaPopinski on Stupid Questions September 2015 · 2015-09-03T18:07:05.022Z · LW · GW

Are there any good nonfiction books on cryonics? All I could find is this one: http://www.amazon.com/Freezing-People-Not-Easy-Adventures/dp/0762792957/ref=sr_1_1?ie=UTF8&qid=1441303378&sr=8-1&keywords=cryonics . I started to read it, but it is more historical and autobiographical. Also, do you think there would be demand for a well-researched book on cryonics for general audiences?

Comment by SodaPopinski on Stupid Questions September 2015 · 2015-09-03T17:50:10.536Z · LW · GW

Is it useful to think about the difference between 'physically possible', i.e. obeying the laws of physics, and 'possible to engineer'? In computer science there is something like this distinction. You have things which can't be done on a Turing machine at all (e.g. the halting problem). But then you have things which we may never be able to arrange the atoms in the universe to do, such as solving large instances of NP-hard problems.
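A back-of-envelope version of that second kind of impossibility:

```python
# Brute-forcing a 300-variable SAT instance means ~2**300 ~ 10**90 cases,
# versus a commonly cited ~10**80 atoms in the observable universe.
from math import log10
print(f"2**300 is about 10**{300 * log10(2):.0f}")  # ~10**90
```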

So what about in physics? I have seen the argument that if we set loose a paperclip maximizer on Earth, then we might doom the rest of the observable universe. But maybe there is simply no sequence of steps that even a super-brilliant AI could take to arrange matter in such a way as to, say, move 1000 kg at 98% of the speed of light. Anyway, I am curious whether this kind of thinking is developed somewhere.
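For a sense of scale on that example (standard relativistic kinematics, with rounded constants):

```python
# Kinetic energy of 1000 kg at 0.98c: KE = (gamma - 1) * m * c**2.
c, m = 3.0e8, 1000.0
v = 0.98 * c
gamma = 1.0 / (1.0 - (v / c) ** 2) ** 0.5
KE = (gamma - 1.0) * m * c ** 2
print(f"KE ~ {KE:.2e} J")  # ~3.6e20 J, on the order of humanity's annual energy use
```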

Comment by SodaPopinski on Stupid Questions September 2015 · 2015-09-03T17:11:14.605Z · LW · GW

What is the best way to handle police interactions in countries you don't live in? In the US it is generally considered pretty wise to exercise your right to remain silent extensively. Now, obviously, in some really corrupt places you're just going to have to go along with whatever they want. But what about the different countries of Europe? My instinct would be to respectfully tell the officer I would like to call my embassy (and to have that number with me!).

Comment by SodaPopinski on Stupid Questions September 2015 · 2015-09-03T17:00:33.972Z · LW · GW

Can we use the stock market itself as a useful prediction market in any way? For example, can we get useful information about how long Moore's-law-type growth in microprocessors is likely to continue, based on how much the market values certain companies? Or are there too many auxiliary factors, so that reverse-engineering anything interesting from price information is hopeless?

Comment by SodaPopinski on Open Thread, Jun. 8 - Jun. 14, 2015 · 2015-06-11T03:27:02.042Z · LW · GW

What do we really understand about the perception of time speeding up as we get older? Every time I have seen it brought up, one of two explanations is given. The first is that time seems to speed up because we have fewer novel experiences, which in turn leads to fewer new memories being created. Supposedly our feeling of time passing depends on how many new memories we form in a given time frame, so fewer memories make time feel faster.

The other explanation I have seen is that time speeds up because each new year is a smaller percentage of your life up to that point. For example, it is easier to distinguish a 2 kg weight from a 4 kg weight than a 50 kg weight from a 52 kg weight. The argument goes that something similar holds for our perception of time passing.
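That second explanation at least makes a concrete prediction. In a toy Weber-Fechner-style model (my own formalization, not an established result), where the subjective length of year n is proportional to 1/n, subjective time between ages a and b goes like ln(b/a):

```python
# On this model the subjective midpoint of a life from age 1 to age 80
# falls at the geometric mean sqrt(1 * 80), i.e. around age 9.
import math

def subjective(a, b):
    return math.log(b / a)

mid = math.sqrt(1 * 80)
print(mid, subjective(1, mid), subjective(mid, 80))  # two equal halves
```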

These arguments both feel sketchy to me. Is there a more rigorous investigation into this question?

Comment by SodaPopinski on Probability of coming into existence again ? · 2015-02-28T17:49:19.653Z · LW · GW

The problem is the mental construct of "I". Yes, we can't help but believe that there is feeling, thinking, subjective experience, etc. The problem is that our brain naturally constructs a concept of "I" as a sort of owner of these subjective experiences that persists over time. This construct, while deeply ingrained and probably useful, is not consistent with physical reality. This can be seen either with teleporter-type thought experiments or, to some extent, with real-life cases of brain trauma (for example, in Oliver Sacks's or Ramachandran's books). Our brains care about protecting certain potential future entities, which, barring crazy technology or anthropic scenarios, are easy to specify, but there is not going to be a coherent general principle for deciding when we should count potential future entities as being us.

Comment by SodaPopinski on A rational approach to the issue of permanent death-prevention · 2015-02-11T16:36:51.241Z · LW · GW

The idea of a persistent personal identity has no physical basis. I am not questioning consciousness, only saying that the mental construct that some particular sequence of conscious feelings has an owner over time is inconsistent with reality (as I would argue all the teleporter-type thought experiments show). So in my view all that matters is how much a certain entity X decides (or instinctually feels) it should care about some similar-seeming later entity Y.

Comment by SodaPopinski on Open thread, Feb. 9 - Feb. 15, 2015 · 2015-02-09T23:03:20.339Z · LW · GW

Are there things we should be doing now to take advantage of future technology? What I mean is something like banking umbilical cord blood for potential future stem-cell uses. Another example: if we had taken a lot of pictures of a historical building that is now gone, we could use modern photogrammetry to make a 3D model of it. A potential current example: suppose we recorded a ton of our day-to-day vocal communication; some day in the future, a new machine learning algorithm could make use of the data. So what I am looking for is whether there are any potential 'missed opportunities' of this type we should be considering (I posted a similar question on the futurology subreddit).

Comment by SodaPopinski on Open thread, Dec. 15 - Dec. 21, 2014 · 2014-12-15T16:50:10.024Z · LW · GW

How do Bostrom-type simulation arguments normally handle nested simulations? If our world spins off simulations A and B, and B spins off C and D, then how do we assign the probabilities of finding ourselves in each of those? Also troubling to me is what happens if a world simulates itself, or if simulations A and B simulate each other. Is there a good way to think about this?
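One naive toy scheme, just to make the question concrete (my own construction, not a standard answer): give the base world weight 1 and split a fixed fraction s of each world's weight equally among its direct simulations. Tellingly, the recursion diverges on exactly the troubling cases, a self-simulating world or a mutually simulating pair:

```python
# Naive measure over a simulation tree: each world with children keeps
# (1 - s) of its weight and passes a fraction s down, split equally.
def weights(tree, s=0.5):
    out = {}
    def rec(node, w):
        kids = tree.get(node, [])
        out[node] = w * (1 - s) if kids else w
        for k in kids:
            rec(k, w * s / len(kids))
    rec("root", 1.0)
    return out

# our world spins off A and B; B spins off C and D
print(weights({"root": ["A", "B"], "B": ["C", "D"]}))
# {'root': 0.5, 'A': 0.25, 'B': 0.125, 'C': 0.0625, 'D': 0.0625}
```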

Comment by SodaPopinski on Open thread, Dec. 15 - Dec. 21, 2014 · 2014-12-15T15:30:01.424Z · LW · GW

One part is writing down whatever dreams I can remember right upon waking. This has led to me occasionally experiencing lucid dreams without really trying.

Also, since I am writing down dreams anyway, it is easy to do the other writing I find beneficial, namely writing down the major plan for the day and gratitude stuff.

Comment by SodaPopinski on Superintelligence 12: Malignant failure modes · 2014-12-05T19:18:00.213Z · LW · GW

On one hand, I think the world is already somewhat close to a singleton with regard to AI (obviously it is nowhere near a singleton with regard to most other things). I mean, Google has a huge fraction of the AI talent. The US government has a huge fraction of the mathematics talent. Then there are Microsoft, FB, Baidu, and a few other big tech companies. But every time an independent AI company gains some traction, it seems to get bought out by the big guys. I think this is a good thing, as I believe the big guys will act in their own best interest, including their interest in preserving their own lives (i.e., not ending the world). Of course, if it is easy to make an AGI, then there is no hope anyway. But if it requires companies of Google's scale, then there is hope they will choose to avoid it.

Comment by SodaPopinski on Superintelligence 12: Malignant failure modes · 2014-12-02T13:31:49.624Z · LW · GW

Totally agree, and I wish this opinion were voiced more on LW, rather than the emphasis on trying to make a friendly self-improving AI. For this to make sense, though, I think the human race needs to become a singleton, although perhaps that is what Google's acquisitions and massive government surveillance are already accomplishing.

Comment by SodaPopinski on Open thread, Dec. 1 - Dec. 7, 2014 · 2014-12-02T03:39:00.394Z · LW · GW

(Warning: brain dump, most of which is probably not new to the thinking on LW.) I think most people who take the Tegmark Level 4 universe seriously (or any of the preexisting similar ideas) get there by something like the following argument: suppose we had a complete mathematical description of the universe; then exactly what more could there be to make the thing real (Hawking's "fire into the equations")?

Here is the line of thinking that got me to buy into it. If we ran a computer simulation, watched the results on a monitor, and saw a person behaving just like us, then it would be easy for me to interpret their world, their mind, etc. as real (even if I could never experience it viscerally, living outside the simulation). However, if we are willing to call one simulation real, then we get into the slippery-slope problem, which I have no idea how to avoid, whereby any physical phenomenon implementing any program, from the perspective of any universal Turing machine, must really exist. So it seems to me that if we believe some simulation is real, there is no obvious barrier to believing every (computable) universe exists. As for whether we stop at computable universes or include more of mathematics, I am not sure anything we would call conscious could tell the difference, so perhaps it makes no difference.

(Resulting beliefs, plus an aside on decision theory.) I believe in a Tegmark Level 4 universe with no reality-fluid measure (as I have yet to see a convincing argument for one), a la http://lesswrong.com/r/discussion/lw/jn2/preferences_without_existence/ . Moreover, I don't think there is any "correct" decision theory that captures what we should be doing. All we can do is pick the one that feels right with regard to our biological programming. Which future entities are us, how many copies of us there will be, whom we should care about, etc. are all flaky concepts at best. Of course, my brain won't buy into the idea that I should jump off a bridge or touch a hot stove, but I think it is implausible that this will follow from any objective optimization principle. Nature didn't need a decision theory to decide whether it is a good idea to walk into a teleporter machine if two of us walk out the other side. We have our built-in, shabby biological decision theory; we can innovate on it theoretically, but there is no objective sense in which some particular decision theory will be the right one for us.

Comment by SodaPopinski on Open thread, Dec. 1 - Dec. 7, 2014 · 2014-12-01T22:25:59.160Z · LW · GW

Elon Musk often advocates looking at problems through first-principles calculation rather than by analogy. My question is what this kind of thinking implies for cryonics. Currently, the cost of full-body preservation is around $80k. What could be done in principle with scale?

Ralph Merkle put out a plan (although lacking in details) for cryopreservation at around $4k. This doesn't seem to account for paying the staff or for transportation. The basic idea is that one can reduce the marginal cost by preserving a huge number of people in one vat. There is some discussion of this going on at Longecity, but the details are still lacking.
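The scaling intuition behind the one-big-vat idea seems to be the square-cube law; a toy sketch (my own made-up constants, not Merkle's figures):

```python
# Liquid-nitrogen boil-off scales with a vat's surface area, while capacity
# scales with its volume, so per-patient upkeep falls roughly like 1/r.
import math

def ln2_cost_per_patient(r, patients_per_m3=1.0, cost_per_m2_year=100.0):
    area = 4 * math.pi * r ** 2           # spherical vat of radius r meters
    volume = (4 / 3) * math.pi * r ** 3
    return area * cost_per_m2_year / (volume * patients_per_m3)

for r in (1, 2, 5, 10):
    print(f"r = {r:2d} m: {ln2_cost_per_patient(r):.0f} per patient-year")
```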

Comment by SodaPopinski on Open thread, Nov. 24 - Nov. 30, 2014 · 2014-11-26T15:14:45.070Z · LW · GW

This is a disturbing talk from Schmidhuber (who worked with Hutter, and with one of the founders of DeepMind, at the Swiss AI lab). I say disturbing because of the last minute, where he basically says we should be thankful to be the stepping stone to the next step in an evolution toward a world run by AIs. This is the nonsense we see repeated almost everywhere (outside LessWrong): that we should be happy to have humanity supplanted by more intelligent AI, and here it is coming from a pretty well-known AI researcher... https://www.youtube.com/watch?v=KQ35zNlyG-o

Comment by SodaPopinski on Open thread, Nov. 10 - Nov. 16, 2014 · 2014-11-10T20:50:10.789Z · LW · GW

Suppose we believe that stock market prices are very good aggregators of information about companies' future returns. What would be the signs that the "big money" is predicting (a) a positive, post-scarcity-type singularity event or (b) an apocalypse scenario, AI-induced or otherwise?

Comment by SodaPopinski on Open thread, Nov. 3 - Nov. 9, 2014 · 2014-11-07T06:14:19.616Z · LW · GW

What is the current status of formalizing timeless decision theory? I am new to LW, have a mathematics background, and would like to work on decision theory (in the spirit of LW). However, all I can find are some old posts (2011) of Eliezer's saying that write-ups are in progress, as well as a 120-page MIRI report by Eliezer which mostly discusses TDT in words, along with the related philosophical problems. Is there a formal, self-contained definition of TDT out there?