Comments
Thank you.
Why the downvotes, and a statement that I am wrong because I misunderstood?
This is a mean-spirited reaction when I led with the admission that I could not follow the argument. I offered a concrete example and said that I could not follow the original thesis as applied to it. No one took me up on this.
Are you too advanced to stoop to my level of understanding and help me figure out how this abstract reasoning applies to a particular example? Is the shutdown mechanism suggested by Yudkowsky too simple?
I tried to follow along with a particular shutdown mechanism in mind, and this whole argument is too abstract for me to see how it applies.
Yudkowsky gave us a shutdown mechanism in his Time magazine article. He said we could bomb* the data centers. Can you show how these theorems cast doubt on this shutdown proposal?
*"destroy a rogue datacenter by airstrike." - https://time.com/6266923/ai-eliezer-yudkowsky-open-letter-not-enough/
Compute is not the limiting factor for mammalian intelligence. Mammalian brains are organized to maximize communication. The gray matter, where most of the compute is done, is mostly on the surface, and the white matter, which handles long-range communication, fills the interior, communicating in the third dimension.
If you plot the volume of white matter against the volume of gray matter across mammalian brains, you find that white matter grows super-linearly with gray matter. https://www.pnas.org/doi/10.1073/pnas.1716956116
As brains get larger, you need a higher ratio of communication/compute.
Your calculations, and Cotra's as well, focus on FLOPs, but the intelligence is created by communication.
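A minimal sketch of the white-vs-gray check I describe above, with placeholder volumes standing in for the PNAS data (the exponent it prints is illustrative, not the paper's value):

```python
import numpy as np

# Placeholder gray/white matter volumes (cm^3) for a handful of species;
# the real numbers would come from the PNAS dataset linked above.
gray  = np.array([5.0, 50.0, 300.0, 600.0])
white = np.array([1.0, 20.0, 200.0, 450.0])

# Fit white = a * gray^b on log-log axes.
# An exponent b > 1 means white matter (communication) grows super-linearly
# with gray matter (compute) as brains get larger.
b, log_a = np.polyfit(np.log(gray), np.log(white), 1)
print(f"scaling exponent b = {b:.2f}  (b > 1 => super-linear)")
```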
dy/dt = f(y) = m*y, whose solution is the compound-interest exponential, y = y_0 * e^(m*t).
Why not estimate m?
>An exactly right law of diminishing returns that lets the system fly through the soft takeoff keyhole is unlikely.
This blog post contains a false dichotomy. In the equation, m can take any value; there is no special keyhole value, and there is no sharp line between fast and slow.
The description in the subsequent discussion is a distraction. The posted equation is meaningful only if we have an estimate of the growth rate.
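To spell out what an estimate of m would buy us (a standard manipulation of the equation above, nothing taken from the post itself):

```latex
\frac{dy}{dt} = m\,y \;\Rightarrow\; y(t) = y_0\, e^{m t},
\qquad
t_{\mathrm{double}} = \frac{\ln 2}{m},
\qquad
\hat{m} = \frac{\ln\!\big(y(t_2)/y(t_1)\big)}{t_2 - t_1}.
```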
We don't have to tell it about the off switch!
You said nothing about positive contributions. Why didn't you also compute the expected contribution this project makes toward human flourishing? If you count only the negative contributions, you will find that the expected value of everything is negative.
The ML engineer is developing an automation technology for coding and is aware of AI risks. The engineer's polite acknowledgment of the concerns is met with your long derivation of how many current and future people she will kill with this.
Automating an aspect of coding is part of a long history of using computers to help design better computers, starting with Carver Mead's realization that you don't need humans to cut rubylith film to form each transistor.
You haven't shown an argument that this project will accelerate the scenario you describe. Perhaps the engineer is brushing you off because your reasoning is broad enough to apply to all improvements in computing technology. You will get more traction if you can show more specifically how this project is "bad for the world".
>The missile gap was a lie by the US Air Force to justify building more nukes, by falsely claiming that the Soviet Union had more nukes than the US.
This statement is not supported by the link used as a reference. Was it a lie? The reference speaks to failed intelligence and political manipulation using the perceived gap. The phrasing above suggests conspiracy.
Why doesn't an "off switch" protect us?
You have more than an order of magnitude of scatter in your plot, but you quote your calculated doubling period to 3 significant figures. Is this precision of value?
Also, your black data points appear to show something different going on prior to 2008. It would be worthwhile to do a separate fit to the post-2008 data; eyeballing it, the doubling time is longer than 4 years.
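Something like the following is what I mean, with made-up points in place of your data; the point is the separate post-2008 fit and reporting the result to a sensible precision:

```python
import numpy as np

# Made-up (year, value) points standing in for the plotted black data.
years  = np.array([2002, 2004, 2006, 2008, 2010, 2012, 2014, 2016])
values = np.array([1.0, 2.5, 5.0, 12.0, 20.0, 30.0, 55.0, 90.0])

def doubling_time(y, v):
    # Least-squares fit of log2(value) vs. year; the slope is doublings per year.
    slope, _intercept = np.polyfit(y, np.log2(v), 1)
    return 1.0 / slope

print(f"all data:  {doubling_time(years, values):.1f} year doubling time")
post = years >= 2008
print(f"post-2008: {doubling_time(years[post], values[post]):.1f} year doubling time")
```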
AI is dependent on humans. It gets power and data from humans, and it cannot go on without humans. We don't trade with it; we dictate terms.
Do we fear a world where we have turned over mining, production, and power generation entirely to the AI? Getting there would take a lot more than a self-amplifying feedback loop of a machine rewriting its own code.
When I was doing runs in the dozens of miles, I found it better to cache water ahead of time at the ten mile points. On a hot day, you need more water than you can comfortably carry.
OK, I could be that someone; here goes. You and the paper's author suggest a heat engine. That needs a hot side and a cold side. We build a heat engine where the hot side is kept hot by the incoming energy as described in the paper. The cold side is a surface we keep in radiative communication with the 3 kelvin temperature of deep space. To keep the cold side from melting, we need to keep it below a few thousand degrees, so we have to make it really large so that it can still radiate the energy.
From here, we can use the Stefan–Boltzmann law to show that we need to build a radiator much bigger than a billion times the surface area of Mercury. The required area goes as the fourth power of the ratio of temperatures in our heat engine.
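Spelling that scaling out (just the Stefan–Boltzmann law; no numbers from the paper): if the cold side at temperature T_c has to re-radiate the same power P that is absorbed from a hot source of area A_hot at temperature T_h, then

```latex
P = \sigma A_{\mathrm{rad}} T_c^{4}
\;\Rightarrow\;
A_{\mathrm{rad}} = \frac{P}{\sigma T_c^{4}},
\qquad
\frac{A_{\mathrm{rad}}}{A_{\mathrm{hot}}} = \left(\frac{T_h}{T_c}\right)^{4}.
```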
The paper's contribution is the suggestion of a self-replicating factory with exponential growth. That is cool. But the problem with all exponentials is that, in real life, they fail to grow indefinitely. Extrapolating an exponential across a dozen orders of magnitude, without entertaining such limits, is just silly.
A billion times the energy flux from the surface of the sun, over any extended area is a lot to deal with. It is hard to take this proposal seriously.
For the record, I find that scientists make such errors routinely. At public conferences, when optical scientists propose systems that violate the constant radiance theorem, I have no trouble standing up and saying so. It happens often enough that when I see a scientist propose such a system, it does not diminish my opinion of that scientist. I have fallen into this trap myself at times. Making this error should not be a source of embarrassment.
>either way, you are claiming Sandberg, a physicist who works with thermodynamic stuff all the time, made a trivial error of physics;
I did not expect this to revert to credentialism. If you were to find out that my credentials exceed this other guy's, would you change your position? If not, why appeal to credentials in your argument?
I stand corrected. Please forgive me.
Last summer I went on a week long backpacking trip where we had to carry out all our used toilet paper.
This year, I got this bidet for Christmas: https://www.garagegrowngear.com/products/portable-bidet-by-culoclean
You could carry one with you so you are no longer reliant on having them provided for you.
How much extra external energy is required to get an energy flux on Mercury of a billion times that leaving the sun? I have an idea, but my statmech is rusty. (The fourth root of a billion?)
And do we have to receive the energy and convert it to useful work with 99.999999999% efficiency to avoid melting the apparatus on Mercury?
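For what it's worth, here is the arithmetic behind my fourth-root guess, using only the T^4 flux scaling; whether this is the right framing is exactly what I'm asking:

```python
# If radiated flux scales as T^4, then a 1e9-fold increase in flux over the
# sun's surface flux implies a temperature ratio of (1e9)^(1/4).
T_SUN = 5772.0                 # K, solar surface temperature
ratio = 1e9 ** 0.25            # ~178
print(f"temperature ratio ~ {ratio:.0f}; implied target temperature ~ {ratio * T_SUN:.1e} K")
```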
DK> "I don't see how they are violating the second law of thermodynamics"
Take a large body C and a small body H. Collect the thermal radiation from C in some manner and deposit that energy on H. The power density emitted from C grows with temperature. The temperature of H grows with the power density deposited on it. If, without adding external energy, we concentrate the power density from the large body C to a higher power density on the small body H, then H gets hotter than C. We may then run a heat engine between H and C to make free energy. This is not possible; therefore we cannot do the concentration.
The étendue argument is just a special case where the concentration is attempted with mirrors or lenses. Changing the method to a photovoltaic/microwave/rectenna chain doesn't fix the issue, because the argument from the second law is broader and encompasses any method of concentrating the power density, as shown above.
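In symbols, the step the argument leans on is just the Stefan–Boltzmann relation between power density and temperature:

```latex
j = \sigma T^{4}
\;\Rightarrow\;
\big(j_H > j_C \;\Rightarrow\; T_H > T_C\big).
```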
When we extrapolate exponential growth, we must take care to look for where the extrapolation fails. Nothing in real life grows exponentially without bounds. "Eternity in Six Hours" relies on power which is 9 orders of magnitude greater than the limit of fundamental physical law.
Thanks for showing that Gwern's statement that I am "bad at reading" is misplaced.
The conservation of étendue is merely a particular version of the second law of thermodynamics. Now you are trying to invoke a multistep photovoltaic/microwave/rectenna method of concentrating energy, but you are still violating the second law of thermodynamics.
If one could concentrate the energy as you propose, one could build a perpetual motion machine.
>Kokotajlo writes: Wouldn't that be enough to melt, and then evaporate, the entirety of Mercury within a few hours? After all isn't that what would happen if you dropped Mercury into the Sun?
How do you get hours?
The sun emits light because it is hot. You can't concentrate thermal emission to be brighter than the source. (if you could, you could build a perpetual motion machine).
Eternity in Six Hours describes very large lightweight mirrors concentrating solar radiation onto planet Mercury.
The most power you could deliver from the sun to Mercury is the power of the sun times the square of the ratio of the radius of Mercury to the radius of the sun.
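In symbols, the bound I'm describing (with L_sun the total solar output and R the radii) is:

```latex
P_{\max} \;=\; L_{\odot}\left(\frac{R_{\mathrm{Mercury}}}{R_{\odot}}\right)^{2}.
```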
The total solar output is 4*10^26 watts. The ratio of the sun's radius to that of Mercury is half a million. So you can focus about 10^15 watts onto Mercury at most.
Figure 2 of Eternity in Six Hours projects getting 10^24 watts to do the job.
I have read Eternity in Six Hours, and I can say that it violates the second law of thermodynamics through violation of the constant radiance theorem. The power density they deliver to Mercury exceeds the power density of the radiation exiting the sun by 6 orders of magnitude!
You should have a look at the conference on retrocausation. And it would also be valuable to look at Garret Moddel's experiments on the subject.
You drew a right turn; the post is asking about a left turn.
Yes it is. When I took Feynman's class on computation, he presented an argument on Landauer's limit. It involved a multi-well quantum potential where the barrier between the wells was slowly lowered and the well depths adjusted. During the argument, one of the students asked if he had not just introduced a Maxwell's demon. Feynman got very defensive.
Why assume Gaussian?
Is it likely to do more good than harm?
Most AI safety criticisms carry a multitude of implicit assumptions. This argument grants the assumptions and attacks the wrong strategy.
We are better off improving a single high-level AI than making a second one. There is no battle between multiple high-level AIs if there is only one.
>What would you have said?
Your comment is stronger without this sentence.
Gwern asks, "Why would you do that and ignore (mini literature review follows):"
Thompson did not ignore the papers Gwern cites. A number of them are in Thompson's tables comparing prior work on scaling. Did Gwern tweet this criticism without even reading Thompson's paper?
Reliable? Your hard disk will be unreadable before long, while the human brain has developed ways to pass information down over generations.
>I dont think you are calibrated properly about the ideas that are most commonly shared in the LW community.
This is chastising him for failure to abide by groupthink.
The rest of your comment makes a point that is undermined by this statement.
FSM
https://www.spaghettimonster.org/2015/08/mars-sighting/
I must disagree. I roasted a large plane for Thanksgiving yesterday and it was incomparable to a bird. For tips on brining your plane, see here: https://en.wikipedia.org/wiki/US_Airways_Flight_1549
No, humans do not satisfy the assumptions adopted here, unless you make them more specific.
The definition of Generalize is given above as: "Generalizes, i.e., performs well in new domains, which were not optimized for during training, with no domain-specific tuning".
Whether you think humans do this depends on what you take for "new domains" and "perform well".
Humans taught to crawl on hardwood floors can crawl on carpeted floors. Humans taught to hunt fly larvae will need further training to hunt big game.
When we substitute credentials for reason, we get nowhere.
- Open the box if and only if you need the utilon. This simple solution gets you the maximum utilons if you need a utilon and none if you don't.
This is difficult for people with no ML background. The trouble is that one first has to explain timelines, then explain what averages and ranges most researchers in the field hold, and then explain why some discount those in favor of short AI timelines. That is a long arc for a skeptical person.
Aren't we all skeptical people? Carl Sagan said that extraordinary claims require extraordinary evidence. Explaining a short timeline is a heavy lift by its very nature.