Posts

What we owe the microbiome 2022-12-17T19:40:11.462Z
Playing shell games with definitions 2022-12-08T19:35:35.197Z

Comments

Comment by weverka on The Shutdown Problem: An AI Engineering Puzzle for Decision Theorists · 2023-11-22T01:04:08.145Z · LW · GW

Thank you.

Comment by weverka on The Shutdown Problem: An AI Engineering Puzzle for Decision Theorists · 2023-11-20T13:45:06.010Z · LW · GW

Why the downvotes, and a statement that I am wrong because I misunderstood?

This is a mean-spirited reaction when I led with an admission that I could not follow the argument.  I offered a concrete example and said that I could not follow the original thesis as applied to it.  No one took me up on this.

Are you too advanced to stoop to my level of understanding and help me figure out how this abstract reasoning applies to a particular example?  Is the shutdown mechanism suggested by Yudkowsky too simple?

Comment by weverka on The Shutdown Problem: An AI Engineering Puzzle for Decision Theorists · 2023-10-23T18:11:36.006Z · LW · GW

I tried to follow along with a particular shutdown mechanism in mind, and the whole argument is too abstract for me to see how it applies.

Yudkowsky gave us a shutdown mechanism in his Time magazine article: he said we could bomb* the data centers.  Can you show how these theorems cast doubt on this shutdown proposal?

*"destroy a rogue datacenter by airstrike." - https://time.com/6266923/ai-eliezer-yudkowsky-open-letter-not-enough/

Comment by weverka on Transformative AGI by 2043 is <1% likely · 2023-06-09T20:34:57.192Z · LW · GW

Compute is not the limiting factor for mammalian intelligence.  Mammalian brains are organized to maximize communication.  The gray matter, where most of the compute is done, is mostly on the surface, and the white matter, which dominates long-range communication, fills the interior, communicating in the third dimension.

If you plot the volume of white matter vs. gray matter across mammal brains, you find that the volume of white matter grows superlinearly with the volume of gray matter.  https://www.pnas.org/doi/10.1073/pnas.1716956116

As brains get larger, you need a higher ratio of communication to compute.

Your calculations, and Cotra's as well, focus on FLOPs, but the intelligence is created by communication.
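
To illustrate what "grows superlinearly" means here, a minimal sketch of the kind of check one could run: fit a power law W ∝ G^a in log-log space and see whether the exponent exceeds 1.  The volumes below are placeholders, not the PNAS data.

```python
import numpy as np

# Placeholder gray- and white-matter volumes (cm^3) for hypothetical
# mammal brains -- illustrative values only, not the paper's data.
gray = np.array([5.0, 50.0, 300.0, 500.0])
white = np.array([1.0, 20.0, 180.0, 350.0])

# Fit W = c * G^a by linear regression in log-log space.
a, log_c = np.polyfit(np.log(gray), np.log(white), 1)
print(f"exponent a ≈ {a:.2f}")  # a > 1 means superlinear growth

# The communication/compute ratio W/G then scales as G^(a-1),
# i.e., it rises with brain size.
```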

Comment by weverka on Hard Takeoff · 2023-05-28T13:43:47.415Z · LW · GW

dy/dt = f(y) = m*y, whose solution is the compound-interest exponential y = e^(m*t).

Why not estimate m?  

> An exactly right law of diminishing returns that lets the system fly through the soft takeoff keyhole is unlikely.

This blog post contains a false dichotomy.  In the equation, m can take any value; there is no special keyhole value, and no line between fast and slow.

The description in the subsequent discussion is a distraction.  The posted equation is meaningful only if we have an estimate of the growth rate.  
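
To make "estimate m" concrete, here is the relation between m and an observed doubling time (a worked note; the example rates are mine, for illustration):

```latex
\frac{dy}{dt} = m\,y \;\Rightarrow\; y(t) = y_0 e^{m t},
\qquad T_{\mathrm{double}} = \frac{\ln 2}{m}.
% Illustrative rates: an economy doubling every ~20 years has
% m = ln(2)/20 yr ≈ 0.035 per year; a system doubling weekly has
% m ≈ 36 per year. The equation permits both, so without an estimate
% of m it cannot distinguish slow takeoff from fast.
```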

Comment by weverka on All AGI Safety questions welcome (especially basic ones) [~monthly thread] · 2023-04-01T14:55:31.010Z · LW · GW

We don't have to tell it about the off switch!

Comment by weverka on Lotteries: A Waste of Hope · 2023-03-12T01:18:58.659Z · LW · GW
Comment by weverka on [deleted post] 2023-02-08T13:59:48.820Z

You said nothing about positive contributions.  When you throw away the positives, everything is negative.  

Comment by weverka on [deleted post] 2023-02-06T14:13:33.742Z

Why didn't you also compute the expectation this project contributes towards human flourishing?

If you only count the negative contributions, you will find that the expectation value of everything is negative. 

Comment by weverka on [deleted post] 2023-02-05T14:30:40.740Z

The ML engineer is developing an automation technology for coding and is aware of AI risks.  The engineer's polite acknowledgment of the concerns is met with your long derivation of how many current and future people she will kill with this.

Automating an aspect of coding is part of a long history of using computers to help design better computers, starting with Carver Mead's realization that you don't need humans to cut rubylith film to form each transistor.

You haven't shown that this project will accelerate the scenario you describe.  Perhaps the engineer is brushing you off because your reasoning is broad enough to apply to all improvements in computing technology.  You will get more traction if you can show more specifically how this project is "bad for the world".

Comment by weverka on The AI Timelines Scam · 2023-01-29T17:09:49.138Z · LW · GW

> The missile gap was a lie by the US Air Force to justify building more nukes, by falsely claiming that the Soviet Union had more nukes than the US

This statement is not supported by the link used as a reference.  Was it a lie?  The reference speaks to failed intelligence and political manipulation using the perceived gap. The phrasing above suggests conspiracy.

Comment by weverka on All AGI Safety questions welcome (especially basic ones) [~monthly thread] · 2023-01-26T22:28:01.480Z · LW · GW

Why doesn't an "off switch" protect us?

Comment by weverka on Trends in GPU price-performance · 2023-01-24T19:13:00.913Z · LW · GW

You have more than an order of magnitude of scatter in your plot, but you quote three significant figures for your calculated doubling period.  Is this precision of value?

Also, your black data points appear to show something different going on prior to 2008.  It would be worthwhile doing a separate fit to the post-2008 data; eyeballing it, the doubling time is longer than 4 years.
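
For what it's worth, a sketch of the kind of separate fit I mean, assuming points were read off the post-2008 portion of the plot (the values below are made up for illustration):

```python
import numpy as np

# Hypothetical post-2008 (year, FLOP/s per dollar) points -- not the
# paper's data, just illustrative numbers with a ~4-year doubling.
years = np.array([2009, 2012, 2015, 2018, 2021])
perf = np.array([1.0e8, 1.7e8, 2.8e8, 4.6e8, 7.7e8])

# Fit log2(perf) = b*year + c; the doubling time is 1/b years.
b, c = np.polyfit(years, np.log2(perf), 1)
print(f"doubling time ≈ {1/b:.1f} years")  # quote ~2 figures, given the scatter
```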

Comment by weverka on We don’t trade with ants · 2023-01-20T19:59:40.393Z · LW · GW

AI is dependent on humans.  It gets power and data from humans, and it cannot go on without humans.  We don't trade with it; we dictate terms.

Do we fear a world where we have turned over mining, production, and powering everything to the AI?  Getting there would take a lot more than a self-amplifying feedback loop of a machine rewriting its own code.

Comment by weverka on Running With a Backpack · 2023-01-11T20:06:52.573Z · LW · GW

When I was doing runs of dozens of miles, I found it better to cache water ahead of time at the ten-mile points.  On a hot day, you need more water than you can comfortably carry.

Comment by weverka on What 2026 looks like · 2023-01-10T14:06:59.606Z · LW · GW

Ok, I could be that someone.  Here goes.  You and the paper's author suggest a heat engine.  That needs a hot side and a cold side.  We build a heat engine where the hot side is kept hot by the incoming energy as described in this paper.  The cold side is a surface we hold in radiative communication with the 3 K temperature of deep space.  In order to keep the cold side from melting, we need to keep it below a few thousand degrees, so we have to make it really large so that it can still radiate the energy.

From here, we can use the Stefan–Boltzmann law to show that we need to build a radiator much bigger than a billion times the surface area of Mercury.  The required area goes as the fourth power of the ratio of temperatures in our heat engine.
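
A back-of-envelope version of that estimate, as a minimal sketch (the 10^9 flux multiplier and the ~2000 K radiator ceiling are assumptions from this thread, not the paper's figures):

```python
import math

SIGMA = 5.67e-8     # Stefan-Boltzmann constant, W m^-2 K^-4
T_SUN = 5772.0      # solar surface temperature, K
R_MERCURY = 2.44e6  # Mercury's radius, m

flux_sun = SIGMA * T_SUN**4                # ~6.3e7 W/m^2 at the sun's surface
area_mercury = 4 * math.pi * R_MERCURY**2  # ~7.5e13 m^2

# Premise under discussion: absorbed flux ~1e9 times the solar surface flux.
power_in = 1e9 * flux_sun * area_mercury   # ~5e30 W

# Radiator area needed to reject that power at T_cold: A = P / (sigma*T^4).
# Note the 1/T_cold^4 dependence -- the fourth-power ratio mentioned above.
T_cold = 2000.0                            # "a few thousand degrees" ceiling
area_radiator = power_in / (SIGMA * T_cold**4)

print(area_radiator / area_mercury)        # ~7e10 Mercury surface areas
```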

The paper's contribution is the suggestion of a self-replicating factory with exponential growth.  That is cool.  But the problem with all exponentials is that, in real life, they fail to grow indefinitely.  Extrapolating an exponential across a dozen orders of magnitude, without entertaining such limits, is just silly.

Comment by weverka on What 2026 looks like · 2023-01-09T13:25:07.006Z · LW · GW

A billion times the energy flux from the surface of the sun, over any extended area, is a lot to deal with.  It is hard to take this proposal seriously.

Comment by weverka on What 2026 looks like · 2023-01-08T19:14:04.316Z · LW · GW

For the record, I find that scientists make such errors routinely.  In public conferences, when optical scientists propose systems that violate the constant radiance theorem, I have no trouble standing up and saying so.  It happens often enough that when I see a scientist propose such a system, it does not diminish my opinion of that scientist.  I have fallen into this trap myself at times.  Making this error should not be a source of embarrassment.

> either way, you are claiming Sandberg, a physicist who works with thermodynamic stuff all the time, made a trivial error of physics;

I did not expect this to revert to credentialism.  If you were to find out that my credentials exceed this other guy's, would you change your position?  If not, why appeal to credentials in your argument?

Comment by weverka on [Preprint] The Computational Limits of Deep Learning · 2023-01-03T17:30:19.967Z · LW · GW

I stand corrected.  Please forgive me.

Comment by weverka on Why don't Rationalists use bidets? · 2023-01-02T21:50:57.674Z · LW · GW

Last summer I went on a week long backpacking trip where we had to carry out all our used toilet paper.  

This year, I got this bidet for Christmas:  https://www.garagegrowngear.com/products/portable-bidet-by-culoclean

You could carry one with you so you are no longer reliant on having them provided for you.

Comment by weverka on What 2026 looks like · 2022-12-25T15:50:24.250Z · LW · GW

How much extra external energy is required to get an energy flux on Mercury of a billion times that leaving the sun?  I have an idea, but my statmech is rusty.  (The fourth root of a billion?)

And do we have to receive the energy and convert it to useful work with 99.999999999% efficiency to avoid melting the apparatus on Mercury?

Comment by weverka on What 2026 looks like · 2022-12-23T12:33:03.140Z · LW · GW

DK> "I don't see how they are violating the second law of thermodynamics"

Take a large body C and a small body H.  Collect the thermal radiation from C in some manner and deposit that energy on H.  The power density emitted from C grows with its temperature, and the temperature of H grows with the power density deposited on it.  If, without adding external energy, we concentrate the power density from the large body C to a higher power density on the small body H, then H gets hotter than C.  We may then run a heat engine between H and C to make free energy.  This is not possible; therefore we cannot do the concentration.
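
A minimal blackbody sketch of the same point (treating H and C as ideal blackbodies; the notation is mine):

```latex
% Stefan-Boltzmann: each body emits power density sigma*T^4.
P_C = \sigma T_C^{4}, \qquad P_H = \sigma T_H^{4}.
% If a passive device delivered onto H a flux exceeding P_C, then at
% equilibrium \sigma T_H^{4} > \sigma T_C^{4}, i.e. T_H > T_C, and a
% heat engine run between H and C would yield net work with no other
% energy input -- a second-law violation. So passive concentration
% above the source's own power density is impossible.
```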

The étendue argument is just a special case where the concentration is attempted with mirrors or lenses.  Changing the method to a photovoltaic/microwave/rectenna chain doesn't fix the issue, because the argument from the second law is broader: it encompasses any method of concentrating the power density, as shown above.

When we extrapolate exponential growth, we must take care to look for where the extrapolation fails.  Nothing in real life grows exponentially without bound.  "Eternity in Six Hours" relies on orders of magnitude more power than fundamental physical law allows.
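
As a toy illustration of such a limit (made-up numbers): exponential growth against a resource ceiling follows a logistic curve, which departs from the pure exponential well before the ceiling.

```python
import numpy as np

# Toy comparison: pure exponential vs. logistic growth with ceiling K.
m, K, y0 = 1.0, 1e6, 1.0   # made-up growth rate and carrying capacity
t = np.linspace(0, 20, 5)

exponential = y0 * np.exp(m * t)
logistic = K / (1 + (K / y0 - 1) * np.exp(-m * t))

for ti, e, l in zip(t, exponential, logistic):
    print(f"t={ti:4.0f}  exp={e:10.3g}  logistic={l:10.3g}")
# The curves agree early, then the logistic saturates at K while the
# exponential sails on -- the extrapolation fails exactly where the
# limiting resource starts to bind.
```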

Comment by weverka on What 2026 looks like · 2022-12-22T05:50:37.215Z · LW · GW

Thanks for showing that Gwern's statement that I am "bad at reading" is misplaced.

Comment by weverka on What 2026 looks like · 2022-12-22T05:10:27.369Z · LW · GW

The conservation of étendue is merely a particular version of the second law of thermodynamics.  Now you are trying to invoke a multistep photovoltaic/microwave/rectenna method of concentrating energy, but you are still violating the second law of thermodynamics.

If one could concentrate the energy as you propose, one could build a perpetual motion machine.

Comment by weverka on What 2026 looks like · 2022-12-21T14:21:23.324Z · LW · GW

> Kokotajlo writes: Wouldn't that be enough to melt, and then evaporate, the entirety of Mercury within a few hours? After all, isn't that what would happen if you dropped Mercury into the Sun?

How do you get hours?  

Comment by weverka on What 2026 looks like · 2022-12-21T13:52:42.143Z · LW · GW

The sun emits light because it is hot.  You can't concentrate thermal emission to be brighter than the source.  (If you could, you could build a perpetual motion machine.)

Eternity in Six Hours describes very large lightweight mirrors concentrating solar radiation onto planet Mercury.

The most power you could deliver from the sun to Mercury is the power of the sun times the square of the ratio of the radius of Mercury to the radius of the sun.

The total solar output is 4*10^26 Watts.  The Sun's radius is about 285 times Mercury's, so you can focus at most about 5*10^21 Watts onto Mercury.

Figure 2 of Eternity in Six Hours projects getting 10^24 Watts to do the job.
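
As a check on the arithmetic, with standard radii (R_sun ≈ 6.96*10^5 km, R_Mercury ≈ 2.44*10^3 km; my numbers, for illustration):

```latex
P_{\max} = P_{\odot}\left(\frac{R_{\mathrm{Mercury}}}{R_{\odot}}\right)^{2}
         = 4\times10^{26}\,\mathrm{W}\times
           \left(\frac{2.44\times10^{3}\,\mathrm{km}}{6.96\times10^{5}\,\mathrm{km}}\right)^{2}
         \approx 5\times10^{21}\,\mathrm{W}.
% Figure 2's 10^24 W would exceed this radiance-limited bound by a
% factor of roughly 200.
```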

Comment by weverka on What 2026 looks like · 2022-12-21T02:11:35.525Z · LW · GW

I have read Eternity in Six Hours, and I can say that it violates the second law of thermodynamics through a violation of the constant radiance theorem.  The power density they deliver to Mercury exceeds the power density of the radiation exiting the sun by orders of magnitude!

Comment by weverka on Can you control the past? · 2022-12-20T18:22:13.098Z · LW · GW

You should have a look at the conference on retrocausation.  And it would also be valuable to look at Garret Moddel's experiments on the subject.

Comment by weverka on How to Convince my Son that Drugs are Bad · 2022-12-17T21:26:57.258Z · LW · GW
Comment by weverka on How to Convince my Son that Drugs are Bad · 2022-12-17T21:22:39.074Z · LW · GW
Comment by weverka on How is the "sharp left turn defined"? · 2022-12-09T21:55:18.748Z · LW · GW

You draw a right turn; the post is asking about a left turn.

Comment by weverka on How is the "sharp left turn defined"? · 2022-12-09T21:53:32.844Z · LW · GW

You drew a right turn; the post is asking about a left turn.

Comment by weverka on Playing shell games with definitions · 2022-12-08T20:04:07.945Z · LW · GW

Yes, it is.  When I took Feynman's class on computation, he presented an argument on Landauer's limit.  It involved a multi-well quantum potential in which the barrier between the wells was slowly lowered and the well depths adjusted.  During the argument, one of the students asked if he had not just introduced a Maxwell's demon.  Feynman got very defensive.

Comment by weverka on Why I'm Sceptical of Foom · 2022-12-08T19:56:04.813Z · LW · GW

Why assume Gaussian?

Comment by weverka on What are the major underlying divisions in AI safety? · 2022-12-07T14:36:06.479Z · LW · GW

Is it likely to do more good than harm?

Comment by weverka on Godzilla Strategies · 2022-12-04T15:42:26.281Z · LW · GW

Most AI safety criticisms carry a multitude of implicit assumptions.  This argument grants the assumption and attacks the wrong strategy.
We are better off improving a single high-level AI than making a second one.  There is no battle between multiple high-level AIs if there is only one.

Comment by weverka on AGI Impossible due to Energy Constrains · 2022-12-02T13:04:17.529Z · LW · GW

>What would you have said? 

Your comment is stronger without this sentence.

Comment by weverka on [Preprint] The Computational Limits of Deep Learning · 2022-12-02T05:02:08.854Z · LW · GW

Gwern asks, "Why would you do that and ignore (mini literature review follows):"

Thompson did not ignore the papers Gwern cites.  A number of them are in Thompson's tables comparing prior work on scaling.  Did Gwern tweet this criticism without even reading Thompson's paper?

Comment by weverka on AGI Impossible due to Energy Constrains · 2022-12-01T13:17:15.059Z · LW · GW

Reliable?  Your hard disk will be unreadable before long, while the human brain has developed ways to pass information down over generations.

Comment by weverka on AGI Impossible due to Energy Constrains · 2022-12-01T13:14:18.101Z · LW · GW

>I dont think you are calibrated properly about the ideas that are most commonly shared in the LW community. 

This is chastising him for failure to abide by groupthink.
The rest of your comment makes a point that is undermined by this statement.

Comment by weverka on Assuming that at least one religion is true, what would you expect it to be? · 2022-11-26T13:03:46.631Z · LW · GW

FSM

https://www.spaghettimonster.org/2015/08/mars-sighting/

Comment by weverka on Planes are still decades away from displacing most bird jobs · 2022-11-25T18:56:07.652Z · LW · GW

I must disagree.  I roasted a large plane for Thanksgiving yesterday and it was incomparable to a bird.  For tips on brining your plane, see here: https://en.wikipedia.org/wiki/US_Airways_Flight_1549

Comment by weverka on Refining the Sharp Left Turn threat model, part 1: claims and mechanisms · 2022-11-25T16:18:11.545Z · LW · GW

No, humans do not satisfy the assumptions adopted here, unless you make them more specific.

The definition of Generalize is given above as: "Generalizes, i.e., performs well in new domains, which were not optimized for during training, with no domain-specific tuning".

Whether you think humans do this depends on what you take for "new domains" and "perform well".  

Humans taught to crawl on hardwood floors can crawl on carpeted floors.  Humans taught to hunt fly larvae will need further training to hunt big game.

Comment by weverka on Exponential Economist Meets Finite Physicist [link] · 2022-11-23T14:36:14.968Z · LW · GW

When we substitute credentials for reason, we get nowhere.

Comment by weverka on The Doubling Box · 2022-11-23T14:31:35.472Z · LW · GW

1. Open the box when, and only when, you need the utilon.  This simple solution gets you the maximum utilons if you need a utilon, and none if you don't.

Comment by weverka on What is the best source to explain short AI timelines to a skeptical person? · 2022-11-23T13:14:26.777Z · LW · GW

This is difficult for people with no ML background.  The trouble is that one first has to explain timelines, then explain what averages and ranges most researchers in the field maintain, and then explain why some discount those in favor of short AI timelines.  That is a long arc for a skeptical person.

Aren't we all skeptical people?  Carl Sagan said that extraordinary claims require extraordinary evidence.  Explaining a short timeline is a heavy lift by its very nature.