Posts

Evaluating the truth of statements in a world of ambiguous language. 2024-10-07T18:08:09.920Z
What good is G-factor if you're dumped in the woods? A field report from a camp counselor. 2024-01-12T13:17:23.829Z
Hastings's Shortform 2023-02-25T17:05:19.219Z
Medical Image Registration: The obscure field where Deep Mesaoptimizers are already at the top of the benchmarks. (post + colab notebook) 2023-01-30T22:46:31.352Z
What are our outs to play to? 2022-06-18T19:32:10.822Z

Comments

Comment by Hastings (hastings-greer) on Hastings's Shortform · 2024-12-02T13:03:10.349Z · LW · GW
Comment by Hastings (hastings-greer) on Cole Wyeth's Shortform · 2024-11-25T19:52:20.148Z · LW · GW

Beauty of notation is an optimization target and so should fail as a metric, but in my experience it seems to hold up, especially compared to other optimization targets I’ve pushed on. The exceptions appear to be string theory and category theory, and two failures in a field the size of math is not so bad.

Comment by Hastings (hastings-greer) on AI as a powerful meme, via CGP Grey · 2024-11-23T00:37:59.831Z · LW · GW

prompts already go through undesigned evolution through reproductive fitness (rendered in 4k artstation flickr 2014)

Comment by Hastings (hastings-greer) on Which things were you surprised to learn are not metaphors? · 2024-11-22T15:18:28.446Z · LW · GW

Sternum and neck for me

Comment by Hastings (hastings-greer) on Hastings's Shortform · 2024-11-22T14:43:01.332Z · LW · GW

Properties of the track I am on are load-bearing in this assertion. (Explicit examples of both cases from the original comment: Tesla worked out how to destroy any structure by resonating it, and took the details to his grave because he was pretty sure the details would be more useful for destroying buildings than for protecting them from resonating weapons. This didn't actually matter, because his resonating-weapon concept was crankish and wrong. Einstein worked out how to destroy any city by splitting atoms, and disclosed this, and it was promptly used to destroy cities. This did matter because he was right, but maybe didn't matter because lots of people worked out the splitting-atoms thing at the same time. It's hard to tell from the inside whether you are crankish.)

Comment by Hastings (hastings-greer) on Hastings's Shortform · 2024-11-21T18:23:50.384Z · LW · GW

Nuclear power has gotten to a point where we can use it quite safely as long as no one does the thing (the thing being chemically separating the plutonium and imploding it in your neighbors' cities), and we seem to be surviving: while all the actors have put great effort into being ready to do "the thing," no one actually does it. I'm beginning to suspect that it will be worth separating alignment into two fields, one of "Actually make AI safe" and another, sadder but easier field of "Make AI safe as long as no one does the thing." I've made some infinitesimal progress on the latter, but am not sure how to advance, use or share it, since conditional on me being on the right track, any research that I tell basically anyone about will immediately be used to get ready to do the thing, and conditional on me being on the wrong track (the more likely case by far) it doesn't matter either way, so it's all downside. I suspect this is common? This is almost but not quite the same concept as "Don't advance capabilities."

Comment by Hastings (hastings-greer) on Both-Sidesism—When Fair & Balanced Goes Wrong · 2024-11-08T18:51:13.704Z · LW · GW

I have observed a transition. 12 years ago, the left-right split was based on many loosely correlated factors and strategic/inertial effects, creating bizarre situations like near perfect correlation between opinions on Gay Marriage and privatization of social security. I think at that time you could reason much better if you could recognize that the separation between left and right was not natural. I at least have a ton of cached arguments from this era because it became such a familiar dynamic. 

Nowadays, I don't think this old schema really applies, especially among the actual elected officials and party leadership. The effective left-right split is mono-factor: you are "right" exactly in proportion to your personal loyalty to one Donald J. Trump, resulting in bizarre situations like Dick Cheney being classified as "Left."

 

Comment by Hastings (hastings-greer) on Johannes C. Mayer's Shortform · 2024-11-07T14:37:22.013Z · LW · GW

+1 for just throwing your notes up on a website. For example, mine are at https://www.hgreer.com/Reports/ although there is currently a bit of a gap for the last few months, as I've been working more on synthesizing existing work into a CVPR submission than on exploring new directions.

The above is a terrible post-hoc justification and I need to get back to note taking.

Comment by Hastings (hastings-greer) on The hostile telepaths problem · 2024-11-04T21:27:16.941Z · LW · GW

Organizations and communities can also face hostile telepaths. My pet theory, which sort of crystallized while reading this, is that p-hacking is academia’s response to a hostile telepath that banned publication of negative results.

This of course sucks for non-traditional researchers and especially journalists, who don’t even subconsciously know that p=0.05002 r=1e-7 “breakthrough in finding relationship between milk consumption and toenail fungus” is code for “We have conclusively found no effect and want to broadcast to the community that there is no effect here; yet we can never consciously acknowledge that we found nothing, because our mortgages depend on fooling a hostile telepath into believing this is something”

Comment by Hastings (hastings-greer) on The Median Researcher Problem · 2024-11-03T02:37:42.620Z · LW · GW

Personally I am quite pleased with the field of parapsychology. For example, they took a human intuition and experience ("Wow, last night when I went to sleep I floated out of my body. That was real!") and operationalized it into a testable hypothesis ("When a subject capable of out-of-body experiences floats out of their body, they will be able to read random numbers written on a card otherwise hidden to them.") They went and actually performed this experiment, with a good deal of rigor, writing the results down accurately, and got an impossible result: one subject could read the card. (Tart, 1968.) A great deal of effort quickly went into further exploration (including military attention with the Men Who Stare at Goats etc.) and it turned out that the experiment didn't replicate, even though everyone involved seemed to genuinely expect it to. In the end, no, you can't use an out-of-body experience to remotely view, but I'm really glad someone did the obvious experiments instead of armchair philosophizing.

https://digital.library.unt.edu/ark:/67531/metadc799368/m2/1/high_res_d/vol17-no2-73.pdf is a great read from someone who obviously believes in the metaphysical, and then does a great job designing and running experiments and accurately reporting their observations, and so it's really only a small ding against them that the author draws the wrong larger conclusions in the end.

Comment by Hastings (hastings-greer) on The Median Researcher Problem · 2024-11-03T00:19:33.931Z · LW · GW

Show me a field where replication crises tear through, exposing fraud and rot and an emperor that never had any clothes, a field where replications fail so badly that they result in firings and polemics in the New York Times and destroyed careers- and then I will show you a field that is a little confused but has the spirit and will get there sooner or later.

What you really need to look out for are fields that could never, on a conceptual level, have a devastating replication crisis. Lesswrong sometimes strays a little close to this camp.

Comment by Hastings (hastings-greer) on Alexander Gietelink Oldenziel's Shortform · 2024-10-31T20:05:38.984Z · LW · GW

Since you’re already in it: do you happen to know if the popular system of epicycles accurately represented the (relative, per body) distance of each planet from earth over time, or just the angle? I’ve been curious about this for a while but haven’t had time to dig in. They’d at minimum have to get it right for the moon and sun for predicting eclipse type.

Comment by Hastings (hastings-greer) on Gwern: Why So Few Matt Levines? · 2024-10-29T13:58:23.672Z · LW · GW

After reading this, I prompted Claude with

Please write a parody of chapter 3 of the 1926 winnie the pooh, where instead of winnie and piglet searching for a woozle, some bloggers are looking for bloggers similar to matt levine, and not realizing that they are the bloggers who are similar to matt levine. This will be a humorous reply to the attached post.

Comment by Hastings (hastings-greer) on Zach Stein-Perlman's Shortform · 2024-10-25T11:37:07.580Z · LW · GW

Arxiv is basically one huge, glacially slow internet comment section, where you reply to an article by citing it. It’s more interactive than it looks- most early career researchers are set up to get a ping whenever they are cited.

Comment by Hastings (hastings-greer) on Could randomly choosing people to serve as representatives lead to better government? · 2024-10-21T22:36:30.815Z · LW · GW

Keep in mind that representative democracy as practiced in the US is doing as well as it is while holding up to hundreds of millions of dollars of destructive pessimization effort- any alternative system is going to be hit with similar efforts. Just off the top of my head: we are being hit with about $50 per capita of spending this fall, and that's plenty to brain-melt a meaningful fraction of the population. Each member of a 500-member sortition body choosing a president, if their identity is leaked, is going to be immediately hit with on the order of $30 million of attempts to change their mind. This is a different environment than the calm deliberation and consideration of the issues examined by the linked studies.

(figures computed by dividing 2024 election spending by targeted population)

Comment by Hastings (hastings-greer) on The Mysterious Trump Buyers on Polymarket · 2024-10-18T17:05:22.405Z · LW · GW

What are the odds that Polymarket resolves “Trump yes” and Harris takes office in 2025? If these mystery traders expect to profit from hidden information, the hidden information could be about an anticipated failure of UMA instead of about the election itself.

Comment by Hastings (hastings-greer) on Open Thread Fall 2024 · 2024-10-18T02:32:14.154Z · LW · GW

Are there any mainstream programming languages that make it ergonomic to write high level numerical code that doesn't allocate once the serious calculation starts? So far for this task C is by far the best option but it's very manual, and Julia tries and does pretty well but you have to constantly make sure that the compiler successfully optimized away the allocations that you think it optimized away. (Obviously Fortran is also very good for this, but ugh)
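Python with numpy can get partway there, at the cost of some ceremony: preallocate every buffer before the serious calculation starts, and route each operation through the `out=` parameter so the hot loop itself never allocates. A minimal sketch of the pattern (a toy explicit-Euler step, not a real integrator):

```python
import numpy as np

def step(x, v, dt, scratch, out_x, out_v):
    """One integration step with no allocation: every operation
    writes into a preallocated buffer via numpy's out= parameter."""
    np.multiply(v, dt, out=scratch)   # scratch = v * dt
    np.add(x, scratch, out=out_x)     # out_x = x + v * dt
    np.multiply(x, -dt, out=scratch)  # scratch = -x * dt  (toy force: a = -x)
    np.add(v, scratch, out=out_v)     # out_v = v - x * dt

# preallocate everything up front
n = 1000
x, v = np.ones(n), np.zeros(n)
scratch, new_x, new_v = np.empty(n), np.empty(n), np.empty(n)
for _ in range(100):
    step(x, v, 1e-3, scratch, new_x, new_v)
    x, new_x = new_x, x  # swap buffers instead of allocating
    v, new_v = new_v, v
```

It works, but like the Julia case you have to be disciplined: one stray `x + v` instead of an `out=` call silently reintroduces an allocation per iteration, and nothing warns you.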

Comment by Hastings (hastings-greer) on Why Academia is Mostly Not Truth-Seeking · 2024-10-16T19:46:01.657Z · LW · GW

To say that most academic research is anything, you’re going to have to pick a measure over research. Uniform measure is not going to be exciting – you’re going to get almost entirely undergraduate assignments and Third World paper mills. If your weighted sampler is “papers linked in articles about how academia is woke” you’re going to find a high %fake. If your weighted measure is “papers read during work hours by employees at F500 companies” you’ll find a lower, nonzero %fake.

Handwringing over public, vitriolic retraction spats is going to fuck your epistemology via sampling bias. There is no replication crisis in underwater basket weaving.

Comment by Hastings (hastings-greer) on Evaluating the truth of statements in a world of ambiguous language. · 2024-10-07T19:56:57.740Z · LW · GW

Yeah, I definitely oversimplified somewhere. I'm definitely tripped up by "this statement is false" or statements that don't terminate. Worse, thinking in that direction, I appear to have claimed that the utterance "What color is your t-shirt" is associated with a probability of being true. 

Comment by Hastings (hastings-greer) on Is Text Watermarking a lost cause? · 2024-10-02T14:04:12.452Z · LW · GW

I think that your a-before-e example is confusing your intuition- a typical watermark that occurs 10% of the time isn't going to be semantic, it's more like "this n-gram hashed with my nonce == 0 mod 10"
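A minimal sketch of what such a non-semantic watermark check might look like (the nonce, hash choice, and modulus here are all made up for illustration):

```python
import hashlib

NONCE = b"my-secret-nonce"  # hypothetical per-model watermark key

def is_marked(ngram: str, modulus: int = 10) -> bool:
    """An n-gram carries the watermark if its keyed hash is 0 mod
    `modulus` -- so ~10% of n-grams are marked, with no semantic
    pattern (like a-before-e) that a reader could ever notice."""
    digest = hashlib.sha256(NONCE + ngram.encode()).digest()
    return int.from_bytes(digest, "big") % modulus == 0

# roughly 1 in 10 n-grams end up marked
marked = sum(is_marked(f"token_{i} token_{i+1}") for i in range(10000))
```

The detector then just counts marked n-grams in a suspect text: an honest 10% baseline vs a significantly elevated rate is a statistical signal, invisible to intuition about word choice.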

Comment by Hastings (hastings-greer) on Open Thread Summer 2024 · 2024-09-30T15:22:44.404Z · LW · GW

I'm at this point pretty confident that under the Copenhagen interpretation, whenever an intergalactic photon hits earth, the wave-function collapse takes place on a semi-spherical wave-front many millions of lightyears in diameter. I'm still trying to wrap my head around what the interpretation of this event is in many-worlds. I know that it causes earth to pick which world it is in out of the possible worlds that split off when the photon was created, but I'm not sure if there is any event on the whole spherical wavefront.

It's not a pure hypothetical- we are likely to see gravitational lens interferometry in our lifetime (if someone hasn't achieved it yet outside of my attempt at literature review) which will either confirm that these considerations are real, or produce a shock result that they aren't.

Comment by Hastings (hastings-greer) on 2024 Petrov Day Retrospective · 2024-09-29T13:01:57.096Z · LW · GW

One feature of every lesswrong Petrov day ritual is the understanding that the people on the other side of the button have basically similar goals and reasoning processes, especially when aggregated into a group. I wonder if the mods at /r/sneerclub would be interested in a Petrov day collaboration in the future.

Comment by Hastings (hastings-greer) on Why is o1 so deceptive? · 2024-09-28T12:38:13.564Z · LW · GW

Does it ever fail to complete a proof, and honestly announce failure? Once, I got Claude to disprove a statement I had asked it to prove (it searched for a proof and instead found a disproof), but I’ve never had it try for a while and then announce that it has made no progress either way.

Comment by Hastings (hastings-greer) on Yoav Ravid's Shortform · 2024-09-25T12:36:59.575Z · LW · GW

The funniest possible outcome is that no one opts in and so the world is saved but the blog post is ruined.

I would hate to remove the possibility of a funny outcome. No opt in!

Comment by Hastings (hastings-greer) on Struggling like a Shadowmoth · 2024-09-25T11:22:53.678Z · LW · GW

I greatly enjoyed this book back in the day, but the whole scenario was wild enough to summon the moral immune system. Past a certain point, for me it’s a safe default to put up mental barriers and actively try not to learn moral lessons from horror fiction. Worm, Gideon the 9th, anything by Stephen King- great, but I don’t quite expect to learn great lessons.

While rejecting them as sources of wisdom now, I can remember these books and return to them if I suddenly need to make moral choices in a world where people can grow wiser by being tortured for months, or stronger by killing and then mentally fusing with their childhood friend, or achieve coordination by mind-controlling their entire community and spending their lives like pawns.

Comment by Hastings (hastings-greer) on Applications of Chaos: Saying No (with Hastings Greer) · 2024-09-22T11:22:11.599Z · LW · GW

This is a good point! As a result of this effect and Jensen’s inequality, chaos is a much more significant limit on testing CUDA programs than on, for example, C++ programs

Huang

Comment by Hastings (hastings-greer) on Applications of Chaos: Saying No (with Hastings Greer) · 2024-09-22T11:19:53.135Z · LW · GW

I enjoyed doing this interview. I haven’t done too much extemporaneous public speaking, and it was a weird but wonderful experience being on the other side of the youtube camera. Thanks Elizabeth!

Comment by Hastings (hastings-greer) on Applications of Chaos: Saying No (with Hastings Greer) · 2024-09-21T21:43:11.032Z · LW · GW
  • If a trebuchet requires you to solve the double pendulum problem (a classic example of a chaotic system) in order to aim, it is not a competition-winning trebuchet.

Ah, this is not quite the takeaway- and getting the subtlety here right is important for larger conclusions. If simulating a trebuchet requires solving the double pendulum problem over many error-doublings, it is not a competition-winning trebuchet. This is an important distinction.

If you start with a simulator and a random assortment of pieces, and then start naively optimizing for pumpkin distance, you will quickly see the sort of design shown at 5:02 in the video, where the resulting machine is unphysical because its performance depends on coincidences that will go away in the face of tiny changes in initial conditions. This behaviour shows up with a variety of simulators and optimizers.

An expensive but probably effective solution is to perturb a design several times, simulate it several times, and stop simulation once the simulations diverge. 

An ineffective solution is to limit the simulated time before firing, as many efficient real-world designs take a long time to fire, because they begin with the machine slowly falling away from an unstable equilibrium.

The chaos-theory motivated cheap solution is to limit the number of rotations of bodies in the solution before terminating it, as experience shows error doublings tend to come from rotations in trebuchet-like chaotic systems. 

The solution I currently have implemented at jstreb.hgreer.com is to only allow the direction of the projectile to rotate once before firing (specifically, it is released if it is moving upwards and to the right at a velocity above a threshold) which is not elegant, but seems mostly effective. I want to move to the "perturb and simulate several times" approach in the future.
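The perturb-and-simulate idea can be sketched with a toy chaotic "simulator" (the logistic map standing in for the trebuchet physics; the function names and tolerances here are made up):

```python
def simulate(design, steps):
    """Toy stand-in for a trebuchet simulator: the logistic map,
    which is chaotic at r = 3.9. `design` is the initial state."""
    x, traj = design, []
    for _ in range(steps):
        x = 3.9 * x * (1.0 - x)
        traj.append(x)
    return traj

def trustworthy_horizon(design, eps=1e-9, tol=1e-2, steps=200):
    """Simulate the design and a slightly perturbed copy; return the
    step at which the two trajectories diverge. Any 'performance'
    the optimizer finds past this horizon is a chaos artifact."""
    a = simulate(design, steps)
    b = simulate(design + eps, steps)
    for n, (xa, xb) in enumerate(zip(a, b)):
        if abs(xa - xb) > tol:
            return n
    return steps

horizon = trustworthy_horizon(0.2)
```

In the real optimizer the fitness function would then only score the trajectory up to `horizon`, so designs relying on post-divergence coincidences get no credit.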

Comment by Hastings (hastings-greer) on Pronouns are Annoying · 2024-09-18T16:31:08.745Z · LW · GW

The structure of language kind of screwed us here. Picture literally any reasonable policy for discussing each other’s religious affiliation in the workplace. Now implement that policy, but your workers speak a language where the grammar only functions if you know whether each referent is a “True” christian.

Comment by Hastings (hastings-greer) on You don't get to have cool flaws · 2024-09-12T18:49:44.811Z · LW · GW

export INSTRUMENTAL_GOAL=change_yourself

Keep track of your past attempts to $INSTRUMENTAL_GOAL, so that you can better predict whether your future attempts to $INSTRUMENTAL_GOAL will succeed, and so better choose between plans that require $INSTRUMENTAL_GOAL and plans that route around it.

Comment by Hastings (hastings-greer) on you should probably eat oatmeal sometimes · 2024-09-10T17:27:08.707Z · LW · GW

I didn't catch on at all that this was humor, and as a result made a point to pick up oatmeal next time I was at the grocery. I do actually like oatmeal, I just hadn't thought about it in a while. It has since made for some pretty good breakfasts. 

This whole sequence of events is either deeply mundane or extremely funny, I genuinely can't tell. If it's funny it's definitely at my expense.

Comment by Hastings (hastings-greer) on Has Anyone Here Consciously Changed Their Passions? · 2024-09-09T19:06:34.780Z · LW · GW

Ahah! I suspect that permission to start from scratch may be a large component of maintaining passion. Starting from scratch at will is pretty close to the exact boundary between programming I do for fun and programming for which I demand payment. 

Comment by Hastings (hastings-greer) on Open Thread Summer 2024 · 2024-09-09T18:14:18.492Z · LW · GW

Today I realized I am free to make the letters in an einsum string meaningful (b for batch, x for horizontal index, y for vertical index etc) instead of just choosing ijkl.
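For example (assuming numpy; the axis letters are just my own convention):

```python
import numpy as np

batch, y, x, c = 2, 4, 5, 3
images = np.random.rand(batch, y, x, c)  # b=batch, y=row, x=column, c=channel
filters = np.random.rand(c, 7)           # c=channel, f=output filter

# the subscript string now documents itself
out = np.einsum("byxc,cf->byxf", images, filters)

# identical to the traditional opaque spelling
out_ijkl = np.einsum("ijkl,lm->ijkm", images, filters)
```

Any letters work as long as they're used consistently within the one string, so nothing stops them from carrying meaning.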

Comment by Hastings (hastings-greer) on Has Anyone Here Consciously Changed Their Passions? · 2024-09-09T14:29:12.675Z · LW · GW

I haven't been able to change my passion, but I have faced a similar issue and found that if I occasionally take stock of my semi-abandoned long term projects, I often notice that I have passion for one of them again. As a result, several have come to something resembling completion over the years, and often over many passion-dispassion cycles. The key then becomes documentation and good storage and organization, to minimize the difficulty of starting up again. I feel that this has made my passion for projects more durable, because there is no longer a sense of panic if it starts to fade- I expect it to return if it is needed.

Comment by Hastings (hastings-greer) on Double's Shortform · 2024-09-07T16:19:24.488Z · LW · GW

The "tiny explosions" mental model doesn't make new predictions in the way that the Carnot model does, but it does encode and compress an enormous amount of useful pre-discovered information. For example, that a car engine is hot like fire and will burn you, that if you mix gasoline and air and light it, it will explode, that a car engine will be made of strong stuff, that a car engine is in something of a delicate engineered balance, and if you make large changes to it, it will typically become extremely loud and catch fire. I think this is enough to distinguish the "tiny explosions" model from typical "guess the teacher's password" knowledge.

 

Comment by Hastings (hastings-greer) on Hastings's Shortform · 2024-08-30T13:04:26.558Z · LW · GW

A consistent trope in dath-ilani world-transfer fiction is "Well the theorems of agents are true in dath ilani and independent of physics, so they're going to be true here damnit"

How do we violate this in the most consistent way possible?

Well, it's basically default that a dath ilani gets dropped in a world without the P vs NP distinction, usually due to time travel BS. We can make it worse: there's no rule that sapient beings have to exist in worlds with the same model of the Peano axioms. We pull some flatlander shit: Keltham names a Turing machine that would halt if two smart agents fall off the Peano frontier, and claims to have a proof it never halts. Then the native math-lander chick says "nah, watch this," and together they iterate the machine for a very, very long time (a non-standard integer number of steps) and then it halts, and Keltham (A) just subjectively experienced an integer larger than any natural number of his homeworld and (B) has a counterexample to his precious theorems

Comment by Hastings (hastings-greer) on Eli's shortform feed · 2024-08-29T20:35:57.934Z · LW · GW

With a grain of salt,

There’s a sort of quiet assumption that should be louder about the dath ilan fiction: it’s about a world where a bunch of theorems like “as systems of agents get sufficiently intelligent, they gain the ability to coordinate in prisoner’s dilemma-like problems” have proofs. You could similarly write fiction set in a world where P=NP has a proof and all of cryptography collapses. I’m not sure whether EY would guess that sufficiently intelligent agents actually coordinate, just as I could write the P=NP fiction while being pretty sure that P≠NP

Comment by Hastings (hastings-greer) on Eli's shortform feed · 2024-08-28T20:46:13.656Z · LW · GW

What you’ve hit upon is “BATNA,” or “Best alternative to a negotiated agreement.” Because the robbers can get what they want by just killing the farmers, the dath ilani will give in- and from what I understand, Yudkowsky therefore doesn’t classify the original request (give me half your wheat or die) as a threat.

This may not be crazy- it reminds me of the Ancient Greek social mores around hospitality, which seem insanely generous to a modern reader but I guess make sense if the equilibrium number of roving <s>bandits</s> honored guests is kept low by some other force

Comment by Hastings (hastings-greer) on Quick look: applications of chaos theory · 2024-08-28T18:03:18.328Z · LW · GW

So this turns out to be a doozy, but it's really fascinating. I don't have an answer. An answer would look like "normal chaotic differential equations don't have general exact solutions" or "there is no relationship between being chaotic and not having an exact solution," but deciding which is which wouldn't just require proof; it would also require good definitions of "normal differential equation" and "exact solution." (The good definition of "general" is "initial conditions with exact solutions have nonzero measure.") I have done some work.

A chaotic differential equation has to be nonlinear and at least third order- and almost all nonlinear third order differential equations don't admit general exact solutions. So, the statement "as a heuristic, chaotic differential equations don't have general exact solutions" seems pretty unimpressive. However, I wrongly believed the strong version of this heuristic and that belief was useful: I wanted to model trebuchet arm-sling dynamics, recognized that the true form could not be solved, and switched to a simplified model based on what simplifications would prevent chaos (no gravity, sling is wrapped around a drum instead of fixed to the tip of an arm) and then was able to find an exact solution (note that this solvable system starts as nonlinear 4th order, but can be reduced using conservation of angular momentum hacks)

Now, it is known that a chaotic difference equation can have an exact solution: the map x(n+1) = 2x(n) mod 1 is formally chaotic and has the exact solution x(n) = 2^n x(0) mod 1. A chaotic differential equation can have an exact solution if it has discontinuous derivatives, because this difference equation can be embedded in one:

The equation is in three variables x, y, z, with dz/dt = 1 always.

if 0 < z < 1:
    if x > 0:
        dx/dt = 0
        dy/dt = 1
    if x < 0:
        dx/dt = 0
        dy/dt = -1

if 1 < z < 2:
    if y > 0:
        dx/dt = -0.5
        dy/dt = 0
    if y < 0:
        dx/dt = 0.5
        dy/dt = 0

if 2 < z < 3:
    dx/dt = x ln(2)
    dy/dt = -y/(3 - t)

and then make it periodic by gluing z=0 to z=3 in phase space. (This is pretty similar to the structure of the Lorenz attractor, except that in the Lorenz system, the sheets of solutions get close together but don't actually meet.) This is an awful, weird ODE: the derivative is discontinuous, and not even bounded near the point where the sheets of solutions merge.
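The exact solution of the doubling map is easy to check directly, as long as you use exact rational arithmetic (floating point collapses to exactly zero after ~53 doublings mod 1):

```python
from fractions import Fraction

def iterate(x0, n):
    """Iterate the chaotic map x_{k+1} = 2 x_k mod 1, exactly."""
    x = x0
    for _ in range(n):
        x = (2 * x) % 1
    return x

def exact_solution(x0, n):
    """Closed form: x_n = 2^n x_0 mod 1."""
    return (2 ** n * x0) % 1

x0 = Fraction(1, 7)
assert all(iterate(x0, n) == exact_solution(x0, n) for n in range(50))
```

Starting from 1/7 the orbit is periodic with period 3 (1/7 → 2/7 → 4/7 → 1/7), which the closed form reproduces, and it is still formally chaotic: nearby irrational starting points separate at one bit per step.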

Plenty of prototypical chaotic differential equations have a sprinkling of exact solutions: e.g, three bodies orbiting in an equilateral triangle- hence the requirement for a "general" exact solution.

The three body problem "has" an "exact" "series" "solution" but it appears to be quite effed: for one thing, no one will tell me the coefficient of the first term. I suspect that in fact the first term is calculated by solving the motion for all time, and then finding increasingly good series approximations to that motion.

I strongly suspect that the correct answer to this question can be found in one of these stack overflow posts, but I have yet to fully understand them:

https://physics.stackexchange.com/questions/340795/why-are-we-sure-that-integrals-of-motion-dont-exist-in-a-chaotic-system?rq=1


https://physics.stackexchange.com/questions/201547/chaos-and-integrability-in-classical-mechanics

There are certainly billiards with chaotic and exactly solvable components- if nothing else, place a circular billiard next to an oval. So, for the original claim to be true in any meaningful way, this may have to involve excluding all differential equations with case statements- which sounds increasingly unlike a true, fundamental theorem.

If this isn't an open problem, then there is somewhere on the internet a chaotic, normal-looking system of ODEs (it would have aesthetics like x'''' = sin(x''') - x'y''', y' = (1 - y/x') etc.) posted next to a general exact solution, perhaps only valid for non-chaotic initial conditions, or a proof that no such system exists. The solvable system is probably out there and related to billiards.


Final edit: the series solution to the three body problem is legit mathematically, see page 64 here


https://ntrs.nasa.gov/citations/19670005590

So “can’t find general exact solution to chaotic differential equation” is just uncomplicatedly false

Comment by Hastings (hastings-greer) on Open Thread Summer 2024 · 2024-08-27T18:27:43.163Z · LW · GW

I want to run code generated by an llm totally unsupervised

Just to get in the habit, I should put it in an isolated container in case it does something weird

Claude, please write a python script that executes a string as python code in an isolated docker container.

Comment by Hastings (hastings-greer) on robo's Shortform · 2024-08-27T11:27:59.717Z · LW · GW

If you do set out on this quest, Bell's inequality and friends will at least put hard restrictions on where you could look for a rule underlying seemingly random wave function collapse. The more restricted your search, the sooner you'll find a needle!

Comment by Hastings (hastings-greer) on Quick look: applications of chaos theory · 2024-08-20T23:10:30.900Z · LW · GW

I am suddenly unsure whether it is true! It certainly would have to be more specific than how I phrased it, as it is trivially false if the differential equation is allowed to be discontinuous between closed form regions and chaotic regions

Comment by Hastings (hastings-greer) on Quick look: applications of chaos theory · 2024-08-19T14:58:55.244Z · LW · GW

Sometimes!

https://sohl-dickstein.github.io/2024/02/12/fractal.html

Comment by Hastings (hastings-greer) on Quick look: applications of chaos theory · 2024-08-19T14:37:10.919Z · LW · GW

Differential equation example: I wanted a closed form solution of the range of the simplest possible trebuchet- just a seesaw. This is perfectly achievable, see for example http://ffden-2.phys.uaf.edu/211.fall2000.web.projects/J%20Mann%20and%20J%20James/see_saw.html. I wanted a closed form solution of the second simplest trebuchet, a seesaw with a sling. This is impossible, because even though the motion of the trebuchet with sling isn't chaotic during the throw, it can be made chaotic by just varying the initial conditions, which rules out a simple closed form solution for non-chaotic initial conditions.

Lyapunov exponent example: for the bouncing balls, if each ball travels 1 diameter between bounces, then a change in velocity angle of 1 degree pre-bounce becomes a change in angle of 4 degrees post bounce (this number may be 6- I am bad at geometry), so the exponent is 4 if time is measured in bounces.
 

Comment by Hastings (hastings-greer) on Quick look: applications of chaos theory · 2024-08-19T00:21:08.764Z · LW · GW

The good news is that chaos theory can rule out solutions with extreme prejudice- and because it's a formal theory, it lets you be very clear about whether it's ruling out a solution absolutely (FluttershAI and Clippy combined aren't going to be able to predict the weather a decade in advance) vs ruling out a solution in all practicality, but teeeechnically (i.e., predicting 4-5 swings of a double pendulum). Here are the concrete examples that come to mind:

I wrote a CPU N-body simulator, and then ported it to CUDA. I can't test that the port is correct by comparing long trajectories of the CPU simulator to the CUDA simulator, and because I know this is a chaos problem, I won't try to fix this by adding epsilons to the test. Instead, I will fix this by running the test simulation for less than roughly one Lyapunov time.
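A toy version of this testing principle, with the logistic map standing in for the N-body dynamics and float32 vs float64 standing in for the two ports (one rounds differently, exactly like a CUDA kernel with reordered reductions):

```python
import numpy as np

def traj64(x0, steps):
    x, out = x0, []
    for _ in range(steps):
        x = 4.0 * x * (1.0 - x)  # chaotic logistic map, float64
        out.append(x)
    return out

def traj32(x0, steps):
    x, out = np.float32(x0), []
    for _ in range(steps):
        x = np.float32(4.0) * x * (np.float32(1.0) - x)  # same map, float32
        out.append(float(x))
    return out

a, b = traj64(0.3, 100), traj32(0.3, 100)
err = [abs(xa - xb) for xa, xb in zip(a, b)]
# the two "ports" agree early, within a Lyapunov time or so...
early_ok = err[5] < 1e-4
# ...but comparing long trajectories is hopeless: rounding error reaches O(1)
late_diverged = max(err[60:]) > 0.1
```

So the test asserts agreement only over the early window; past that, agreement is neither expected nor meaningful.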

I wrote a genetic algorithm for designing trebuchets. The final machine, while very efficient, is visibly using precise timing after several cycles of a double pendulum. Therefore, I know it can't be built in real life. 

I see a viral gif of black and white balls being dumped into a pile so that, when they come to rest, they form a smiley face. I know it's fake, because that's not something that can be achieved even with careful orchestration.

I prove a differential equation is chaotic, so I don't need to try to find a closed form solution.

One thing that jumps out, writing this out explicitly, is that chaos theory could conceivably be replaced with the intuition "well, obviously that won't work," so I don't know to what extent chaos theory merely formalized wisdom that existed pre-1950s versus generated wisdom that got incorporated into modern common sense. Either way, having it formalized is nice - in particular, the "can't test end-to-end simulations by direct comparison" and "can't find a closed-form solution" cases saved me a lot of time.

Comment by Hastings (hastings-greer) on Quick look: applications of chaos theory · 2024-08-18T20:28:59.262Z · LW · GW

Hi! I seem to run into chaos relatively often in practice. It's extremely useful, but not likely to have flagship applications, because it mostly serves to rule out solutions. The workflow looks like:

"I have an idea! It is very brilliant"

"Your idea is wonderful, but it's probably fucked by chaos. Calculate a Lyapunov exponent"

calculates Lyapunov exponent

"fuck"

But this is of course much better than trying the idea for weeks or months without a concept of why it's impossible.
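The "calculate a Lyapunov exponent" step is often a one-liner. For a 1-D map it is just the average log-derivative along a trajectory; here is the standard logistic-map example at r = 4, whose exponent is known exactly (ln 2 ≈ 0.693):

```python
import math

def logistic_lyapunov(r=4.0, x0=0.2, burn=1000, n=20_000):
    """Estimate the Lyapunov exponent of x -> r*x*(1-x) by averaging
    log|f'(x)| = log|r*(1 - 2x)| along the trajectory."""
    x = x0
    for _ in range(burn):                 # discard the transient
        x = r * x * (1.0 - x)
    total = 0.0
    for _ in range(n):
        total += math.log(abs(r * (1.0 - 2.0 * x)))
        x = r * x * (1.0 - x)
    return total / n                      # nats per iteration; > 0 means chaos
```

A positive result is the formal version of the "fuck" step above: nearby initial conditions separate by a factor of e^λ per iteration, so prediction horizons are logarithmic in your measurement precision.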

Comment by Hastings (hastings-greer) on Debate: Get a college degree? · 2024-08-13T16:19:52.066Z · LW · GW

A different perspective: colleges very, very badly want you to graduate - especially if you look like you have been doing something other than playing videogames high on weed in your apartment for four years. The upshot is that in the case suggested (top 1% IQ, top 50% conscientiousness), past a threshold of maybe 5 hours a week, any effort put specifically towards graduating is basically wasted - going to college with the goal of graduating is severely, severely underdetermined. Take chemistry classes! Take physics classes! Take graduate math classes without the prerequisites and fail them! Calculate which essays you don't strictly have to turn in! Build a rocket ship or a race car! Found a startup! Practice bullying administrators into giving you class credit for all of the above!

What college is providing you is 35 hours a week of working time to do with as you please, access to 3-D printers, a machine shop, math classes, the local supercomputer, a chemistry lab, oscilloscopes and signal generators, and zero unemployment stigma. The marginal cost of also getting the credentials while you are there is tiny.

When push comes to shove, you cannot spend 4 years on the goal "graduate from college." There are very few tasks that you can achieve without a college degree that would be significantly more difficult to achieve while getting a college degree.

(There are also degrees which you absolutely can't get with 5 hours a week of effort. Selecting one of them is a choice, and frankly, these are not degrees that you are going to successfully self teach.)

Comment by Hastings (hastings-greer) on J's Shortform · 2024-08-11T20:49:00.343Z · LW · GW

I consistently get upvotes and lots of disagrees when I post thoughts on alignment, which is much more encouraging than downvotes.

Comment by Hastings (hastings-greer) on Reality Testing · 2024-07-13T17:26:23.389Z · LW · GW

Today many of us are farther away from ground truth. The internet is an incredible means of sharing and discovering information, but it promotes or suppresses narratives based on clicks, shares, impressions, attention, ad performance, reach, drop off rates, and virality - all metrics of social feedback.  As our organizations grow larger, our careers are increasingly beholden to performance reviews, middle managers' proclivities, and our capacity to navigate bureaucracy. We find ourselves increasingly calibrated by social feedback and more distant from direct validation or repudiation of our beliefs about the world.

I seek a way to get empirical feedback on this set of claims - specifically, the direction-of-change-over-time assertions "farther... increasingly... more distant..."

Comment by Hastings (hastings-greer) on Open Thread Summer 2024 · 2024-07-09T12:11:36.468Z · LW · GW

Yeah, in the lightcone scenario evolution probably never actually aligns the inner optimizers - although it may align them, as a superintelligence copying itself will have little leeway for any of those copies having slightly more drive to copy themselves than their parents. Depends on how well it can fight robot cancer.

However, while a cancer free paperclipper wouldn't achieve "AGIs take over the lightcone and fill it with copies of themselves, to at least 90% of the degree to which they would do so if their terminal goal was filling it with copies of themselves," they would achieve something like "AGIs take over the lightcone and briefly fill it with copies of themselves, to at least 10^-3% of the degree to which they would do so if their terminal goal was filling it with copies of themselves" which is in my opinion really close. As a comparison, if Alice sets off Kmart AIXI with the goal of creating utopia we don't expect the outcome "AGIs take over the lightcone and convert 10^-3% of it to temporary utopias before paperclipping."

Also, unless you beat entropy, for almost any optimization target you can trade "fraction of the universe's age during which your goal is maximized" against "fraction of the universe in which your goal is optimized" since it won't last forever regardless. If you can beat entropy, then the paperclipper will copy itself exponentially forever.