Posts

Effective Altruists and Rationalists Views & The case for using marketing to highlight AI risks. 2024-04-19T04:16:15.016Z
Linkpost for Accursed Farms Discussion / debate with AI expert Eliezer Yudkowsky 2023-05-05T18:20:20.004Z
An Apprentice Experiment in Python Programming, Part 4 2021-08-31T07:56:04.711Z
An Apprentice Experiment in Python Programming, Part 3 2021-08-16T04:42:31.948Z
An Apprentice Experiment in Python Programming, Part 2 2021-07-29T07:39:47.468Z
An Apprentice Experiment in Python Programming 2021-07-04T03:29:13.251Z
Should I take glucosamine? 2020-12-02T05:20:04.734Z
Charting Is Mostly Superstition 2020-08-23T20:26:15.474Z
Market Misconceptions 2020-08-20T04:46:21.991Z
The Wrong Side of Risk 2020-08-16T03:50:03.900Z
How to Lose a Fair Game 2020-08-14T18:41:13.638Z
Repeat Until Broke 2020-08-13T07:19:30.156Z
You Need More Money 2020-08-12T06:16:09.827Z
When is the right time to hole up? 2020-03-14T21:42:39.523Z
Do 24-hour hand sanitizers actually work? 2020-03-01T20:41:43.288Z
gilch's Shortform 2019-08-26T01:50:09.933Z
Stupid Questions June 2017 2017-06-10T18:32:57.352Z
Stupid Questions May 2017 2017-04-25T20:28:53.797Z
Open thread, Apr. 24 - Apr. 30, 2017 2017-04-24T19:43:36.697Z
Open thread, Apr. 17 - Apr. 23, 2017 2017-04-18T02:47:46.389Z
Cheating Omega 2017-04-13T03:39:10.943Z

Comments

Comment by gilch on hydrogen tube transport · 2024-04-19T20:07:38.710Z · LW · GW

Hydrogen can only burn in the presence of oxygen. The pipe does not contain any, and combustion isn't possible until after they have had time to mix. It's also not going to explode from the pressure, because it's the same as the atmosphere. The shaped charge is obviously going to explode, that's the point, but it will be more directional. That still doesn't sound safe in an enclosed space. Maybe the vehicle could deploy a gasket seal with airbags or something to reduce the leakage of expensive hydrogen.

Comment by gilch on hydrogen tube transport · 2024-04-19T19:59:47.894Z · LW · GW

Condensation is not just possible but would happen by default. You described the tubes as steel lined with aluminum in contact with the ground, if not buried. That's going to be consistently cool enough for passive condensation.

Getting water out of a long tube shouldn't be hard with multiple drains, and if there's any incline, you just need them at the bottom. You can just dump it in the ground. Use a plumbing trap to keep the gasses separated; they're at equal pressure, so this should work. The pressure can also be maintained mostly passively with hydrogen bladders exposed to the atmosphere on the outside, although the burned hydrogen will have to be regenerated before they empty completely. That regeneration can be done anywhere on the pipe. Hydrogen can be easily regenerated by electrolysis of water, which doesn't seem any more expensive than charging the batteries. It might be even cheaper to crack it off of natural gas or to use white hydrogen when available.

Are turbines more expensive than electric motors for similar power? It's true that conventional piston engines are heavy, but batteries are also heavy, especially the cheaper chemistries.

Alternatively, run electricity through the pipe to power the vehicles so they don't have to carry any extra weight for power. It's coated with conductive aluminum already. If half-pipes could be welded with a dielectric material and not cost any more, that would work. Or use an internal monorail, but maybe only if you were going to do that already. Or you could suspend a wire. That's got to be pretty cheap compared to the pipe itself.

Comment by gilch on hydrogen tube transport · 2024-04-19T03:29:00.477Z · LW · GW

A vehicle in a hydrogen-filled tube can't use air around it for engines

Why not? Your "fuel" tanks could simply carry oxygen to burn the surrounding hydrogen "air" with.

and shouldn't emit exhaust.

Exhaust would be water vapor, easily removed even passively via condensation and drains. Hydrogen will (of course) have to be replaced to maintain pressure.

Comment by gilch on Reconsider the anti-cavity bacteria if you are Asian · 2024-04-18T17:18:16.967Z · LW · GW

The oral flora contains a diversity of organisms, bacterial, viral, and fungal, including some yeasts. Don't some of them produce ethanol already? And adhere to the mucous membrane instead of enamel? It wouldn't surprise me if some even excrete acetaldehyde under some conditions. Is Lumina really such a change?

Comment by gilch on Open Thread Spring 2024 · 2024-04-18T16:40:04.135Z · LW · GW

I didn't say "cavity"; I said, "tooth decay". No-one is saying remineralization can repair a chipped, cracked, or caved-in tooth. But this dentist does claim that the decay (caries) can be reversed even after it has penetrated the enamel and reached the dentin, although it takes longer (a year instead of months), by treating the underlying bacterial infection and promoting mineralization. It's not clear to me whether the claim is that a small hole can fill in on its own; a larger one probably won't, although the necessary dental treatment (a filling) in that case will be less invasive if the surrounding decay has been arrested.

I am not claiming to have tested this myself. This is hearsay. But the protocol is cheap to try and the mechanism of action seems scientifically plausible given my background knowledge.

Comment by gilch on Open Thread Spring 2024 · 2024-04-17T21:33:14.218Z · LW · GW

PSA: Tooth decay might be reversible! The recent discussion around the Lumina anti-cavity prophylaxis reminded me of a certain dentist's YouTube channel I'd stumbled upon recently, claiming that tooth decay can be arrested and reversed using widely available over-the-counter dental care products. I remember my dentist from years back telling me that if regular brushing and flossing doesn't work, and the decay is progressing, then the only treatment option is a filling. I wish I'd known about alternatives back then, because I definitely would have tried that first. Remineralization wouldn't have helped in the case when I broke a tooth, but I maybe could have avoided all my other fillings. I am very suspicious of random health claims on the Internet, but this one seemed reasonably safe and cheap to try, even if it ultimately doesn't work.

Comment by gilch on LessWrong's (first) album: I Have Been A Good Bing · 2024-04-16T16:56:31.587Z · LW · GW

Have you tried using a VPN?

Comment by gilch on Any evidence or reason to expect a multiverse / Everett branches? · 2024-04-13T16:59:26.014Z · LW · GW

Yes, agreed (and I endorse the clarification), hence my question about dualism. (If consciousness is not a result of computation, then what is it?)

Comment by gilch on Any evidence or reason to expect a multiverse / Everett branches? · 2024-04-12T22:21:58.174Z · LW · GW

The Church-Turing thesis gives us the "substrate independence principle". We could be living in a simulation. In principle, AI could be conscious. In principle, minds could be uploaded. Even granting that there's such a thing as the superfluous corpuscles, the Universe still has to be computing the wave function.

Then the people made out of "pilot" waves instead of pilot waves and corpuscles would still be just as conscious as AIs or sims or ems could (in principle) be, and they would far outnumber the corpuscle folk. How do you know you're not one of them? Is this an attachment to dualism, or do you have some other reason? Why do the corpuscles even need to exist?

Comment by gilch on Any evidence or reason to expect a multiverse / Everett branches? · 2024-04-12T00:18:50.398Z · LW · GW

Uncharitable punchline is "if you take pilot wave but keep track of every possible position that any particle could have been (and ignore where they actually were in the actual experiment) then you get many worlds." Seems like a dumb thing to do to me.

Except I don't know how you explain quantum computers without tracking that? If you stop tracking, isn't that just Copenhagen? The "branches" have to exist and interfere with each other and then be "unobserved" to merge them back together.

What does the Elitzur–Vaidman bomb tester look like in Pilot Wave? It makes sense in Many Worlds: you just have to blow up some of them.

Comment by gilch on Any evidence or reason to expect a multiverse / Everett branches? · 2024-04-11T23:56:34.096Z · LW · GW

Didn't Carroll already do that? Is something still missing?

Comment by gilch on Any evidence or reason to expect a multiverse / Everett branches? · 2024-04-11T23:54:41.636Z · LW · GW

You mean, "Now simply delete the superfluous corpuscles." We need to keep the waves.

Comment by gilch on LessWrong's (first) album: I Have Been A Good Bing · 2024-04-11T18:59:46.066Z · LW · GW

Cherry-picking FTW. I wonder if Udio would be any easier than Suno, due to reportedly higher quality and backwards extension, although it wouldn't have been available at the time.

Comment by gilch on Any evidence or reason to expect a multiverse / Everett branches? · 2024-04-11T02:58:38.692Z · LW · GW

We should not expect any bases not containing conscious observers to be observed, but that's not the same as saying they're not equally valid bases. See Everett and Structure, esp. section 7.

Comment by gilch on Any evidence or reason to expect a multiverse / Everett branches? · 2024-04-10T20:52:38.078Z · LW · GW

OK, what exactly is wrong with Sean Carroll's derivation?

Comment by gilch on Any evidence or reason to expect a multiverse / Everett branches? · 2024-04-10T19:38:15.339Z · LW · GW

I don't know where you heard that, but the short answer is that no-one knows. There are models of space that curve back in on themselves, and thus have finite extent, even without any kind of hard boundary. But astronomical observations thus far indicate that spacetime is pretty flat, or we'd have seen distortions of scale in the distance. To the available observational precision, last I heard, even if the universe does curve in on itself, the curvature must be so slight that the total Universe is at least thousands of times larger (in volume) than the observable part. That's still nowhere near big enough for Tegmark Level I, but it's only a lower bound, and the Universe may well be infinite. (There are more complicated models with topological weirdness that might allow for a finite extent with no boundary and no curvature in observable dimensions, and those might be smaller.)

I don't know if it makes any meaningful difference if the Universe is infinite vs "sufficiently large". As soon as it's big enough to have all physically realizable initial conditions and histories, why does it matter if they happen once or a googol or infinity times? Maybe there are some counterintuitive anthropic arguments involving Boltzmann Brains. Those seem to pop up in cosmology sometimes.

Comment by gilch on Any evidence or reason to expect a multiverse / Everett branches? · 2024-04-10T18:56:15.942Z · LW · GW

Well, slightly better than wild speculation. We observe broken symmetries in particle physics. This suggests that the corresponding unbroken symmetries existed in the past and could have been broken differently, which would correspond to different (apparent) laws of physics, meaning particles we call "fundamental" might have different properties in different regions of the Cosmos, although this is thought to be far outside our observable universe.

The currently accepted version of the Big Bang theory describes a universe undergoing phase shifts, particularly around the inflationary epoch. This wouldn't necessarily have happened everywhere at once. In the Eternal Inflation model, for a brief moment near the beginning of the observable universe, space was expanding far faster than it is now, but (due to chance) a nucleus of what we'd call "normal" spacetime with a lower energy level occurred and spread as the surrounding higher-energy state collapsed, ending the epoch.

However, the expansion of the inflating state is so rapid that this collapse wave could never catch up to all of it, meaning the higher-energy state still exists and the wave of collapse to normal spacetime is ongoing far away. Due to chance, we can expect many other lower-energy nucleation events to have occurred (and to continue to occur) inside the inflating region, forming bubbles of different (apparent) physics, some probably corresponding to our own, but most probably not, due to the symmetries breaking in different directions.

Each of these bubbles is effectively an isolated universe, and the collection of all of them constitutes the Tegmark Level II Multiverse.

Comment by gilch on Any evidence or reason to expect a multiverse / Everett branches? · 2024-04-09T19:22:09.386Z · LW · GW

Getting rid of Many Worlds doesn't get rid of the Multiverse. Multiverses pop up in many different ways in cosmology. Max Tegmark elaborated four levels, the simplest of which (Level I) ends up looking like a multiverse if the ordinary universe is sufficiently large.

In the field of astronomy, there's the concept of a cosmological horizon. There are several kinds of these depending on exactly how they're defined. That's why they use the term "observable universe". Whatever process kicked off the Big Bang obviously created a lot more of it than we can see, and the horizon is expanding over time as light has more time to get to us. What we can see is not all there is.

Our current understanding of physics implies the Bekenstein bound: for any finite region of space, there is a finite amount of information it can contain. Interestingly, this measure increases with surface area, not volume. (If you pack too much mass/energy in a finite volume, you get an event horizon, and if you try to pack in more, it gets bigger.) Therefore, the current cosmological horizon also contains a finite amount of information, and there are a finite number of possible initial conditions for the part of the Universe we can observe, which must eventually repeat, by the pigeonhole principle, if the Cosmos has more such regions than that number. We also expect this to be randomized, so any starting condition will be repeated (and many times) if the Cosmos is sufficiently large. Tegmark estimated that there must be a copy of our observable volume, including a copy of you, some unimaginably large but finite distance away (his rough figure for the nearest copy of you is on the order of 10^(10^29) meters), and this also implies the existence of every physically realizable variation of you, which ends up looking like branching timelines.
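To be concrete about the area scaling, the standard form of the bound being gestured at here (a textbook result, not something spelled out above) is:

```latex
% Maximum entropy of a region bounded by a surface of area A
% (roughly one bit per few Planck areas of the boundary, not per unit volume):
S_{\max} \;=\; \frac{k_B c^3}{4 G \hbar}\, A \;=\; \frac{k_B\, A}{4\,\ell_P^2},
\qquad \ell_P^2 = \frac{G\hbar}{c^3}
```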

Comment by gilch on Any evidence or reason to expect a multiverse / Everett branches? · 2024-04-09T18:44:20.214Z · LW · GW

Pilot Wave is just Many Worlds in disguise:

Are Many Worlds & Pilot Wave THE SAME Theory? (youtube.com)

Comment by gilch on Why it's so hard to talk about Consciousness · 2024-04-07T23:44:22.659Z · LW · GW

I'm pretty sure I'm in the Chalmers camp if I'm in either (because qualia are obviously epistemically primitive, and Dennett is being silly), and I've had the same thought about memory formation. Not from the above thought experiments, just from earlier musings on the topic. It seems possible that memory formation is required somehow, but it also seems possible that it isn't, and I have yet to come up with a thought experiment to distinguish them.

I'm not ready to call a camera conscious just because it is saving data (although I can't totally rule out panpsychism yet, I think we currently probably have no nonliving examples of things with consciousness), so I don't know that memory formation is identical to qualia (but maybe). Maybe memory formation is a necessary, but not sufficient condition?

Or maybe the only methods we currently have to directly observe consciousness are internal to ourselves and happen to go through memory formation before we can report on it (certainly to ourselves). I believe things exist that can't be interacted with, so the inability to observe (past) qualia without going through memory formation doesn't prove that (present) qualia can't exist in the moment without forming memory, but should we care? Midazolam, for example, is a drug that causes sedation and anterograde amnesia, but (reportedly) not unconsciousness. Does a sedated patient have qualia? They seem to act like they do. Is memory formation not happening at all? Or is it just not lasting? Is working memory alone sufficient? I don't know.

Comment by gilch on What's with all the bans recently? · 2024-04-04T21:51:08.729Z · LW · GW

Makes sense. Given that perspective, do you have any idea for a better approach?

Comment by gilch on My simple AGI investment & insurance strategy · 2024-03-31T21:04:08.185Z · LW · GW

Have you considered using OTM call ratio backspreads? One could put them on for a credit so they make money instead of losing it if your timing is off or if the market crashes. There is still a dip around the long strike where one could lose money, but not when volatility increases (and you close/roll before expiry) nor if the market blows past it.
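To show the payoff shape concretely, here's a rough sketch at expiration (made-up strikes and premiums chosen only to illustrate the structure, not a trade recommendation):

```python
# 1x2 OTM call ratio backspread: sell one call at a lower strike, buy two calls
# at a higher strike, ideally for a small net credit.

def backspread_pnl(spot, short_strike=110.0, long_strike=120.0,
                   short_premium=6.0, long_premium=2.5, ratio=2):
    credit = short_premium - ratio * long_premium      # net credit if positive
    short_leg = -max(spot - short_strike, 0.0)         # obligation from the sold call
    long_legs = ratio * max(spot - long_strike, 0.0)   # value of the bought calls
    return credit + short_leg + long_legs

for s in (90, 110, 120, 130, 150):
    # keeps the credit if the market falls or stalls, dips near the long strike,
    # and gains without limit if the market blows past it
    print(f"spot {s:>3}: P/L {backspread_pnl(s):+.2f}")
```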

(Disclaimer: I'm not a financial advisor for any of you. I don't know your financial situation. I'm not necessarily endorsing the thesis, and this is not financial advice.)

Comment by gilch on Can a stupid person become intelligent? · 2024-03-25T19:23:25.895Z · LW · GW

Update: Increasing IQ is trivial seems relevant. I think the case is far from proven, but worth a look.

It's worth mentioning that IQ tests have substantial error bars, and those get wider near the tails. Once you get higher than a few standard deviations (15 points each) above the mean (set at 100), the differences become too hard to measure to be very meaningful for individuals, and a single person can show substantial variation across the different IQ tests.

Comment by gilch on Market Misconceptions · 2024-03-25T19:04:21.081Z · LW · GW

I thought of a few more. Taxes, fees, and penalties can cost you. Be especially careful about mutual funds, which can charge outrageous amounts in an attempt to keep you locked in. One can avoid a lot of taxes by using retirement accounts, but you really can't take the other side of these deals.

Inflation is another big one. You're not technically losing anything in nominal terms, but your buying power does shrink over time. The Fed's 2% target is not such a big deal for one making 20%, but sometimes inflation is much higher.

Bear market risk is fairly easy to understand: a rapid selloff decreases the value of stocks you own, in which case one is better off holding cash or commodities. That's left-tail risk. OTM puts are left-tail insurance.

Right-tail risk is less intuitive. After all, if your stocks rapidly appreciate, isn't that a good thing? But a period of high inflation can also inflate stock prices, although the relationship is complicated (value stocks tend to do better, while growth stocks may be hurt by the economic effects of inflation by more than their prices inflate). Inflation is a Red Queen race: you have to run (in dollar terms) just to hold your position (in buying power terms). In a period of higher inflation, one has to run faster to keep up. OTM calls are right-tail insurance (there is also a sense in which they're equivalent to a married put). Commodities (but not cash) can also be helpful here.

Even without high inflation, missing out on the right tail can mean being left behind compared to a non-dollar benchmark, like passive index investing. Hence the buy-and-hold adage about time in the market rather than timing the market. However, cutting off both tails would have done similarly well, at least historically. You can theoretically do that with a costless options collar, i.e., sell a covered call to fund an OTM (married) put, although IV skewness across strikes makes that less obviously a win as the downside has to be further OTM. Using (e.g.) VIX calls as insurance may be more efficient, but it's also more complicated.
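To make the collar idea concrete, here's a rough sketch (hypothetical entry price, strikes, and premiums, ignoring the IV skew caveat above) of how the combination caps both tails at expiration:

```python
# Long stock at `entry`, short an OTM call to fund a long OTM put ("costless" collar).

def collar_pnl(spot, entry=100.0, call_strike=110.0, put_strike=90.0,
               call_premium=3.0, put_premium=3.0):
    stock = spot - entry
    short_call = call_premium - max(spot - call_strike, 0.0)
    long_put = max(put_strike - spot, 0.0) - put_premium
    return stock + short_call + long_put

for s in (70, 90, 100, 110, 130):
    # downside floored near the put strike, upside capped near the call strike
    print(f"spot {s:>3}: collar {collar_pnl(s):+7.2f}   stock only {s - 100.0:+7.2f}")
```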

Comment by gilch on Weighing reputational and moral consequences of leaving Russia or staying · 2024-02-23T21:10:47.633Z · LW · GW

I don't feel like the results of the Black Death generalize to Russia's current demographics. Medieval Europe was near its carrying capacity given the technology of the day. The plague injected some slack into the system to allow for progress. That's really not the situation in Russia, is it? Food isn't the limiting factor.

Furthermore, Russia's population is aging on net, and the war is exacerbating the problem. Unlike in Medieval Europe, this would tend to remove slack from the system, as working-age Russians have to spend more of their resources to support the elderly while at the same time burning resources to fight the war and growing less than they otherwise could due to sanctions.

Did the Black Death have that effect? I couldn't find any information on age demographics during that period, but on priors, I'd expect disease to affect the old as well, if not more, in most cases. (What I did find suggested that the poor were disproportionately affected due to their living conditions.)

Comment by gilch on Wacky, risky, anti-inductive intelligence-enhancement methods? · 2024-02-21T20:07:48.340Z · LW · GW

I recently saw What's up with psychonetics?. It seems like a kind of meditation practice, but one focused on gaining access to and control of mental/perceptual resources. Not sure how risky this is, but the linked text had some warnings about misuse. It might be applicable to working or long-term memory, and specifically talks about conceptual understanding ("pure meanings") as a major component of the practice.

Comment by gilch on Enhancing intelligence by banging your head on the wall · 2024-02-21T19:59:20.209Z · LW · GW

Did you see the question on Psychonetics yet? I'm wondering if these ideas can be connected. Could someone learn a savant skill through Psychonetic practice? Has the Psychonetic community tried?

Comment by gilch on What's your visual experience when reading technical material? · 2024-02-21T19:53:45.811Z · LW · GW

Did you ever try WILD? Did it work?

I recently saw What's up with psychonetics? and thought of you.

I'm wondering if an aphantasiac would be able to work up to the final Two colored circles meditation.

(PU.CC.6.ADV) (Advanced) Creating a figure in a perceptual uncertainty

A practitioner forces the perception to see any arbitrary red figure appear on the blue surface (or blue figure on the red surface).

Because this (supposedly) works on your actual visual field instead of your internal mind's eye, you might be able to do it and learn to create mental visualizations that way. (You'd have to be staring at the figure though.)

It looked like some of the other psychonetic practices might make it easier to achieve a WILD as well. If you could learn to do that quickly and easily, it might be useful as a surrogate mind's eye, or the practice might help you find yours or develop one.

Comment by gilch on What's up with psychonetics? · 2024-02-21T19:36:08.587Z · LW · GW

It seems pretty interesting, but also seems like it would take a lot of time to practice for uncertain benefits. I wonder about the applications. Reading through the link a bit, it mentioned learning echolocation or inducing lucid dreams, which could be fun.

I thought the exercise with crossing the red and blue circles was interesting. It said one could eventually learn to see a figure in one color with the other color as the background. I wonder if it's possible to teach an aphantasiac to visualize this way. Would they have to be staring at the circles for that to work, or could they eventually learn to visualize without them? Also, if it's possible to learn to quickly and easily enter and exit a lucid dream state, that might work even better, although only a subset of aphantasiacs even have visual dreams.

I also wonder if savant skills can be learned this way.

Comment by gilch on Weighing reputational and moral consequences of leaving Russia or staying · 2024-02-21T18:51:28.246Z · LW · GW

I've been wondering for a while if China will try that. I would not have guessed Russia would, but maybe I'm not that informed? Have the Russians actually suggested it?

It still takes decades for any new babies to grow up to working age. That might not be soon enough to save Russia. The right time to try something like that was probably 20 years ago. Immigration would be faster, in theory. Seems to be working for Canada.

Comment by gilch on OpenAI's Sora is an agent · 2024-02-20T19:20:25.932Z · LW · GW

Yes, maybe? That kind of thing is presumably in the training data and the generator is designed to have longer term coherence. Maybe it's not long enough for plans that take too long to execute, so I'm not sure if Sora per se can do this without trying it (and we don't have access), but it seems like the kind of thing a system like this might be able to do.

Comment by gilch on Weighing reputational and moral consequences of leaving Russia or staying · 2024-02-19T05:19:53.397Z · LW · GW

I feel that doesn't hold when at least leaving is an option. One can't avoid rulers altogether, but one who is free to go could better choose from among the least bad ones available. Russia's is not that.

Comment by gilch on Weighing reputational and moral consequences of leaving Russia or staying · 2024-02-19T05:12:38.484Z · LW · GW

Hah! I'll not dispute that some circumstances might be even worse (nor do I claim that Russia has literally zero recent immigrants), but are there enough of them to compensate for Russia's population loss? I think not, but maybe you have data?

Comment by gilch on What are some real life Inadequate Equilibria? · 2024-02-19T05:07:12.302Z · LW · GW

This video makes a compelling case for base two as optimal (not just for computers, but for humans), which I had dismissed out of hand as unworkable due to the number of digits required. The more compact notation with digit groupings demonstrated there gives binary all the advantages of quaternary, octal, or hexadecimal, while binary's extreme simplicity makes many manual calculation algorithms much easier than even seximal. I'm not convinced the spoken variant is adequate, but perhaps it could be improved upon.
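The grouping trick is easy to demonstrate (the video uses its own notation; this is just a toy illustration of the idea):

```python
# Reading binary digits in groups of four recovers hexadecimal "for free",
# so grouping keeps binary about as compact to read as a larger base.
n = 0b1011_0111_0010          # Python even allows underscore groupings in literals
bits = f"{n:012b}"
grouped = " ".join(bits[i:i + 4] for i in range(0, len(bits), 4))
print(grouped)                # 1011 0111 0010
print(hex(n))                 # 0xb72 -- each group of four bits is one hex digit
```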

Comment by gilch on Weighing reputational and moral consequences of leaving Russia or staying · 2024-02-18T22:04:24.033Z · LW · GW

Maybe more practical than moral, but nations around the world have put sanctions on Russia. This would theoretically hamper Russia's economic growth relative to other nations over time. (Although the Russian kleptocracy was already doing that to some degree.) Are you sure it's wise to try to build a global startup in such a country?

Russia is in a state of demographic decline with an aging population due to insufficient birth rates in recent decades. Labor shortages were projected as early as next year. Each worker will have to support more and more retirees.

Russia's strategy in war (if you can call it that) seems to be to throw more young men into the meat grinder. Because the soldiers are also working-age men, this will exacerbate the problem, not just due to the direct warfighter casualties, but due to the brain drain and human capital flight it causes as the best and brightest rightly fear conscription and flee. This is not speculation; it is already happening. How much longer will you have the option of leaving? Borders can be closed to emigrants, especially in times of war. The risk of that seems high, so you'd probably keep your options more open by leaving now. It's probably easier to get back in than out.

Probably the only feasible way of fixing the demography is mass immigration. But who would want to move into a sanctioned pariah state that might conscript you for war? This is probably not happening any time soon.

On the question of repute, consider TikTok, a social media app with overwhelming popularity in the younger generation. Despite this, the United States, although famous for granting broad freedom to do things, is seriously considering banning it, due to the perceived influence of the Chinese Communist Party. It's already banned in India and in Montana (although not yet enforced pending litigation), and much more widely banned on government devices, not just in the U.S., but in Australia, Canada, and much of Europe. Being based in China has been bad for its business, due to China's poor reputation.

Similarly, Kaspersky Lab (Лаборатория Касперского), though well-regarded internationally for years, has been facing similar government bans due to allegations of ties with the Russian Federal Security Service (since 2017 in the U.S., at least). Any tech startup in Russia potentially faces similar hurdles for international adoption.

Morally, it's tricky, but there is a sense in which anyone living in Russia and at least paying taxes is complicit in the war and human rights abuses. However, everyone has to live somewhere and touching the international economy at all is going to be connected to some negative consequences somewhere. The global economy has a lot of abuses. I don't think this justifies becoming a hermit, which has a different set of moral problems, but one can do relatively better or worse here.

It's not clear to me that minimizing contacts with friends and family in Russia would be a net good. Breaking up family seems bad to me on its face. You seem concerned they might be a bad influence. But influence goes both ways; you can be a positive influence on them. Still, those you interact with most will influence your thinking. Choose friends wisely.

Comment by gilch on OpenAI's Sora is an agent · 2024-02-17T19:23:53.428Z · LW · GW

Did you come up with your hunger drive on your own? Sex drive? Pain aversion? Humans count as agents, and we have these built in. Isn't it enough that the agent can come up with subgoals to accomplish the given task?

Comment by gilch on How to (hopefully ethically) make money off of AGI · 2024-02-15T22:47:27.473Z · LW · GW

I'm maxing those out. There are a few ways to withdraw from them early, for example, you can pay a 10% penalty to do so. This is probably worth it to avoid taxes on growth in the meantime, assuming it grows enough.

Comment by gilch on How to (hopefully ethically) make money off of AGI · 2024-02-15T21:38:01.752Z · LW · GW

Disclaimer: I'm not your investment advisor.

But hypothetically:

  • SOXL maybe? It's 3x leveraged exposure to semiconductor manufacturers.
  • FNGU is another 3x leveraged one to consider, tracking the FANG+ index, which includes META, TSLA, NVDA, AMD, NFLX, AAPL, AMZN, SNOW, MSFT, and GOOGL.

The intrinsic daily compounding of 3x ETFs will drag on gains in a sideways market (which is why leveraged funds are often discouraged for long-term investment) but will actually accelerate returns in a bull market. And (of course) leverage is double-edged and will hurt more in a bear market. OTM put options can protect against the left tail without dragging on gains too much, but they're not free. (FNGU currently has no options, but FNGS, which tracks the same index without the leverage, does.) If rates go up too much, it could do weird things to leveraged funds.
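A quick sketch of the compounding effect (made-up daily return paths, purely to show the shape of the drag, not a backtest or a forecast):

```python
# Daily 3x compounding vs. naively taking 3x the index's total return,
# on a choppy (sideways) path and on a steady bull path.

def compound(daily_returns, leverage):
    value = 1.0
    for r in daily_returns:
        value *= 1.0 + leverage * r
    return value

sideways = [0.02, -0.0196] * 126   # up 2%, back down ~2%; index ends roughly flat
bull = [0.001] * 252               # a steady ~0.1%/day grind upward

for name, path in (("sideways", sideways), ("bull", bull)):
    index = compound(path, 1.0)
    lev3x = compound(path, 3.0)
    naive = 1.0 + 3.0 * (index - 1.0)   # "3x the index's annual return"
    print(f"{name:>8}: index {index:.3f}  daily-3x fund {lev3x:.3f}  naive 3x {naive:.3f}")
```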

Comment by gilch on My thoughts on the Beff Jezos - Connor Leahy debate · 2024-02-05T05:39:27.814Z · LW · GW

As I'm pretty sure I said in the post, you can apply this reasoning to pretty much any expression of values or goals. Let's say your goal is stopping AI progress. If you're consistent, that means you'd want humanity to go extinct, because then AI would stop. This is the exact argument that Connor was using, it's so transparent and I'm disappointed that you don't see it.

I see what you're saying, and yes, fully general counterarguments are suspect, but that is totally not what Connor was doing. OK, sure, instrumental goals are not terminal values. Stopping AI progress is not a terminal value. It's instrumental, and hopefully temporary. Bostrom himself has said that stopping progress on AI indefinitely would be a tragedy, even if he does see the need for it now. That's why the argument can't be turned on Connor.

The difference is, and this is critical, Beff's stated position (as far as Connor or I can tell) is that acceleration of growth equals the Platonic Good. This is not instrumental for Beff; he's claiming it's the terminal value in his philosophy, i.e., the way you tell what "Good" is. See the difference? Connor thinks Beff hasn't thought this through, and this would be inconsistent with Beff's moral intuitions if pressed. That's the Fisher-Price Nick Land comment. Nick bit the bullet and said all humans die is good, actually. Beff wouldn't even look.

Comment by gilch on My thoughts on the Beff Jezos - Connor Leahy debate · 2024-02-05T05:24:25.471Z · LW · GW

No it's not, and obviously so. The actual topic is AI safety. It's not false vacuum, it's not a black marble, or a marble of any color for that matter.

It is, and Connor said so repeatedly throughout the conversation. AI safety is a subtopic, a special case, of Connor's main thrust, albeit the most important one. (Machine transcript, emphasis mine.)

Non-ergodicity, not necessarily AI:

The world is not ergodic, actually. It's actually a very non-ergodic you can die. [...] I'm wondering if you agree with this, forget [A]I for a moment that at some point not saying it's [A]I just at some point we will develop technology that is so powerful that if you fuck it up, it blows up everybody.

Connor explicitly calls out AGI as not his main point:

The way I see things is, is that never mind. Like, I know AGI is the topic I talk about the most and whatever comes the most pressing one, but [A]I actually AGI is not the main thing I care about. The main thing I care about is technology in general, and of which AGI is just the most salient example in the current future. You know, 50 if I was born 50 years ago, I would care about nukes [...] And the thing I fundamentally care about is the stewardship of technology. [...] of course things can go bad. It's like we're[...] mimetically engineering, genetically engineering, super beings. Like, of course this is dangerous. Like, if we were genetically engineering super tigers, people would be like, hey, that seems maybe a bit, but let let's talk about this

Beff starts talking before he could finish, so skipping ahead a bit:

The way I see things is, is that our civilization is just not able to handle powerful technology. I just don't trust our institutions. Our leaders are, you know, distributed systems. Anything with, you know, hyper powerful technology at this point in time, this doesn't mean we couldn't get to systems that could handle this technology without catastrophic or at least vastly undesirable side effects. But I don't think we're there.

This is Connor's mindset in the whole debate. Backing up a bit:

But I want to make clear again, just the point I'm trying to make here. Is that the point I'm trying to make here is, is that predictably, if you have a civilization that doesn't even try, that just accelerates fast as possible, predictably guaranteed, you're not going to make it. You're definitely not going to make it. At some point, you will develop technology that is too powerful to handle if you just have the hands of random people, and if you do it as unsafe as possible, eventually an accident will happen. We almost nuked ourselves twice during the Cold War, where only a single person was between a nuke firing and it not happening. If the same thing happens with, say, superintelligence or some other extremely powerful technology which will happen in your scenario sooner or later. You know, maybe it goes well for 100 years, maybe it goes well for a thousand years, but eventually your civilization is just not going to make it.

Also the rolling death comment I mentioned previously. And the comment about crazy wackos.

Comment by gilch on My thoughts on the Beff Jezos - Connor Leahy debate · 2024-02-05T04:24:15.112Z · LW · GW

Connor explains more about what he was trying to do here: https://twitter.com/NPCollapse/status/1753902877452439681#m

There is a pattern of debate where you make an argument of the form "X -> Y", and the other person hears "X is true", and then retorts with "But X isn't true!"

There is a viral (and probably fake) meme about prisoners and having breakfast that illustrates this pattern.

Why is it useful to make arguments of this shape? Why not just talk about X directly?

Arguments like this are useful to avoid arguing about points that aren't actually cruxes and wasting time in a debate.

As a concrete example, it is worth asking the question "if you believed that AGI was dangerous (X) -> would you agree it shouldn't be open sourced (Y)?"

The reason this is useful to establish before talking about whether AGI is actually dangerous or not is that if the other person denies that we shouldn't open source it even in principle (denies "X -> Y", independent of whether X is true or not, which is a thing more than one person I have debated has bitten the bullet on), then there's no point in arguing about X, because whether or not it is true, it will not change their view on Y, which is the thing I care about.

If the other person agrees that if it was really that dangerous, then yeah maybe it shouldn't be open sourced (accepts "X -> Y", but not "X is true"), then it is useful to move on to a discussion about whether X is true or not, because it is an actual crux that could lead to minds being changed.

Mapping out what the cruxes/degrees of freedom are in an opponent's worldview is the core of understanding the other and hopefully changing minds, rather than wasting time on points that the opponent has already decided to never change their mind on.

Unfortunately, if it takes someone 20 minutes to answer a simple yes or no "X -> Y" question, this can still run out the clock. Alas.

There are a number of other recent Tweets from Connor (@NPCollapse) with more thoughts about the debate.

Comment by gilch on Open Thread – Winter 2023/2024 · 2024-02-04T20:28:35.303Z · LW · GW

Initial ask would be compute caps for training runs. In the short term, this means that labs can update their models to contain more up-to-date information but can't make them more powerful than they are now.

This need only apply to nations currently in the lead (mostly U.S.A.) for the time being but will eventually need to be a universal treaty backed by the threat of force. In the longer term, compute caps will have to be lowered over time to compensate for algorithmic improvements increasing training efficiency.

Unfortunately, as technology advances, enforcement would probably eventually become too draconian to be sustainable. This "pause" is only a stopgap intended to buy us more time to implement a more permanent solution. That would at least look like a lot more investment in alignment research, which unfortunately risks improving capabilities as well. Having spent a solid decade already, Yudkowsky seems pessimistic that this approach can work in time and has proposed researching human intelligence augmentation instead, because maybe then the enhanced humans could solve alignment for us.

Also in the short term, there are steps that could be taken to reduce lesser harms, such as scamming. AI developers should have strict liability for harms caused by their AIs. This would discourage the publishing of the weights of the most powerful models. Instead, they would have to be accessed through an API. The servers could at least be shut down or updated if they start causing problems. Images/videos could be steganographically watermarked so abusers could be traced. This isn't feasible for text (especially short text), but servers could at least save their transcripts, which could be later subpoenaed.

Comment by gilch on My thoughts on the Beff Jezos - Connor Leahy debate · 2024-02-04T04:22:12.746Z · LW · GW

That seems like a pretty uphill battle, because they already kind of vibe with Beff, and this would naturally prejudice them. How big/dangerous is e/acc, really? Are they getting worse? Maybe we should be choosing different battles.

Connor also has fans (like me) and Beff utterly failed to move me. Would Beff draw away the marginal rationalist with his performance? I kind of think not. But that's maybe not the part that matters.

Comment by gilch on My thoughts on the Beff Jezos - Connor Leahy debate · 2024-02-04T04:10:56.354Z · LW · GW

OK, that's a fair enough ask. Do you have an alternative candidate in mind with approximately Connor's position and said experience? If wishes were horses, beggars could ride. Connor understands the arguments and the epistemics, to the point that (from my perspective) he's doing an even better job at live debates than Yudkowsky. (You might not consider that a high bar.) The only way he gets more debate skill is more practice, or perhaps much more specific guidance than you have given. Maybe it doesn't have to be public, but would Beff have agreed otherwise? And who would critique them?

I'm really frustrated with folks here for their blindness to how lopsided the debate was socio-emotionally.

Not obviously true to me, although admittedly bad if so. I accept that my perspective might be biased here, as I went in already somewhat familiar with Connor's arguments. But I can only call what I'm capable of seeing. What's your evidence? Anything legible to me? Beff's fan club in the YouTube comments (or on Twitter/X)? That's not a good indicator of how a neutral party would see it, although I can see the comments themselves maybe skewing their perspective.

Comment by gilch on My thoughts on the Beff Jezos - Connor Leahy debate · 2024-02-04T03:22:41.452Z · LW · GW

I had watched the whole thing and came away with a very different impression. From where I'm standing, Connor is just correct about everything he said, full stop. Beff made a few interesting points but was mostly incoherent, equivocating, and/or evasive. Connor tried very hard for hours to go for his cruxes rather than get lost in the weeds, but Beff wouldn't let him. Maybe Connor could have called him on it more skillfully, but I don't think I could have done any better. Maybe he'll try a different tack if there's a next time. The moderator really should have intervened.

At some point they start building their respective cases - what if you had a false vacuum device? Would we be fucked? Should we hide it? What should we do? And on Beff's side - what if there are dangerous aliens?

For the love of god, please talk about the actual topic.

This is the actual topic. It's the Black Marble thought experiment by Bostrom, and the crux of the whole disagreement! Later on Connor called it rolling death on the dice. Non-ergodicity. Beff's whole position seems to be to redefine "the good" to be "acceleration of growth", but Connor wants to add "not when it kills you!"

About 50 minutes in, Connor goes on an offensive in a way that, to me is an extremely blatant slippery slope reasoning. The main point is that if you care about growth, you cannot care about anything else, because of course everyone's views are the extremist parodies of themselves. Embarrassing tbh.

Again, Connor is simply correct here. This is not a novel argument. It's Goodhart's Law. You get what you optimize, even if it's only a proxy for what you want. The tails come apart. You can overshoot and get your proxy rather than your target. Remember, Beff's position: "growth = good", which is obviously (to me, Connor, and Eliezer) false. Connor tried very hard to lead Beff to see why, but Beff was more interested in muddying the waters than achieving clarity or finding cruxes.

He also points out, many many times, that "is" != "ought", which felt like virtue signalling? Throwing around shibboleths? Not quite sure. But not once was it a good argument as far as I can tell.

Again, Connor is simply correct. This isn't about virtue signaling at all; that completely misses the point. Beff is equivocating. Connor is trying to point out the distinct definitions required to separate the concepts so he can move the argument forward to the next step. Beff just wasn't listening.

"Should the blueprints for F16 be open-sourced? Answer the question. Answer the question! Oh I was just trying to probe your intuition, I wasn't making a point"

Immediately followed by "If an AI could design an F16, should it be open-sourced?"

Is there something wrong with trying to understand the other position before making a point? No, and Beff should have tried harder to understand the other position. Kudos to Connor for trying. This is the Black Marble again (maybe a gray one in this case). Beff seems to have the naive position that open source is an unmitigated good, which is obviously (to me and Connor) false, because infohazards. I don't think F16s were a great example, but it could have been any number of other things.

So e/acc should want to collapse the false vacuum?

Holy mother of bad faith. Rationalists/lesswrongers have a problem with saying obviously false things, and this is one of those.

Totally unfair characterization. I think this is Connor simply not understanding Beff's position, rather than Connor doing anything underhanded. The question was not simply rhetorical, and the answer was important for updating Connor's understanding (of Beff's position). From Connor's point of view, an intelligence explosion eats most of the future light cone anyway, so it's not that different from a false vacuum collapse: everybody dies, and the future has no value. There are some philosophies that actually bite the bullet to remain consistent in the limit and actually want all humans to die. (Nick Land came up.) Connor thinks Beff's might be that on reflection, but it's not for the reason Connor thought here.

It's in line with what seems like Connor's debate strategy - make your opponent define their views and their terminal goal in words, and then pick apart that goal by pushing it to the maximum. Embarrassing.

Again, this is what Eliezer, Connor, and I think is the obvious thing that would happen once an unaligned superintelligence exists: it pushes its goals to the limit at the expense of all we value. This is not Connor being unfair; this is literally his position.

Libertarians are like house cats, fully dependent on a system they neither fully understand nor appreciate.

Thanks for that virtue signal, very valuable to the conversation.

OK, maybe that's a signal (it's certainly a quip), but the point is valid, stands, and Connor is correct. I am sympathetic to the libertarian philosophy, but the naive application is incomplete and cannot stand on its own.

After about 2 hours and 40 minutes of the "debate", it seems we finally got to the point!

Finally? Connor has been talking about this the whole time. Black marble!

If I were to respond to this myself, I'd say - at some point, depending how technology progresses, we might very well need to pause, slow down, or stop entirely.

Yep. That was yesterday. Connor would be interested in talking all about why he thinks that and (as evidenced by the next quote) wants to know Beff's criteria for when that point is, so Connor can move on and either explain why that point has already passed, or point out that Beff doesn't have any criteria and will just go ahead and draw the black marble without even trying to prepare for it. (Which means everybody dies.)

To which Connor has another one of the worst debate arguments ever: "So when is the right time? When do we know?"

Connor is correctly making a very legit point here. There are no do-overs. If you draw the black marble before you're prepared for it, then everybody dies. If you refuse to even think about how to prepare for it and not only keep drawing marbles but try to draw them faster and faster, then by default you die, and sooner and sooner! This is not unfair and this is not a bad argument. This is legitimately Connor's position (and mine and Bostrom's).

I don't know when is the right time to stop overpopulation on Mars.

That is a very old, very bad argument. If NASA discovered a comet big and fast enough to cause a mass extinction event that they estimated to have a 10% chance of colliding with Earth in 100 years, we shouldn't start worrying about it until it's about to hit us. Right? Or from the glass-half-full perspective, we've got a 90% chance of surviving anyway, so let's just forget about the whole thing. Right? Do you understand how absurd that sounds?

But Connor (and Eliezer and I (and Hinton)) don't think we have 100 years. We think it's probably decades or less, maybe much less. And Connor (and Eliezer and I) don't think we have a 90% chance of surviving by default. Quite the reverse, or even worse.

In response, Connor resorts to yelling that "You don't have a plan!"

No shit. Not only that, but e/acc seems to be trying very hard to make the problem worse, by giving us even less time to prepare and sabotaging efforts to buy more.

This is the point where we should move on to narrowing down why we need to have a plan for overpopulation on Mars right now. Perhaps we do.

Yes. That would have been good. I could tell Connor was really trying to get there. Beff wasn't listening though.

This was largely a display of tribal posturing via two people talking past each other.

Maybe describes Beff. Connor tried. Could've been better, but we have to start somewhere. Maybe they'll learn from their mistakes and try again.

Poor performance from both of them, but particularly Connor's behavior is seriously embarrassing to the AI safety movement.

I was embarrassed by Connor's headshot comment, which I thought was inappropriate. Thought experiments that could be interpreted as veiled death threats against one's interlocutor are just plain rude. Could have been worded differently. I don't think Connor actually meant it that way, and perfection is an unreasonable standard in a frustrating three-hour slog of a debate. But still bad form.

Besides that (which you didn't even mention), I cannot imagine what Connor possibly could have done differently to meet your unstated standards, given his position. Should he have not gone for cruxes? Because that's how progress gets made. Debaters can easily waste inordinate amounts of time on points that neither cares about (that don't matter) because they happened to come up. Connor was laser focused on making some actual progress in the arguments, but Beff was being so damn evasive that he managed to waste a couple of hours anyway. It's a shame, but this is so not on Connor. What do you even want from him?

Comment by gilch on on neodymium magnets · 2024-01-31T20:28:57.940Z · LW · GW

https://www.nironmagnetics.com/ claims to have the iron nitride magnets figured out. They appear to be a startup though. Having patents doesn't necessarily mean they can deliver.

Comment by gilch on What rationality failure modes are there? · 2024-01-20T21:57:54.441Z · LW · GW

Cultivating epistemic rationality at the expense of instrumental rationality. They're both very important, but I think LessWrong has focused too much on the former. The explore-exploit tradeoff also applies to humans, not just to machine learning. Rationalists should be more agentic, applying what they've learned to the real world more than most seem to. Instead, cultivating too much doubt has broken our resolve to act.

Comment by gilch on Open Thread – Winter 2023/2024 · 2024-01-04T21:18:18.008Z · LW · GW

Maybe the series starting with You Need More Money

Comment by gilch on Enhancing intelligence by banging your head on the wall · 2023-12-13T04:45:35.915Z · LW · GW

Do we know which brain region was affected? Maybe it could be targeted non-invasively with Transcranial Magnetic Stimulation (TMS), which is already used therapeutically on a number of mental disorders. Especially considering that some people didn't even bang their head, maybe brain damage is not actually required.

Comment by gilch on Significantly Enhancing Adult Intelligence With Gene Editing May Be Possible · 2023-12-13T03:29:54.445Z · LW · GW

Do unusually smart people have any serious problems that normal people don't?

Torsion dystonia seems to add 10 IQ points. I think there are a few other genetic diseases more common among Ashkenazi Jews that are also associated with higher intelligence.