Posts

Is there any reason to expect subjective continuity in "mind uploading"? 2023-10-23T00:30:23.689Z

Comments

Comment by Hide on Sleep, Diet, Exercise and GLP-1 Drugs · 2025-01-22T00:09:13.669Z · LW · GW

People will be like ‘we have these correlational studies so you should change your entire diet to things your body doesn’t tell you are good and that bring you zero joy.’

I mean, seriously, f*** that s***. No.

I do buy that people have various specific nutritional requirements, and that not eating vegetables and fruits means you risk having deficits in various places. The same is true of basically any exclusionary diet chosen for whatever reason, and especially true for e.g. vegans.

In practice, the only thing that seems to be an actual issue is fiber.


"I don't find this tasty" is not the same thing as "my body doesn't tell me it's good", and this concept is at the core of many suboptimal fad diets, as well as a common blanket justification for being fat and unhealthy.

If you eat Krispy Kremes and pizza exclusively, your body will "tell you it's good". The whole reason people get fat in the first place is that the taste and satiety mechanisms we've evolved in an ancestral context are maladaptive for the modern hypercaloric, hyperpalatable environment.

If you eat donuts and burgers, and take a multivitamin to avoid deficiencies, I'd challenge you to crush, chew and savour the multivitamin on your tongue and see what your body has to say about that.

By omitting vegetables and fruits, you not only risk vitamin deficiencies, but miss out on the most under-appreciated aspect of whole plant foods: their phytonutrient and antioxidant content. Plants contain an enormous array of complex, poorly understood compounds that interact with our bodies in ways that consistently prove immensely beneficial.

You can handwave away the widely agreed-upon benefits of fruit and vegetable consumption as "reliant on correlational studies", but this is a major handwave indeed: it means ignoring the strong mechanistic basis for believing those benefits are almost certainly real.

Fundamentally, the obesity epidemic appears largely due to a mismatch between the body's evolved hunger and satiety systems and the foods that have been created to wirehead them. Therefore, using "my body's hunger and satiety systems tell me that eating XYZ is good" is very uncompelling.



Comment by Hide on (My) self-referential reason to believe in free will · 2025-01-08T00:26:14.882Z · LW · GW

“Meaningless” is vaguely defined here. You defined free will at the beginning, so it must have some meaning in that sense.

It seems like “meaningless” is actually a placeholder for “doesn’t really exist”.

Which would make the trilemma boil down to:

  1. Free will doesn’t exist
  2. It exists and I have it
  3. It exists and I don’t have it

And your basis for rejecting point 1 is that “truth wouldn’t matter, anything would be justified, therefore it’s false”.

I don’t think this follows.

Ultimately, what you’re pointing out is an issue of distinguishing between a non-free operating system that tends to accurately believe true things, versus a confused non-free operating system that tends to believe false things.

Just because this distinction cannot be subjectively resolved with 100% confidence (because what if the axioms of logic and self-coherence are wrong?), doesn’t make this automatically “moot”.

You have to at some level assume logic, memory and a degree of rationality no matter what circumstance you’re in. If you don’t assume that, then you’re not free either, you’re just acausally operating based on random whims - and that’s something you don’t control by definition.

Comment by Hide on Turing-Test-Passing AI implies Aligned AI · 2024-12-31T22:19:42.006Z · LW · GW

Then of what use is the test? Of what use is this concept?

You seem to be saying “the true Turing test is whether the AI kills us after we give it the chance, because this distinguishes it from a human”.

Which essentially means you’re saying “aligned AI = aligned AI”

Comment by Hide on Hire (or Become) a Thinking Assistant · 2024-12-25T05:20:35.151Z · LW · GW

It’s true that any job can attract unqualified applicants. What I’m saying is that this one in particular relies on an untenably small niche of feasible candidates, who will take an enormous amount of time to find and filter through on average.

Sure, you might get lucky immediately. But without a reliable way to find the “independently wealthy guy who’s an intellectual and is sufficiently curious about you specifically that he wants to sit silently and watch you for 8 hours a day for a nominal fee”, your recruitment time will, on average, be very long. That’s especially bad in comparison to what would likely be a very short average tenure, given the many competing opportunities that would be presented to such a candidate.

Yes, it’s possible in principle to articulate the perfect candidate, but my point is more about real-world feasibility.

Comment by Hide on Hire (or Become) a Thinking Assistant · 2024-12-25T03:44:21.647Z · LW · GW

Do you genuinely think that you can find such people “reliably”?

Comment by Hide on Hire (or Become) a Thinking Assistant · 2024-12-23T22:57:38.309Z · LW · GW

Unless you’re paying extravagantly, the only people who would reliably be interested in doing this would be underqualified randoms. Expect any benefit to be counteracted by the time it takes to get into a productive rhythm, at which point they’ll likely churn within a matter of weeks anyway.

Comment by Hide on I played the AI box game as the Gatekeeper — and lost · 2024-02-12T23:59:11.202Z · LW · GW

I cannot imagine losing this game as the gatekeeper either, honestly.

Does anyone want to play against me? I’ll bet you $50 USD.

Comment by Hide on Skepticism About DeepMind's "Grandmaster-Level" Chess Without Search · 2024-02-12T23:52:59.093Z · LW · GW

I play on lichess often. I can tell you that a lichess rating of 2900 absolutely corresponds to grandmaster level strength. It is rare for FMs or IMs to exceed a 2800 blitz rating. Most grandmasters hover around 2600-2800 blitz.

Comment by Hide on Without fundamental advances, misalignment and catastrophe are the default outcomes of training powerful AI · 2024-01-27T22:58:14.077Z · LW · GW

The discussion on attack surfaces is very useful, intuitive and accessible. If a better standalone resource doesn’t already exist, such a (perhaps expanded) list/discussion would be a useful intro for people unfamiliar with specific risks.

Comment by Hide on ∀: a story · 2023-12-19T22:46:29.013Z · LW · GW

This was excruciatingly frustrating to read, well done.

Comment by Hide on "Humanity vs. AGI" Will Never Look Like "Humanity vs. AGI" to Humanity · 2023-12-17T23:02:34.697Z · LW · GW

This is well-reasoned, but I have difficulty understanding why this kind of takeover would be necessary from the perspective of a powerful, rational agent. Assuming AGI is indeed worth its name, it seems the period of time needed for it to "play nice" would be very brief.

AGI would be expected to be totally unconcerned with being "clean" in a takeover attempt. There would be no need to leave no witnesses, nor avoid rousing opposition. Once you have access to sufficient compute, and enough control over physical resources, why wait 10 years for humanity to be slowly, obliviously strangled?

You say there's "no need" for it to reveal that we are in conflict, but in many cases, concealing a conflict will prevent a wide range of critical, direct moves. The default is a blatant approach - concealing a takeover requires more effort and more time.

The nano-factories thing is a rather extreme version of this, but strategies like poisoning the air/water, building/stealing an army of drones, launching hundreds of nukes, etc., all seem like much more straightforward ways to cripple opposition, even with a relatively weak (99.99th percentile-human-level) AGI.

It could certainly angle for humanity to go out with a whimper, not a bang. But if a bang is quicker, why bother with the charade?

Comment by Hide on The likely first longevity drug is based on sketchy science. This is bad for science and bad for longevity. · 2023-12-13T00:43:04.316Z · LW · GW

My first thought as well. IGF-1 exists for a reason. Growth is universally necessary for development, repair and function.

Comment by Hide on The Offense-Defense Balance Rarely Changes · 2023-12-10T22:34:13.445Z · LW · GW

shift the

Minor edit - should be "shift in the"

Comment by Hide on We're all in this together · 2023-12-05T21:57:03.239Z · LW · GW

It's encouraging to see more emphasis recently on the political and public-facing aspects of alignment. We are living in a far-better-than-worst-case world where people, including powerful ones, are open to being convinced. They just need to be taught - to have it explained to them intuitively.

It seems cached beliefs produced by works like "You Get About Five Words" have led to a passive, unspoken attitude among many informed people that attempting to explain anything complicated is futile. It isn't futile. It's just difficult.

Comment by Hide on Stupid Question: Why am I getting consistently downvoted? · 2023-11-30T00:45:20.643Z · LW · GW

In another of your most downvoted posts, you say 

I kind of expect this post to be wildly unpopular

I think you may be onto something here.

Comment by Hide on Never Drop A Ball · 2023-11-23T22:13:01.366Z · LW · GW

You can fail to get rid of balls. All of your energy and effort can go into not allowing something to crash or fall, averting each disaster shortly before it would be too late. Speaking for ten minutes with each of fifty sources every day can be a good way to keep any of them from being completely neglected, but it’s a terrible way to actually finish any of those projects. The terminal stage of this is a system so tied up in maintaining itself and stopping itself from falling behind that it has no slack to clear tasks or to improve its speed.

 

This is the salient danger of this approach. While valuable, it absolutely must be paired with a ruthless, exacting and periodic inventory of the balls that matter; otherwise your slack will be completely burned and you will die an exhausted and unaccomplished juggler.

Comment by Hide on Dialogue on the Claim: "OpenAI's Firing of Sam Altman (And Shortly-Subsequent Events) On Net Reduced Existential Risk From AGI" · 2023-11-21T22:17:42.556Z · LW · GW

It seems intuitively bad:

  • Capabilities and accelerationist-focused researchers have gone from diluted and restrained to concentrated and encouraged
  • Microsoft now has unbounded control, rather than a 49% stake
  • Microsoft cannot be expected to have any meaningful focus on alignment/safety
  • They are not starting from scratch: a huge chunk of their most capable staff and leadership will be involved
  • The "superalignment" project will be at best dramatically slowed, and possibly abandoned if OpenAI implodes
  • Other major labs smell blood in the water, possibly exacerbating race dynamics, not to mention a superficial increase (by 1) in the number of serious players. 

Comment by Hide on More metal less ore · 2023-11-14T22:30:48.867Z · LW · GW

All good practices. Although, isn't this just "more metal", rather than "less ore"? I imagine one would want to maximize both the inputs and outputs, even if opportunities for increasing inputs are exhausted more quickly.

Comment by Hide on Stuxnet, not Skynet: Humanity's disempowerment by AI · 2023-11-05T22:39:41.074Z · LW · GW

How is such a failure of imagination possible?

It's odd to claim that, contingent upon AGI being significantly smarter than us and wanting to kill us, there is no realistic pathway for us to be physically harmed.

Claims of this sort by intelligent, competent people likely reveal that they are passively objecting to the contingencies rather than disputing whether these contingencies would lead to the conclusion.

The quotes you're responding to here superficially imply "if smart + malicious AI, it can't kill us", but it seems much more likely this is a warped translation of either "AI can't be smart", or "AI can't be malicious".

Comment by Hide on Mission Impossible: Dead Reckoning Part 1 AI Takeaways · 2023-11-03T02:32:22.921Z · LW · GW

Comment by Hide on Lying to chess players for alignment · 2023-10-26T05:29:32.955Z · LW · GW

I would happily play the role of B.

I do not have an established FIDE rating, but my strength is approximately 1850 FIDE currently (based on playing against FIDE rated players OTB quite often, as well as maintaining 2100-2200 blitz ratings on Lichess & Chess.com, and 2200-2300 bullet). I'd be available after 6:30 pm (UTC+10) until ~12:00 pm (UTC+10). Alternatively, weekends are very flexible. I could do a few hours per week. 

I agree that short vs. long time controls are relevant, because speed is a skill that is almost entirely independent of conceptual knowledge and is mostly a function of baseline playing ability.

 

Edit: Would also be fine with C

Comment by Hide on AI Safety is Dropping the Ball on Clown Attacks · 2023-10-22T22:32:25.067Z · LW · GW

Strongly agree. To my utter bewilderment, Eliezer appears to be exacerbating this vulnerability by making no efforts whatsoever to appear credible to the casual person. 

In nearly all of his public showings in the last 2 years, he has:

  • Rocked up in a trilby
  • Failed to adequately introduce himself
  • Spoken in condescending, aloof and cryptic tones; and
  • Failed to articulate the central concerns in an intuitive manner

As a result, to the layperson, he comes off as an egotistical, pessimistic nerd with fringe views - a perfect clown from which to retreat to a "middle ground", perhaps offered by the eminently reasonable-sounding Yann LeCun - who, after all, is Meta's chief AI scientist. 

The alignment community is dominated by introverted, cerebral rationalists and academics, and consequently, a common failure is to ignore the significance of image as either a distraction or an afterthought.