Comments

Comment by GdL752 on Luck based medicine: inositol for anxiety and brain fog · 2023-09-23T22:25:29.453Z · LW · GW

"A" graded evidence on examine for PCOS symptoms and "fertility". "B" for anxiety (slight improvement for anxiety, moderate for "panic symptoms") . 

 

Now, I have a lot of TBIs in my past and originally came across this for "OCD symptoms." I won't bore you with details, but it would definitely be considered subclinical, not meeting DSM criteria for an actual OCD diagnosis. I came across inositol in, I think, 2013 or '14, on either the nootropics or MTHFR subreddits.

"C" rating on examine but that's because they only have one human study linked. Up to 12 grams a day oral in adults usually only results in GI upset although a thorough long term and dose dependant study has yet to be done so we can't definitively say its "safe and harmless". My own regime is 2 grams in the morning and 2 in the afternoon for months at a time (been doing this for probably a decade) with a few weeks off every now and then when I forget to order more. I do twice yearly labs and so far my CBC and CMP are unremarkable, 38 male, testosterone levels where they need to be.

 

Honestly, I can't say anything I get from it isn't just placebo, even this far in. I'm not keeping "weird sort of OCD / anxiety" symptom journals when I don't have symptoms, and I arrived at the current 4 grams a day more or less at random (I get 1-gram tablets, so two is just easy to remember and dispense into my supplement case).

Comment by GdL752 on Diet Experiment Preregistration: Long-term water fasting + seed oil removal · 2023-09-13T22:25:33.399Z · LW · GW

Doesn't that hypothesis run counter to the observed health benefits and lower obesity in, say, Japan and in countries that could broadly be described as following the Mediterranean diet?

Both with lots of linoleic acid / PUFAs.

Comment by GdL752 on Which rationality posts are begging for further practical development? · 2023-07-24T15:37:27.853Z · LW · GW

Regarding your example, Vaclav Smil has a lot of interesting books about the energy economy and natural resources as a whole.

Exposing yourself to his ideas might open up some correlates or suggest a heuristic you hadn't thought of in that context. "The energy economy of rationalist thought processes," or something like that.

Comment by GdL752 on I'm consistently overwhelmed by basic obligations. Are there any paradigm shifts or other rationality-based tips that would be helpful? · 2023-07-24T15:32:52.190Z · LW · GW

Are you in any sort of psychotherapy for it specifically?

That seems like exactly something that could be worked on with empirically supported OCD specific methods.

Comment by GdL752 on I'm consistently overwhelmed by basic obligations. Are there any paradigm shifts or other rationality-based tips that would be helpful? · 2023-07-24T15:30:37.834Z · LW · GW

Cognitive behavioral therapy for what appears to be fairly severe underlying anxiety?

REBT in particular might apply, as you seem to overwhelm yourself with the thought of the thing more than the thing itself.

Undiagnosed ADD also comes to mind; "existential crisis while doing chores" comes up a lot as a description when I talk to adults who have it.

Unified mindfulness would also be a suggestion: you can use the hated chores as an opportunity to wire up a more peaceful sensory experience and relationship with your body and mind.

Comment by GdL752 on Another medical miracle · 2023-06-27T20:01:16.674Z · LW · GW

We also have a "someone else's problem" milieu. The ERs can't turn away the homeless, but they only need to "stabilize" them.

Same with you or me, really. Things that could be completely resolved with an inpatient stay don't end up with an admission because of "cost." Nothing is definitively "solved" in a timely manner because of managed care.

So things are left to stew and get worse (in your case, a proper holistic evaluation initially might have involved exploring your diet, versus years and multiple visits to all sorts of docs).

It ends up "costing" more, but no one with decision power sees the cost because it's spread over time and across different hospitals, communities, etc.

So the first person to see someone has no incentive to spend the resources to dig deep and actually solve the problem.

Comment by GdL752 on AI self-improvement is possible · 2023-06-19T12:40:52.612Z · LW · GW

It's just biology, so it isn't applicable to AI. "Neoteny," if you want to dig deeper: having a baby born with above 50% of adult brain size would require another three months in the womb, and birthing such a cranium would be pretty deadly to the mother.

Humans also have a few notable "pruning" episodes through childhood, which correlate with and are hypothesized to be involved in both autism spectrum disorder and schizophrenia; that pruning likewise has no logical bearing on how an LLM / ASI might develop.

Comment by GdL752 on My guess for why I was wrong about US housing · 2023-06-15T05:01:15.386Z · LW · GW

I had a similar housing-related "I'm the smartest guy in the room" belief some years back.

I was looking at the broad amounts people in the US were retiring on (not enough) and extrapolated that these older folks would have to sell or take out second mortgages just to live.

And since the baby boomers are retiring, I thought (with no more data or numbers to back me up) that we would see significant downward pressure on housing prices.

But of course, as long as this doesn't happen in large numbers, across large numbers of zip codes, and within a short span of time, it's not an issue.

Over decades, in the large parts of the world facing demographic challenges, yes. Not here.

Comment by GdL752 on Michael Shellenberger: US Has 12 Or More Alien Spacecraft, Say Military And Intelligence Contractors · 2023-06-09T19:56:02.010Z · LW · GW

To broaden the discussion a bit:

The leap from 1950s transistors and semiconductors to what, the early '90s?

I'm not familiar enough with materials science or any of that to make an intelligent call, but does it seem like a logical progression, or on inspection does it actually raise questions about recovered UFO technology?

At the very least, I feel like experts in those fields either have pointed out or could point out that something seems fishy, or they could convincingly dismiss the assertion.

Comment by GdL752 on Transformative AGI by 2043 is <1% likely · 2023-06-07T17:02:24.287Z · LW · GW

"...continue to fail at basic reasoning."

But a huge, huge portion of human labor doesn't require basic reasoning. It's rote enough to run on flowcharts. I don't need my calculator to "understand" math; I need it to give me the correct answer.

And for the "hallucinating" behavior you can just have it learn not do to that by rote. Even if you still need 10% of a certain "discipline" (job) to double check that the AI isn't making things up you've still increased productivity insanely.

And what does that profit and freed-up capital do, other than chase more profit and invest in things that draw down all the conditionals vastly?

5% increased productivity here, 3% over there; it all starts to multiply.
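
A toy version of that multiplication (all numbers invented for illustration): independent per-sector gains compound multiplicatively rather than just adding up.

```python
# Hypothetical productivity gains in four separate sectors
gains = [0.05, 0.03, 0.04, 0.02]

total = 1.0
for g in gains:
    total *= 1 + g  # gains compound multiplicatively

print(f"combined gain: {total - 1:.1%}")  # ~14.7%, vs 14.0% if merely summed
```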

Comment by GdL752 on Transformative AGI by 2043 is <1% likely · 2023-06-07T16:51:38.946Z · LW · GW

I guess I just feel completely differently about those conditional probabilities.

Unless we hit another AI winter, the profit and national-security incentives just snowball right past almost all of those. Regulation? "Severe depression"?

I admit that the loss of Taiwan does in fact set back chip manufacture by a decade or more regardless of the resources thrown at it, but every other case just seems way off (because of the incentive structure).

So we're what, three months post-ChatGPT, and customer service and drive-throughs are solved or about to be solved? Let's call that the lowest-hanging fruit. Some quick back-of-the-napkin Google-fu: customer service by itself is a $30 billion industry just in the US.

And how much more does the math break down if, say, we have an AGI that can do construction work (embodied in a robot) at, say, 90% human efficiency for... $27 an hour?

In my mind, every human task fully (or fully enough) automated snowballs the economic incentive and pushes more resources and man-hours into solving problems in materials science and things like... I don't know, piston designs or multifunctionality or whatever.

I admit I'm impressed by the collected wisdom and apparent track records of these authors, but it seems like the analysis is missing the key drivers of further improvement.

Like, would the authors have put the concept of a smartphone at 1% by 2020, if asked in 2001, based on some abnormally high conditionals about seemingly rational but actually totally orthogonal concerns drawn from how well Palm Pilots did?

I also don't see how the semiconductor fab bottleneck is such a thing. OpenAI's 21 million users cost about $700k a day to run.

Taking some liberties here, that's 30 bucks a person (so a loss with their current model, but that's not my point).

If some forthcoming iteration with better cognitive architecture etc. costs about that, then we have $1.25 per hour to replace a human "thinking" job.
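
Spelling that napkin math out (the jump from the per-user figure to a $30-a-day "AI worker" is where the liberties get taken; every number here is illustrative):

```python
daily_cost = 700_000      # reported $/day to serve OpenAI's users
users = 21_000_000

# Cost per casual user per day is tiny:
print(f"${daily_cost / users:.2f}/day per casual user")  # ~$0.03

# A full-time "AI worker" would burn far more compute than a casual user.
# Pricing one at $30/day, the figure used above, gives:
print(f"${30 / 24:.2f}/hour")  # $1.25 per hour of "thinking" labor
```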

I'm having trouble seeing how we don't rapidly advance robotics, chip manufacture, mining, energy production, etc., when we stumble into a world where that's the only bottleneck standing in the way of 100% replacement of all useful human labor.

Again, automation got the checkout clerks at grocery stores last decade. Three months in and the entire customer service industry is on its knees. Even if you only get 95% as good as a human and have to take things one at a time to start with, all that excess productivity and profit then chases the next thing. It snowballs from here.

Comment by GdL752 on Morality is Accidental & Self-Congratulatory · 2023-05-29T15:00:33.119Z · LW · GW

Comment by GdL752 on Hands-On Experience Is Not Magic · 2023-05-29T14:40:43.167Z · LW · GW

Well, to flesh that out: we could have an ASI that seems value-aligned and controllable... until it isn't.

Or the social effects (deepfakes, for example) could ruin the world or land us in a dystopia well before actual AGI.

But that might be a bit orthogonal and in the weeds (specific examples of how we end up with x-risk or s-risk end scenarios without attributing magic powers to the ASI).

Comment by GdL752 on Hands-On Experience Is Not Magic · 2023-05-29T14:38:01.432Z · LW · GW

"I think the degree to which LPE is actually necessary for solving problems in any given domain, as well as the minimum amount of time, resources, and general tractability of obtaining such LPE, is an empirical question which people frequently investigate for particular important domains."

Isn't it sort of "god of the gaps" to presume that the ASI, simply by having lots of compute, no longer actually has to validate anything and apply the scientific method in the reality it's attempting to exert control over?

We have machine learning algorithms in biomedicine screening for molecules of interest. This lowers the failure rate of new pharmaceuticals, but most of them still fail, many during rat and mouse studies.
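
(As a minimal sketch of what that screening step looks like, here's one simple flavor of in-silico screening, similarity-based rather than a learned model; the candidate SMILES strings are made up for illustration. Note that it only ranks molecules for wet-lab follow-up; it says nothing about what happens in a living organism.)

```python
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem

# A known active compound (aspirin as a stand-in) and invented candidates
known_active = Chem.MolFromSmiles("CC(=O)Oc1ccccc1C(=O)O")
candidates = ["CC(=O)Oc1ccccc1C(=O)OC", "c1ccccc1", "CCO"]

ref_fp = AllChem.GetMorganFingerprintAsBitVect(known_active, 2, nBits=2048)

for smiles in candidates:
    mol = Chem.MolFromSmiles(smiles)
    fp = AllChem.GetMorganFingerprintAsBitVect(mol, 2, nBits=2048)
    # High Tanimoto similarity just promotes a molecule to animal testing,
    # where most drug candidates still fail.
    print(f"{smiles}: {DataStructs.TanimotoSimilarity(ref_fp, fp):.2f}")
```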

So all available human data on chemistry, pharmacodynamics, pharmacokinetics, etc., plus the best simulation models available (AlphaFold, etc.) still won't result in it being able to "hit" on a new drug for, say, "making humans obedient zombies" on the first try.

Even if we hand-wave and say it discovers a bunch of insights in our data we don't have access to, there are simply too many variables and sheer unknowns for this to work without it being able to simulate human bodies down to the molecular level.

So it can discover a nerve gas that's deadly enough, no problem, but we already have deadly nerve gas.

It just, again, seems very hand-wavy to have all these leaps in reasoning "because ASI" when good hypotheses prove false all the time upon actual experimentation.

Comment by GdL752 on Hands-On Experience Is Not Magic · 2023-05-28T02:28:03.668Z · LW · GW

But every environment that isn't perfectly known, and every "goal" that isn't completely concrete, opens up error, which then stacks upon error as any "plan" to interact with / modify reality adds another step.

If the ASI can infer some materials science breakthroughs from given human knowledge and existing experimental data to some great degree of certainty, OK, I buy it.

What I don't buy is that it can simulate enough actions and reactions with enough certainty to nail a large domain of things on the first try.

But I suppose that's still sort of moot from an existential risk perspective, because FOOM and sharp turns aren't really a requirement.

But "inferring" the best move in tic tac toe and say "developing a unified theory of reality without access to super colliders" is a stretch that doesn't hold up to reason.

"Hands on experience ia not magic" , neither is "superintelligence" , the LLM's already hallucinate and any concievable future iteration will still be bound by physics , a few wrong assumptions compounded together can whiff a lot of hyperintelligent schemes.

Comment by GdL752 on What is the literature on long term water fasts? · 2023-05-25T13:45:52.233Z · LW · GW

For point one, yes. We have evidence that your body has a steady-state homeostatic "weight" that it will attempt to return you to, which is why, on the whole, all fad diets are equivalent and none are recommended.

"Non metabolic" is sort of a vague statement but top of my head besides "organs" i'd imagine the possible gut flora problems could be huge (or it might be great because presumably you have flora right now encouraging excess fat etc)

Comment by GdL752 on Coercion is an adaptation to scarcity; trust is an adaptation to abundance · 2023-05-24T20:44:03.546Z · LW · GW

I'm not sure the terms as you define them really hold.

https://ourworldindata.org/trust

So, the nations with high trust levels don't seem to map to your take. China's rated very highly, but from a Western perspective it's rather coercive socially, right?

And what about small, cohesive agricultural towns? My knee-jerk take is that you should re-evaluate this model on a "Maslow's hierarchy" foundation.

Comment by GdL752 on TED talk by Eliezer Yudkowsky: Unleashing the Power of Artificial Intelligence · 2023-05-11T01:31:32.514Z · LW · GW

Right, right. It doesn't need to be fictionalized, just a kind of fun documentary. The key is, this stuff is not interesting to most folks. Mesa-optimization sounds like a snore.

You have to be able to walk the audience through it in an engaging way.

Comment by GdL752 on TED talk by Eliezer Yudkowsky: Unleashing the Power of Artificial Intelligence · 2023-05-10T18:03:20.796Z · LW · GW

OK, well. Let's forget that exact example (which, I now admit, I haven't seen in almost twenty years).

I think we need a narrative-style film / docudrama. Beginning, middle, end. Story-driven.

1.) Introduces the topic.

2.) Expands on it and touches on the concepts.

3.) Explains them in an ELI5 manner.

And it should include all the relevant things, like value alignment, control, inner and outer alignment, etc., without "losing" the audience.

Similarly, if it's going to touch on niche examples of x-risk or s-risk, it should just "whet the imagination" without pulling down the entire edifice and losing the forest for the trees.

I think this is a format more likely to engage a wider swathe of people. I think (as I stated elsewhere in this thread) that Rob Miles, Yudkowsky, and a large number of other AI experts can be quoted or summarized, but they do not offer the tonality / charisma to keep an audience engaged.

Think "attenborough" and the planet earth series.

It also seems sensible to me to meld in Socratic questioning / rationality to bring the audience into the fold via the deductive reasoning leading to the conclusions, versus just outright feeding it to them upfront. It's going to be very hard to make a popular movie that essentially promises catastrophe. However, if the narrator asks the audience as it goes along, "Now, given the alien nature of the intelligence, why would it share human values? Imagine for a moment what it would be like to be a bat..." then by the time you get to the summary points, any audience member with an IQ above 80 is already halfway or more to the point independently.

That's what I like about the reddit r/ControlProblem FAQ: it touches on all the basic superficial / knee-jerk questions anyone who hasn't read, like, all of Superintelligence would have when casually introduced to this.

Comment by GdL752 on TED talk by Eliezer Yudkowsky: Unleashing the Power of Artificial Intelligence · 2023-05-09T12:45:26.362Z · LW · GW

I love Robert Miles, but he suffers from the same problem as Eliezer or, say, Connor Leahy. Not a radio voice. Not a movie face. Also, his existing videos are "deep dive" style.

You need to be able to introduce the overall problem and the reasons / deductions on why and how it's problematic; address the obvious pushback (which the reddit r/ControlProblem FAQ does well); and then introduce the more "intelligentsia" concepts like "mesa-optimization" in an easily digestible manner, for a population with an average reading comprehension at a 6th-grade level and a 20-second attention span.

So you could work off of Robert Miles's videos, but they need to fit into a narrative / storytelling format. Beginning, middle, and end. The end should be basically where we're all at, "We're probably all screwed, but it doesn't mean we can't try," plus actionable advice (which should be sprinkled throughout the film; that's foreshadowing).

Regarding that documentary, I see a major flaw in drifting off into specifics like killer drones. The media has already primed people's imaginations with lots of the specific ways x-risk or s-risk might play out (the Matrix trilogy, Black Mirror, etc.). You could go down an entire rabbit hole on just nanotech or bioweapons. IMO, you sprinkle those about to keep the audience engaged (and so that the takeaway isn't just "something something paperclips"), but driving into them too much gets you lost in the weeds.

For example, I foresaw the societal problems of deepfakes, but the way it's actually played out (mass-distributed powerful LLMs people can DIY with), coupled with the immediacy of the employment problem, introduces entirely new vectors in social cohesion as problems I hadn't thought through at all. So, better to broadly introduce individual danger scenarios while keeping the narrative focused on the value alignment / control problems themselves.

Comment by GdL752 on TED talk by Eliezer Yudkowsky: Unleashing the Power of Artificial Intelligence · 2023-05-09T12:41:40.832Z · LW · GW

You should pull them up on YouTube or whatever and just jump around (sound off is fine); the filmmaker is independent. I'm not saying that particular producer / filmmaker is the go-to, but the "style" and "tone" and overall storytelling fit the theme.

"Serious documentary about the interesting thing you never heard about" , also this was really popular with young adults when it came out, it caught the flame of a group of young Americans who came of age during 9/11 and the middle east invasions and sort of shaped up what became the occupy wall street movement. Now, that's probably not exactly the demographic you want to target, most of them are tech savvy enough that they'll stumble upon this on their own (although they do need something digestible) but broadly speaking it seems to me like having a cultural "phenomenon" that brings this more into the mainstream and introduces the main takeaways or concepts is a must have project for our efforts.

Comment by GdL752 on TED talk by Eliezer Yudkowsky: Unleashing the Power of Artificial Intelligence · 2023-05-08T17:32:59.885Z · LW · GW

It seems to me like AGI risk needs a "Zeitgeist: Addendum" / "Venus Project" style movie for the masses. Open up the Overton window and touch on things like mesa-optimization without boring the average person to death.

The /r/ControlProblem FAQ is the most succinct summary I've seen, but I couldn't get the majority of average folks to read it if I tried, and it would still go over their heads.

Comment by GdL752 on Google "We Have No Moat, And Neither Does OpenAI" · 2023-05-05T03:59:41.090Z · LW · GW

Except "aligned" AI (or at least corrugibility) benefits folks who are doing even shady things (say trying to scam people)

So any gains in those areas that are easily implemented will spread widely and quickly.

And altruistic individuals already donate their own compute and GPUs to things like SETI@home (if you're old enough to remember) and protein-folding projects for medical research. Those same people will become aware of AI safety and do the same, and maybe more.

The cat's out of the bag; you can't "regulate" AI use at home when I can run models on a smartphone.

What we can do is try to steer things toward a beneficial Nash equilibrium.

Comment by GdL752 on Google "We Have No Moat, And Neither Does OpenAI" · 2023-05-05T03:50:03.568Z · LW · GW

Aren't a lot of the doomy scenarios predicated on a single monolithic AI, though? (Or multipolar AIs that all agree to work together for naughtiness, for some reason.)

A bunch of them being tinkered on by lots of people seems like an easier path to alignment, and a failsafe in terms of power distribution.

You have lots of smaller-scale dangers introduced, but they certainly don't seem to me to rise to the level of x- or s-risk in the near term.

What have we had thus far? A bunch of think tanks using deductive reasoning with no access to good models, and a few monoliths with all the access. It seems to me that having the capability to actually run experiments at a community level will necessarily boost efforts on alignment and value loading more than it assists an actual AGI being born.

Comment by GdL752 on Top lesson from GPT: we will probably destroy humanity "for the lulz" as soon as we are able. · 2023-04-17T23:44:42.790Z · LW · GW

This seems like the most obvious short-term scenario that will occur. We have doomsday cults right now, today.

Counterpoint: the once-a-century pandemic happened before now. So we can make vaccines much faster than ever thought possible, but given the... material... timeline and all the factors for virulence and debility / lethality at play with bioweapons, I'm not sure that's much comfort.

It seems like the kind of thing where we'll almost assuredly be reacting to such an event, versus whatever guardrails can be put in place.

Comment by GdL752 on Are we in an AI overhang? · 2020-07-27T18:45:12.193Z · LW · GW

Well, they already have an industry for behavioral / intent marketing; this could make it a lot better. So: taking data, using it to find correlates of a behavior in the buying process, and monetizing that. We have IoT taking off. Imagine a scenario where we have so much data being fed to a machine learning algorithm that we could type into a console, "What behaviors predict that someone will buy a home in the next 3 months?" Now imagine that its answer is pretty predictive. How much is that worth to a real estate agent?

Now apply it to literally any purchase behavior where the profit margin allows for the use of this technology (obviously more difficult in places with different data privacy laws); the machine learning algo could know you want a new pink sweater, with whatever level of accuracy, before it's even occurred to you.
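
(A minimal sketch of what such an intent model looks like, assuming a supervised classifier; the feature names and data are invented for illustration, and a real system would train on millions of behavioral event streams.)

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

# Invented behavioral features per user: [mortgage-calculator visits,
# listing-site minutes/week, moving-supply purchases, school-district searches]
X = np.array([
    [9, 340, 2, 14],
    [0,   5, 0,  0],
    [4, 120, 1,  6],
    [1,  10, 0,  1],
])
y = np.array([1, 0, 1, 0])  # bought a home within 3 months?

model = GradientBoostingClassifier().fit(X, y)

# Score a new prospect: estimated probability of buying in the next 3 months
prospect = np.array([[6, 200, 1, 9]])
print(model.predict_proba(prospect)[0, 1])
```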

As far as creative work, I'd be really curious to see how it handles comedy. Throw it in a writing room for script punch-up (and that's only until it can completely write the scripts); punch-up is where they hire comedians and comedy writers to sit around and add jokes to movies or TV shows.

I also see a lot of use in making law accessible, because it could conceivably parse through huge amounts of law and legal theory (I know it can't reason, but bear with me, even just using its current model) and spit out fairly coherent answers for laymen (maybe as a free search engine, profitable via ads for lawyers).

If we do see the imagined improvements from just giving it more computronium, we may be staring down the advent of a volitionless "almost-oracle."

I'm really excited to see what happens when you give it enough GPUs and train it on physics models.