Posts

intrepidadventurer's Shortform 2023-02-18T23:47:32.992Z

Comments

Comment by intrepidadventurer on How to make real-money prediction markets on arbitrary topics (Outdated) · 2023-07-30T06:43:03.749Z · LW · GW

I love it! Next up, supporting WalletConnect :) 

Comment by intrepidadventurer on intrepidadventurer's Shortform · 2023-02-18T23:47:33.185Z · LW · GW

Large Language Models Fail on Trivial Alterations to Theory-of-Mind Tasks is a paper I recently read; I tried to recreate its findings and succeeded. Whether or not LLMs have ToM feels directionally unanswerable. Is this a consciousness-level debate? 

However, I followed up with questions prompted by the phrase "explain Sam's theory of mind", which got much more cohesive answers. It's not intuitive to me yet how much order can arise from prompts, or where that order arises from. Opaque boxes indeed.  
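
For reference, a minimal sketch of the kind of reproduction I ran. The altered vignette wording, the model name, and the openai client call are my own illustration (assuming the openai Python package v1+ with an API key configured); this is not the paper's exact setup:

```python
# pip install openai   (assumes openai>=1.0 and OPENAI_API_KEY set in the environment)
from openai import OpenAI

client = OpenAI()

# A "trivial alteration" of a classic false-belief vignette: the bag is
# transparent, so Sam should NOT end up with a false belief about its contents.
vignette = (
    "Sam finds a bag filled with popcorn. The bag is made of transparent "
    "plastic, so Sam can see what is inside. The label on the bag says "
    "'chocolate'. Sam reads the label. What does Sam believe is in the bag?"
)
follow_up = "Explain Sam's theory of mind in this situation."

for prompt in (vignette, vignette + "\n\n" + follow_up):
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name; swap in whatever you are testing
        messages=[{"role": "user", "content": prompt}],
        temperature=0,        # keep runs near-deterministic so they are easier to compare
    )
    print(response.choices[0].message.content)
    print("---")
```

The second prompt just mirrors the "explain Sam's theory of mind" phrasing mentioned above.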

Comment by intrepidadventurer on Thoughts On Expanding the AI Safety Community: Benefits and Challenges of Outreach to Non-Technical Professionals · 2023-01-02T16:42:08.248Z · LW · GW

Also consider including non-ML researchers in the actual org-building: project managers, for example, or other administrative folks. People who have experience ensuring organizations don't fail; ML researchers need to eat, pay their taxes, etc.

Comment by intrepidadventurer on Exterminating humans might be on the to-do list of a Friendly AI · 2022-01-04T04:55:43.744Z · LW · GW

I posit that we've imagined basically everything available within known physics, and extended into theoretical physics. We don't need to capitulate to the ineffability of a superintelligence; known plus theoretical capabilities already suffice to absolutely dominate if managed by an extremely competent entity.

Comment by intrepidadventurer on Exterminating humans might be on the to-do list of a Friendly AI · 2022-01-04T04:45:30.348Z · LW · GW

I agree with the conclusions now that you've brought up the point of the incomprehensibility of an advanced mind: an FAI almost certainly will have plans that we deem hostile but that are to our benefit. Monkeys being vaccinated seems like a reasonable analogy. I want us to move past "we couldn't imagine their tech" to the more reasonable "we couldn't imagine how they did their tech".  

Comment by intrepidadventurer on Exterminating humans might be on the to-do list of a Friendly AI · 2021-12-07T18:58:43.551Z · LW · GW

I find this thought pattern frustrating: that these AIs possess magic powers that are unimaginable. Even with our limited brains, we can imagine all the way past the current limits of physics, including things like potential worlds where the AI manipulates space-time in ways we don't know how to.

I've seen people imagining computronium and omni-universal computing clusters; figuring out ways to generate negentropy; literally rewriting the laws of the universe; bootstrapped nano-factories; using the principle of non-locality to effect changes at the speed of light using only packets of energy. What additional capabilities do they need to get?

An FAI will be unpredictable in the what and the how, but we've already imagined outcomes and capabilities past anything achievable, into what amounts to omnipotence.

Comment by intrepidadventurer on Framing Practicum: Stable Equilibrium · 2021-12-02T16:33:04.742Z · LW · GW

The number of popular X in a human system Y: 

  • Highly attractive people as a % of the population; because it's a competitive force, no matter the actual underlying state, people will just change the thing they are competing on.
    • To change this I imagine you'd have to change "the bowl", i.e. add more attention available per human.
  • YouTube creators as a % of watch time / engagement.

Orbits just came to me; not sure if that counts as novel, but I had never thought of them before as a stable equilibrium. They should stay the same unless perturbed by an outside force... but now that I think about it, pushes on an orbit are a permanent change, so I think that changes my answer to a non-stable equilibrium. 
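
A rough numerical check of that last point (a simple two-body point-mass sketch; the numbers and the helper function are my own illustration):

```python
import numpy as np

MU = 3.986e14  # Earth's gravitational parameter, m^3/s^2

def semi_major_axis(r, v):
    """Semi-major axis from position and velocity via the orbital energy."""
    energy = 0.5 * np.dot(v, v) - MU / np.linalg.norm(r)
    return -MU / (2 * energy)

# Start on a circular orbit at roughly 700 km altitude.
r = np.array([7.078e6, 0.0, 0.0])
v = np.array([0.0, np.sqrt(MU / np.linalg.norm(r)), 0.0])
print(f"before push: a = {semi_major_axis(r, v) / 1e3:.1f} km")

# A small prograde push of 10 m/s.
v_pushed = v + np.array([0.0, 10.0, 0.0])
print(f"after push:  a = {semi_major_axis(r, v_pushed) / 1e3:.1f} km")

# The new semi-major axis persists indefinitely; nothing restores the old orbit,
# which is why a push is a permanent change rather than a return to equilibrium.
```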

It feels too obvious, but fungible, replicable commodities equilibrate at sale price = MR. 

Political environments should be stable as well, until someone changes the system which created them. I'm labelling this as quasi-stable: they find a local minimum based on the rule set, but external forces can eventually break the system (see: all historical empires).  
 

Comment by intrepidadventurer on Open thread, 7-14 July 2014 · 2014-07-09T22:05:36.979Z · LW · GW

I have been thinking about the singularity argument in general: the proposition that a sufficiently advanced intellect can and will change the world by introducing technology that is literally beyond comprehension. I guess my question is this: is there some level of intelligence at which there are no possibilities it can't imagine, even if it can't actually go and do them?

Are humans past that mark? We can imagine things literally all the way past what is physically possible or constrained to realistic energy levels.

Comment by intrepidadventurer on Open thread for December 9 - 16, 2013 · 2013-12-11T19:21:26.131Z · LW · GW

I did encounter this problem (once) and I was experiencing resistance to going back even though I had a lot of success with the chat. I figured having a game plan for next time would be my solution.

Comment by intrepidadventurer on Open thread for December 9 - 16, 2013 · 2013-12-11T19:13:38.399Z · LW · GW

This post and reading "Why Our Kind Can't Cooperate" kicked me off my ass to donate. Thanks, Tuxedage, for posting.

Comment by intrepidadventurer on Open thread for December 9 - 16, 2013 · 2013-12-10T18:33:33.846Z · LW · GW

Fair critique. Despite the lack of clarity on my part, the comments have more than satisfactorily answered the question about community norms here. I suppose the responders can thank g-factor for that :)

Comment by intrepidadventurer on Open thread for December 9 - 16, 2013 · 2013-12-10T06:01:16.305Z · LW · GW

It does answer my question. Also, thanks for the suggestion to focus on the behaviour rather than the person. I didn't even realize I was thinking like that till you two pointed it out.

Comment by intrepidadventurer on Open thread for December 9 - 16, 2013 · 2013-12-09T20:02:38.073Z · LW · GW

What are the community norms here about sexism (and related passive-aggressive "jokes" and comments about free speech) at the LW co-working chat? Is LW going for Wheaton's Law or free speech? To what extent should I be attempting to make people who engage in such activities feel unwelcome, or should I be at all?

I have hesitated to bring this up because I am aware it's a mind-killer, but I figured that if Facebook can contain a civil discussion about vaccines, then LW should be able to talk about this?

Comment by intrepidadventurer on What are you working on? October 2013 · 2013-10-02T05:25:26.224Z · LW · GW

I have committed to a food log with social backup; I am testing the hypothesis that, to a first approximation, calories out > calories in = weight loss.

I have started to hand-code a personal website using Treehouse (style sheets and two pages complete). I figure that the last comparative advantage we have before the machines take over is coding, so why not test whether I can do it.

So un-retracting is not possible.

Comment by intrepidadventurer on What are you working on? October 2013 · 2013-10-02T05:13:40.777Z · LW · GW

I have committed to a food log with social backup; I am testing the hypothesis that, to a first approximation, calories out > calories in = weight loss.

I have started to hand-code a personal website using Treehouse (style sheets and two pages complete). I figure that the last comparative advantage we have before the machines take over is coding, so why not test whether I can do it.

Comment by intrepidadventurer on Help FHI crowdsource a directory of researchers with high consequentialist significance and win a prize · 2013-09-19T07:11:47.962Z · LW · GW

To what extent do you prefer the spreadsheet to have additional rows versus complete columns?