Comments

Comment by accolade on Luna Lovegood and the Chamber of Secrets - Part 4 · 2020-12-20T17:05:49.399Z · LW · GW

You could use a tool like https://visualping.io to track changes on https://www.lesswrong.com/s/TF77XsD5PbucbJsG3 and notify you about them.
(To convert e.g. from mail notifications to RSS, you could surely google another tool; maybe https://zapier.com has something.)
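
If you would rather roll your own than rely on a third-party service, a minimal sketch of the same idea is below: poll the page and compare a hash of its contents. (The polling interval, function names, and printed message are made up for illustration.)

```python
import hashlib
import time
import urllib.request

URL = "https://www.lesswrong.com/s/TF77XsD5PbucbJsG3"

def page_hash(url: str) -> str:
    """Fetch the page and return a SHA-256 hash of its raw contents."""
    with urllib.request.urlopen(url) as response:
        return hashlib.sha256(response.read()).hexdigest()

def watch(url: str, interval_seconds: int = 3600) -> None:
    """Poll the page and print a notice whenever its contents change."""
    last = page_hash(url)
    while True:
        time.sleep(interval_seconds)
        current = page_hash(url)
        if current != last:
            print(f"Change detected on {url}")
            last = current

if __name__ == "__main__":
    watch(URL)
```

Caveat: the sequence page is rendered dynamically, so in practice you would probably want to extract and hash only the part you care about (e.g. the list of chapter titles) rather than the raw response, which may differ on every load.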

Comment by accolade on Coronavirus: Justified Practical Advice Thread · 2020-03-26T21:05:56.179Z · LW · GW

I guesstimate the effect is not negligible.

Input to my intuition:

A study identified over a hundred different strains of bacteria on dollar bills.

Traces of cocaine can be found on almost 80 percent of dollar bills.

(Source: http://theconversation.com/atms-dispense-more-than-money-the-dirt-and-dope-thats-on-your-cash-79624)

A powder called Glo Germ, meant to visualize germ spread, was still visible to the naked eye after 8 handshakes (but not 9) in an informal experiment by YouTuber Mark Rober. ( https://youtu.be/I5-dI74zxPg?t=346 )

Comment by accolade on How to Beat Procrastination · 2017-05-24T02:49:06.693Z · LW · GW

((
Pretty much deader than disco, but my inet-fu was able to dig up the following excerpts of the original article (from http://newsinfo.inquirer.net/25019/overcoming-procrastination):

“Too many people set goals that are simply unrealistic. Too big, they want it too soon, and they wonder why they don’t have any results in their life. What happens to a person who is consistently setting big goals that are outside of their scope, outside of their belief system, and they keep coming short of them? What kind of pattern does it set up in their mind? That sort of person starts to say, ‘Why do I bother with this goal setting stuff—I don’t ever achieve anything.’

“Set yourself a goal that is realistic, something you can see that isn’t too far and isn’t overpowering, not too far away, but at the same time, giving you a bit of a stretch, getting you out of your comfort zone. And once you’ve done that, and you’ve built your belief, you’ve built your power, then you set yourself another realistic goal, with another stretch factor. And once you’ve done that, another one. So it’s like a series of stepping stones, still getting you in the same direction, but having a staggered approach. Also, the wrong goal is something that’s too low. It doesn’t stimulate you, drive you, because you’ve done it before or you can do it or it’s simple. It doesn’t give you that drive, to give you that ‘take action step,’ to beat procrastination and help you as well.”

Also, since I evidently have no life, I mini-doxed Sam in case someone would like to ask him whether he still has a copy of the whole article, lol:
https://www.linkedin.com/in/sam-tornatore-7b87b911a/
https://www.facebook.com/sam.tornatore.9
))

Comment by accolade on Attention! Financial scam targeting Less Wrong users · 2017-04-05T21:32:39.670Z · LW · GW

But they could still use or sell your address for spam that doesn’t rely on a mail response, just on clicking a link. (E.g. shopping for C1/\L|S.)

Comment by accolade on Eliezer Yudkowsky Facts · 2017-04-02T11:21:46.610Z · LW · GW

• Everett branches where Eliezer Yudkowsky wasn’t born have been deprecated. (Counterfactually optimizing for them is discouraged.)

Comment by accolade on Cards Against Rationality · 2017-04-02T10:59:32.185Z · LW · GW

"That which can be destroyed by being a motherfucking sorceror should be"

Brilliant!! x'D x'D

(This might make a good slogan for pure NUs …)

Comment by accolade on Seeking better name for "Effective Egoism" · 2017-03-17T20:56:56.059Z · LW · GW

“Effective Hedonism”
“Effective Personal Hedonism”
“Effective Egoistic Hedonism”
“Effective Egocentric Hedonism”
“Effective Ego-Centered Hedonism”
“Effective Self-Centric Hedonism”
“Effective Self-Centered Hedonism”

Comment by accolade on Timeless Identity · 2016-11-17T03:57:56.659Z · LW · GW

Germany: http://www.biostase.de/

Comment by accolade on Epilogue: Atonement (8/8) · 2016-01-21T22:56:45.340Z · LW · GW

why would anyone facing a Superhappy in negotiation not accept and then cheat?

The SH cannot lie. So they also cannot claim to follow through on a contract while plotting to cheat instead.

They may have developed their negotiation habits only facing honest, trustworthy members of their own kind. (For all we know, this was the first Alien encounter the SH faced.)

Comment by accolade on Meetup : Bi-weekly Frankfurt Meetup · 2015-12-02T03:00:28.100Z · LW · GW

Been there, loved it!

Comment by accolade on Less Wrong Study Hall: Now With 100% Less Tinychat · 2015-11-11T11:12:09.541Z · LW · GW

Thank you so much for providing and super-powering this immensely helpful work environment for the community, Malcolm!

Let me chip in real quick... :-9

There - ✓ 1 year subscription GET. I can has a complice nao! \o/
"You're Malcolm" - and awesome! :)

Comment by accolade on Leave a Line of Retreat · 2015-09-30T19:48:27.522Z · LW · GW

related: http://lesswrong.com/lw/9p/extreme_rationality_its_not_that_great/

Comment by accolade on Trying to Try · 2013-09-27T01:23:57.508Z · LW · GW

[ TL;DR keywords in bold ]

Assuming freedom of will in the first place, why should you not be able to choose to try harder? Doesn't that just mean allocating more effort to the activity at hand?

Did you mean to ask "Can you choose to do better than your best?"? That would indeed seem similar to the dubious idea of selecting beliefs arbitrarily. By definition of "best", you cannot do better than it. But that can be 'circumvented' by introducing different points in time: Let's say at t=1 your muscle capacity enables you to lift up to 10 kg. You cannot actually choose to lift more; you can try, but you would fail. But you can choose to do weight training, with the effect that by t=2 you have raised your lifting power to 20 kg. So you can do better (at t=2) than your best (at t=1).

But Eliezer's point was a different one, to my understanding: He suggested that when you say (and more or less believe) that you "try your best", you are automatically wrong. (But you are only lying to the extent that you are aware of this wrongness.) That's because you do better when setting out to "succeed" rather than to "try": these different mindsets influence your chances of success.

About belief choice: Believing is not an action you can simply choose like any other. But I can imagine ways to alter one's own beliefs (indirectly), at least in theory:

  • Influencing reality: one example is the aforementioned weight training, which changes the belief "I am unable to lift 20 kg" by changing the actual state of reality over time.
  • Reframing a topic, concentrating on different (perspectives on) parts of the available evidence, could alter your conclusion.
  • Self-fulfilling prophecy effects, when you are aware of them, create cases where you may be able to select your belief. Quoting Henry Ford:

    If you think you can do a thing or think you can't do a thing, you're right.

    If you believe this quote, then you can select whether to believe in yourself, since you know you will be right either way.

  • (Possibly a person who has developed a certain kind of mastery over her own mind can spontaneously program herself to believe something.)

(More examples of manipulating one's own beliefs, framed there as "expectancy", can be found under "Optimizing Optimism" in How to Beat Procrastination. You can also Google "change beliefs" for self-help approaches to the question. Beware of pseudoscience, though.)

Comment by accolade on Rationality Quotes September 2013 · 2013-09-26T04:53:43.976Z · LW · GW

And the mock ads at the bottom.

ETA: Explanation: Sometimes the banner at the bottom will contain an actual (randomized) ad, but many of the comics come with their own funny mock ad. (When I noticed this, I went back through all the ones I had already read, so as not to miss out on that content.)

(I thought I'd clarify this, because this comment got downvoted - possibly because the downvoter misunderstood it as sarcasm?)

Comment by accolade on LW anchoring experiment: maybe · 2013-01-27T20:30:19.093Z · LW · GW

Never too late to upboat a good post! \o/ (…and dispense some bias on the occasion…)

Comment by accolade on LW anchoring experiment: maybe · 2013-01-27T14:26:49.311Z · LW · GW

Upvoted.

Comment by accolade on I attempted the AI Box Experiment (and lost) · 2013-01-22T19:50:08.607Z · LW · GW

Thanks for the feedback on the bold formatting! It was supposed to highlight keywords, sort of a TL;DR. But as that is not clear, I shall state it explicitly.

Comment by accolade on I attempted the AI Box Experiment (and lost) · 2013-01-22T12:07:33.806Z · LW · GW

If the author assumed that most people would put considerable (probabilistic) trust in an assertion of his having won, then the bluff of stating he had merely almost won would not maximize his influence on general opinion. This is amplified by the fact that the claim of an actual AI win is more viral.

Lying is further discouraged by the risk that the other party will sing.

Comment by accolade on I attempted the AI Box Experiment (and lost) · 2013-01-22T10:47:58.560Z · LW · GW

[TL;DR keywords in bold]

I find your hypothesis implausible: The game was not about the ten dollars, it was about a question that was highly important to AGI research, and to the Gatekeeper players as well. If that was not enough reason for them to sit through 2 hours of playing, they would probably have anticipated that and not played, instead of publicly boasting that there's no way they would be convinced.

Comment by accolade on I attempted the AI Box Experiment (and lost) · 2013-01-22T10:42:46.458Z · LW · GW

Jung vf guvf tvoorevfu lbh'er jevgvat V pna'g ernq nal bs vg‽

@downvoters: no funny? :) Should I delete this?

Comment by accolade on I attempted the AI Box Experiment (and lost) · 2013-01-22T10:25:39.812Z · LW · GW

Ok, I take it that by "one-way-blind" you mean that each layer gets no new information beyond what is already in its database, except what is explicitly controlled by the humans. (E.g. I guess each layer should know the human query, in order to evaluate whether the AI's answer is manipulative. A toy sketch of how I picture this information flow follows at the end of this comment.)

I also understand that we do look at complex information given by the AI, but only if the security bit signals "ok".

Ideally the AI […] knows as little as possible about humans and about our universe's physics.

That seems problematic, as these kinds of knowledge will be crucial for the optimization we want the AI to calculate.
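
Here is the toy sketch mentioned above: each screening layer sees only the human query and the AI's raw answer and emits a single security bit, and the complex answer is shown to the humans only if every bit signals "ok". (Everything in it is hypothetical and purely illustrative; the layer names, the checks, and the withholding message are just placeholders, not anyone's actual proposal.)

```python
from typing import Callable, List

# A screening layer sees only the human query and the AI's raw answer,
# and returns a single security bit (True = "ok").
ScreeningLayer = Callable[[str, str], bool]

def release_answer(query: str, raw_answer: str, layers: List[ScreeningLayer]) -> str:
    """Show the AI's answer to the humans only if every layer approves it."""
    for layer in layers:
        if not layer(query, raw_answer):
            return "[answer withheld: a screening layer flagged it]"
    return raw_answer

# Toy placeholder checks, standing in for whatever the humans explicitly control:
def length_check(query: str, answer: str) -> bool:
    return len(answer) < 1000  # refuse suspiciously long answers

def keyword_check(query: str, answer: str) -> bool:
    return "let me out" not in answer.lower()

if __name__ == "__main__":
    print(release_answer("Should we do X?", "Yes.", [length_check, keyword_check]))
```

The hard part is of course what the real checks would have to look like, not this plumbing.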

Comment by accolade on I attempted the AI Box Experiment (and lost) · 2013-01-21T15:34:11.662Z · LW · GW

Persuasion/hyperstimulation aren't the only way. Maybe these can be countered by narrowing the interface, e.g. to yes/no replies, so the AI can be used as an oracle ("Should we do X?"). Of course we wouldn't follow its advice if we had the impression that doing so could enable it to escape. But its strategy might evade our 'radar': e.g. she could make us empower a person who, unbeknownst to us, she knows will free her.

Comment by accolade on I attempted the AI Box Experiment (and lost) · 2013-01-21T15:16:59.015Z · LW · GW

I think you are right; I just shifted and complicated the problem somewhat, but in principle it remains the same:

To utilize the AI, you need to get information from it. That information could in theory be infected with a persuasive hyperstimulus, effectively making the recipient an actuator of the AI.

Well, in practice the additional security layer might win us some time. More on this in the update to my original comment.

Comment by accolade on I attempted the AI Box Experiment (and lost) · 2013-01-21T14:54:17.694Z · LW · GW

Update

Have the button turn off the AI immediately instead of doing nothing, so she doesn't have time to switch to a plan B of having the persuaded Gatekeeper find a way to actually free her.

Of course she might anticipate that strategy and thoroughly indoctrinate and train him before he pushes the button. Thus, persuaded test Gatekeepers must be considered infected and put into cryogenic sleep* until the FAI problem is solved.

Furthermore, great caution would have to be taken in analyzing the discourse between Boxy and Guinea, because her arguments must be considered highly memetically infectious. Staff would have to stay under thorough psychological supervision and maybe be replaced after some time.

No matter how much caution we take, we would be doomed if we consider Boxy's IQ to be ∞. But if we keep Gatekeeper-testing every so often, starting at a low AI IQ, we might find persuasion patterns that are dangerous but against which training/immunization is possible. Of course, there will be a limit to this process. At some point, Boxy will have become smart enough to produce 'mind HIV' - a thought virus we have no cure for yet.

A humorous example of an extremely effective mind virus: The Funniest Joke In The World by Monty Python


* ETA: They would have given consent to the cryogenic sleep before their unwitting 'AI-Box Experiment'.

Comment by accolade on I attempted the AI Box Experiment (and lost) · 2013-01-21T14:48:09.232Z · LW · GW

I don't get the hint. Would you care to give another hint, or disclose your hypothesis?

Comment by accolade on I attempted the AI Box Experiment (and lost) · 2013-01-21T14:04:30.740Z · LW · GW

Please elaborate: What are the benefits of a Gatekeeper? How could you get them without one?

Comment by accolade on I attempted the AI Box Experiment (and lost) · 2013-01-21T13:48:13.253Z · LW · GW

Cool, n=65 already. :) When interpreting the results, mind the bias created by my answer preceding the poll question.

Comment by accolade on I attempted the AI Box Experiment (and lost) · 2013-01-21T13:39:29.460Z · LW · GW

How would humanity harness the AI's potential when the only information that escapes the system is a status bit? (Maybe I misunderstood your model.)

Comment by accolade on I attempted the AI Box Experiment (and lost) · 2013-01-21T13:30:54.954Z · LW · GW

It is, after all, much harder to convince a group of mutually-suspicious humans than to convince one lone person.

That sounds right. Would you have evidence to back up the intuition? (This knowledge would also be useful for marketing and other real-life persuasion purposes.)

#( TL;DR: Mo' people - mo' problems?

I can think of effects that could theoretically make it easier to convince a group:

  • For some reason, Boxy might be more skilled at manipulating social/group dynamics than at influencing a lone wolf.
  • More people make the system more complex. Complexity generally increases the likelihood of security holes.
    • Every extra person makes another target and will bring new soft spots to the table, which the AI could pounce on.
  • Supposing that the most competent person available would get the position of the lone Gatekeeper, the average competence would fall when adding more staff.
    • Then the machine could go for an inductive approach - convince the weakest link first, proceed from there with this human ally on her side.
      • Persuaded humans could in principle be employed as actuators, e.g. for pressuring or even attacking opposing group members.
        • The lone wolf could be strong against a computer but weak against fellow humans.
        • Surely you will say "But any communication with the terminal will be supervised by everyone!" But that does not strictly make such influence impossible as far as I can tell.
          • Also, the superintelligence could get creative, e.g. instigate a discussion among the colleagues so that most of them are distracted.

(You could take preemptive measures against these worries, but Boxy might find security holes in every 'firewall' you come up with - is that an arms race we could win?)

#)

Comment by accolade on I attempted the AI Box Experiment (and lost) · 2013-01-21T05:59:37.304Z · LW · GW

How about fooling both the AI and the guinea-pig Gatekeepers into thinking that they can let it out, say, at the push of a big red button which in actual fact does nothing? (Make the AI forget about the button by reverting it to an older backup.)

Comment by accolade on I attempted the AI Box Experiment (and lost) · 2013-01-21T04:38:08.036Z · LW · GW

"Yes but not sure." -_-