Killing Moloch with "Nudge tech"?

post by nathanlippi · 2021-03-11T20:28:58.957Z · LW · GW · 5 comments


Context

I come from the SaaS startup world and I intend to help kill some inadequate equilibria in the next few years, specifically around the attention economy.

I'll outline what I'm thinking below, and I'd appreciate any help from the LW community:

Specific questions I hope to answer:

 

Project Description

"Turn tech from addicting to revitalizing, with nudge tech"

A continually improving "nudge tech" that digitally overlays computers and smartphones, gently improving quality of life for the individuals and populations who opt in.

It will be continually improved by open-source plugin contributions; plugins will be tested automatically and successful ones will be rolled out to those who opt in.
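
To make that pipeline concrete, here is a minimal sketch (in Python) of what the test-and-rollout loop could look like. Everything in it -- the NudgePlugin shape, the well-being metric, the rollout threshold -- is a hypothetical illustration, not an existing API:

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class NudgePlugin:
    """A contributed plugin: transforms a user's context to apply its nudge."""
    name: str
    apply_nudge: Callable[[Dict], Dict]

def evaluate(plugin: NudgePlugin, trial_users: List[Dict],
             metric: Callable[[Dict], float]) -> float:
    """Average change in a well-being metric across an opt-in trial cohort."""
    deltas = [metric(plugin.apply_nudge(u)) - metric(u) for u in trial_users]
    return sum(deltas) / len(deltas)

def select_for_rollout(plugins: List[NudgePlugin], trial_users: List[Dict],
                       metric: Callable[[Dict], float],
                       min_effect: float = 0.0) -> List[NudgePlugin]:
    """Keep only plugins whose measured effect beats the rollout threshold."""
    return [p for p in plugins if evaluate(p, trial_users, metric) > min_effect]
```

The design choice this sketch tries to capture: rollout is gated on a measured well-being metric, not on engagement.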

 

Nudge tech - definition

Nudge tech is technology that changes the probability of specific human behaviors by modifying one or more of the following:

 

Nudge tech is used by companies like Facebook and Netflix to keep users engaged and thereby make money. But the same techniques can be used to defend against exploitation by those companies.
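
To make the defensive direction concrete, here is a minimal, hypothetical sketch of one such nudge: adding friction (a pause plus a confirmation prompt) before a distracting app opens. The app names and prompt text are invented for illustration:

```python
import time

# User-flagged apps; the names here are made up for illustration.
FLAGGED_APPS = {"social_feed", "short_video"}

def launch_with_friction(app: str, delay_s: float = 3.0) -> bool:
    """Return True if the app should open, False if the user disengages."""
    if app not in FLAGGED_APPS:
        return True
    time.sleep(delay_s)            # the deliberate pause is itself a nudge
    answer = input(f"Open {app} anyway? You set a goal to cut back. [y/N] ")
    return answer.strip().lower() == "y"
```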

 

That's it! I'm looking forward to your thoughts.

5 comments

Comments sorted by top scores.

comment by Bastiaan · 2021-03-12T06:49:01.446Z · LW(p) · GW(p)

Interesting idea, but sparse on how you'd actually achieve this. What is your vision for what an MVP would actually do?

And, if you succeed, what stops this from becoming evil after all? Writing "Don't be evil" helps, but it's not enough.

How are you going to make money off of this? Without money there is a real risk of the project slowly dying out.

Good luck!

Replies from: nathanlippi, nathanlippi
comment by nathanlippi · 2021-03-12T20:04:40.528Z · LW(p) · GW(p)

As for "Don't Be Evil" -- this is something I am concerned about.

  1. Methods of monetization must be as closely aligned with positive outcomes for the end user as possible. From what I hear, Vanguard is a great model for doing this well. I haven't yet studied the specifics.
  2. There must be a moat that prevents less scrupulous companies from growing faster with a copycat product. One method would be to be donation-driven or government-funded. Another would be solid branding that educates users about why other monetization models aren't a good idea. This last one feels weaker, though.

Thoughts on any of this are welcome!

comment by nathanlippi · 2021-03-12T19:55:11.751Z · LW(p) · GW(p)

Thank you for the questions and feedback, Bastiaan!

I'll answer your questions about the MVP and money together; here is an as-yet-untested problem to solve:

People are spending hours per day on their phones and falling asleep with them; this disrupts their sleep, productivity, and relationships.

I haven't yet looked closely at how to solve this, but some approaches might combine the following (a rough sketch of the first one follows the list):

  • Disrupting push notifications from offending apps
  • Sending counter-push-notifications to disrupt people in flow on the offending apps
  • Selectively hiding or de-colorizing the most offending app icons
  • Gamifying or otherwise disincentivizing high phone engagement (e.g. opt-in monetary penalty, some sort of social/addiction score)
  • Teaching people skills to disengage from phones using CBT
  • Accountability partners who can see the other's engagement
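
As promised above, here is a rough sketch of the first bullet: suppressing notifications from offending apps, plus a protected sleep window. The app names, sleep window, and rules are assumptions for illustration, not a worked-out design:

```python
from dataclasses import dataclass
from datetime import datetime, time

@dataclass
class Notification:
    app: str
    arrived: datetime

OFFENDING_APPS = {"social_feed", "short_video"}   # chosen by the user, opt-in
SLEEP_START, SLEEP_END = time(22, 0), time(7, 0)  # protected sleep window

def in_sleep_window(t: datetime) -> bool:
    tt = t.time()
    return tt >= SLEEP_START or tt <= SLEEP_END   # window wraps past midnight

def should_suppress(n: Notification) -> bool:
    """Suppress flagged apps always; suppress everything during sleep hours."""
    return n.app in OFFENDING_APPS or in_sleep_window(n.arrived)
```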

 

Not only does this need to be proven to work, but it also needs to be self-funding, as you mentioned.

I'll have to do some research first, but I assume there are opportunities for this to be self-funded -- people do pay money for diet apps, exercise apps, and certain types of specialized phone alarms.

comment by Dagon · 2021-03-12T18:17:53.583Z · LW(p) · GW(p)

Specific questions I hope to answer:

Is that a typo?  Those are in no way specific.  A specific question would be to fill in the mad-lib of "I notice a common X behavior in Y situation, and I hypothesize that Z would interrupt the thoughtless process and lead to a different equilibrium".  What parts of this should I test first, and how?

Replies from: nathanlippi
comment by nathanlippi · 2021-03-12T20:17:15.947Z · LW(p) · GW(p)

Thank you for the comment, Dagon :).

I was/am looking for feedback at a high level: I want to use "nudge tech" to influence the behavior of large groups of people, and I'm wondering where large projects like this tend to fail. One example: people could become suspicious that their data is being collected if it isn't properly anonymized or provably kept on their phones.
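
For illustration, here is a minimal sketch of the kind of on-device aggregation that could address that worry: raw events stay on the phone, and only identifier-free counts are ever shared. The event schema and summary fields are assumptions, not a spec:

```python
from collections import Counter
from typing import Dict, List

def summarize_on_device(events: List[Dict]) -> Dict:
    """Reduce raw usage events to identifier-free daily counts.

    Raw events never leave the phone; only this summary is shared,
    and only if the user has opted in.
    """
    opens_by_category = Counter(e["category"] for e in events)  # e.g. "social"
    return {
        "total_opens": len(events),
        "opens_by_category": dict(opens_by_category),
        # deliberately omitted: app names, timestamps, device or user IDs
    }
```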

That said, most people I've talked with over the last few days are hungry for specific examples. I haven't done enough customer research yet to be sure, but I've shared a somewhat more specific example in my reply to Bastiaan!