Posts

Technological unemployment as another test for rationalist winning 2023-05-02T04:16:46.614Z
What's the deal with Effective Accelerationism (e/acc)? 2023-04-06T04:03:19.392Z
What are some ideas that LessWrong has reinvented? 2023-03-14T22:27:04.199Z
Medlife Crisis: "Why Do People Keep Falling For Things That Don't Work?" 2023-02-21T06:22:23.608Z
Avoid large group discussions in your social events 2023-02-15T21:05:58.512Z
Tools for finding information on the internet 2023-02-09T17:05:28.770Z
The 2/3 rule for multi-factor authentication 2023-02-04T02:57:20.487Z
RomanHauksson's Shortform 2023-01-30T01:22:58.532Z
What are your thoughts on the future of AI-assisted software development? 2022-12-09T10:04:55.473Z

Comments

Comment by RomanHauksson (r) on Would you have a baby in 2024? · 2023-12-25T06:51:57.899Z · LW · GW

Having kids does mean less time to help AI go well, so maybe it’s not such a good idea if you’re one of the people doing alignment work.

Comment by RomanHauksson (r) on Monthly Roundup #13: December 2023 · 2023-12-20T04:11:11.414Z · LW · GW

I love how it has proven essentially impossible to, even with essentially unlimited power, rig a vote in a non-obvious way. I am not saying it never happens deniably, and you may not like it, but this is what peak rigged election somehow always seems to actually look like.

(Maybe I misunderstood, but isn’t this only weak evidence that non-obviously rigging an election is essentially impossible, since you wouldn’t notice the non-obvious examples?)

Comment by RomanHauksson (r) on Upgrading the AI Safety Community · 2023-12-16T23:32:27.543Z · LW · GW

Are there any organizations or research groups that are specifically working on improving the effectiveness of the alignment research community? E.g.

  • Reviewing the literature on intellectual progress, metascience, and social epistemology and applying the resulting insights to this community
  • Funding the development of experimental “epistemology software”, like Arbital or Mathopedia

Comment by RomanHauksson (r) on Moral Mountains · 2023-12-14T18:04:26.137Z · LW · GW

I'll end with this thought: I think you can probably use these ideas of moral weights and moral mountains to quantify how altruistic someone is.

Maybe “altruistic” isn’t the right word. Someone who spends every weekend volunteering at the local homeless shelter out of a duty to help the needy in their community but doesn’t feel any specific obligation towards the poor in other areas is certainly very altruistic. The amount that one does to help those in their circle of consideration seems to be a better fit for most uses of the word altruism.

How about “morally inclusive”?

Comment by RomanHauksson (r) on Red Line Ashmont Train is Now Approaching · 2023-12-14T08:58:11.693Z · LW · GW

I would find this deeply frustrating. Glad they fixed it!

Comment by RomanHauksson (r) on What are your thoughts on the future of AI-assisted software development? · 2023-12-08T06:27:59.698Z · LW · GW

One year later, what do you think about the field now?

Comment by RomanHauksson (r) on How do you feel about LessWrong these days? [Open feedback thread] · 2023-12-05T22:56:30.747Z · LW · GW

I’m a huge fan of agree/disagree voting. I think it’s an excellent example of a social media feature that nudges users towards truth, and I’d be excited to see more features like it.

Comment by RomanHauksson (r) on How do you feel about LessWrong these days? [Open feedback thread] · 2023-12-05T22:54:43.347Z · LW · GW

(low confidence, low context, just an intuition)

I feel as though the LessWrong team should experiment with even more new features, treating the project of maintaining a platform for collective truth-seeking like a tech startup. The design space for such a platform is huge (especially as LLMs get better).

From my understanding, the strategy that startups use to navigate huge design spaces is “iterate on features quickly and observe objective measures of feedback”, which I suspect LessWrong should lean into more. Although, I imagine creating better truth-seeking infrastructure doesn’t have as good of a feedback signal as “acquire more paying users” or “get another round of VC funding”.

Comment by RomanHauksson (r) on Update #2 to "Dominant Assurance Contract Platform": EnsureDone · 2023-11-28T20:16:09.628Z · LW · GW

This is really exciting. I’m surprised you’re the first person to spearhead a platform like this. Thank you!

I wonder if you could use a dominant assurance contract to raise money for retroactive public goods funding.

Comment by RomanHauksson (r) on Help to find a blog I don't remember the name of · 2023-11-24T01:39:52.437Z · LW · GW

Is it any of the results from this Metaphor search?

Comment by RomanHauksson (r) on OpenAI Staff (including Sutskever) Threaten to Quit Unless Board Resigns · 2023-11-20T19:47:45.531Z · LW · GW

A research team's ability to design a robust corporate structure doesn't necessarily predict their ability to solve a hard technical problem. Maybe there's some overlap, but machine learning and philosophy are different fields than business. Also, I suspect that the people doing the AI alignment research at OpenAI are not the same people who designed the corporate structure (but this might be wrong).

Welcome to LessWrong! Sorry for the harsh greeting. Standards of discourse here are higher than in other places on the internet, so quips usually aren't well-tolerated (even if they have some element of truth).

Comment by RomanHauksson (r) on It's OK to be biased towards humans · 2023-11-20T11:40:48.767Z · LW · GW

I mean, is the implication that this would instead be good if phenomenological consciousness did come with intelligence?

This was just an arbitrary example to demonstrate the more general idea that it’s possible we could make the wrong assumption about what makes humans valuable. Even if we discover that consciousness comes with intelligence, maybe there’s something else entirely that we’re missing which is necessary for a being to be morally valuable.

I don't want "humanism" to be taken too strictly, but I honestly think that anything that is worth passing the torch to wouldn't require us passing any torch at all and could just coexist with us…

I agree with this sentiment! Even though I’m open to the possibility of non-humans populating the universe instead of humans, I think it’s a better strategy for both practical and moral uncertainty reasons to make the transition peacefully and voluntarily.

Comment by RomanHauksson (r) on It's OK to be biased towards humans · 2023-11-12T19:11:41.994Z · LW · GW

I think the risk of human society being superseded by an AI society which is less valuable in some way shouldn't be guarded against by a blind preference for humans. Instead, we should maintain a high level of uncertainty about what it is that we value about humanity and slowly and cautiously transition to a posthuman society.

"Preferring humans just because they're humans" or "letting us be selfish" does prevent the risk of prematurely declaring that we've figured out what makes a being morally valuable and handing over society's steering wheel to AI agents that, upon further reflection, aren't actually morally valuable.

For example, say some AGI researcher believes that intelligence is the property which determines the worth of a being and blindly unleashes a superintelligent AI into the world because they believe that whatever it does with society is definitionally good, simply based on the fact that the AI system is more intelligent than us. But then maybe it turns out that phenomenological consciousness doesn't necessarily come with intelligence, and they accidentally wiped out all value from this world and replaced it with inanimate automatons that, while intelligent, don't actually experience the world they've created.

Having an ideological allegiance to humanism and a strict rejection of non-humans running the world even if we think they might deserve to would prevent this catastrophe. But I think that a posthuman utopia is ultimately something we should strive for. Eventually, we should pass the torch to beings which exemplify the human traits we like (consciousness, love, intelligence, art) and exclude those we don't (selfishness, suffering, irrationality).

So instead of blind humanism, we should be biologically conservative until we know more about ethics, consciousness, intelligence, et cetera and can pass the torch in confidence. We can afford millions of years to get this right. Humanism is arbitrary in principle and isn't the best way to prevent a valueless posthuman society.

Comment by RomanHauksson (r) on Can a stupid person become intelligent? · 2023-11-09T00:57:37.029Z · LW · GW

Others have provided sound general advice that I agree with, but I’ll also throw in the suggestion of piracetam for a nootropic with non-temporary effects.

Comment by RomanHauksson (r) on What's the deal with Effective Accelerationism (e/acc)? · 2023-10-27T05:21:02.296Z · LW · GW

7 months later, from Business Insider: Silicon Valley elites are pushing a controversial new philosophy.

Comment by r on [deleted post] 2023-10-26T04:06:56.378Z

I've also been thinking a lot about this recently and haven't seen any explicit discussion of it. It's the reason I recently began going through BlueDot Impact's AI Governance course.

A couple questions, if you happen to know:

  • Is there anywhere else I can find discussion about what the transition to a post-superhuman-level-AI society might look like, on an object level? I also saw the FLI Worldbuilding Contest.
  • What are the implications of this for career choice, for an early-career EA trying to make this transition go well?

Comment by RomanHauksson (r) on RomanHauksson's Shortform · 2023-10-22T21:42:15.309Z · LW · GW

https://www.astralcodexten.com/p/ro-mantic-monday-21323

Comment by RomanHauksson (r) on RomanHauksson's Shortform · 2023-10-22T19:58:12.768Z · LW · GW

Manifold.love is in alpha, and the MVP should be released in the next week or so. On this platform, people can bet on the odds that pairs of users will enter a relationship lasting at least six months.

Comment by RomanHauksson (r) on Leveraging Bayes' Theorem to Supercharge Memory Techniques · 2023-10-09T06:39:59.199Z · LW · GW

I suspect this was written by ChatGPT. It doesn’t say anything meaningful about applying Bayes’ theorem to memory techniques.

Comment by RomanHauksson (r) on RomanHauksson's Shortform · 2023-10-05T22:25:08.669Z · LW · GW

Microsolidarity

Microsolidarity is a community-building practice. We're weaving the social fabric that underpins shared infrastructure.

The first objective of microsolidarity is to create structures for belonging. We are stitching new kinship networks to shift us out of isolated individualism into a more connected way of being. Why? Because belonging is a superpower: we’re more courageous & creative when we "find our people".

The second objective is to support people into meaningful work. This is very broadly defined: you decide what is meaningful to you. It could be about your job, your family, or community volunteering. Generally, life is more meaningful when we are being of benefit to others, when we know how to contribute, when we can match our talents to the needs in the world.

Comment by RomanHauksson (r) on RomanHauksson's Shortform · 2023-10-05T01:44:28.260Z · LW · GW

You don't even necessarily do it on purpose, sometimes entire groups simply drift into doing it as a result of trying to one-up each other in trying to sound legitimate and serious (hello, academic writing).

Yeah, I suspect some intellectual groups write like this for that reason: not actively trying to trick people into thinking it's more profound than it is, but a slow creep into too much jargon. Like a frog in boiling water.

Then, when I look at their writing, it seems needlessly unintelligible to me, even when it's writing designed for a newcomer. How do they not realize this? Maybe the water just feels warm to them.

Comment by RomanHauksson (r) on RomanHauksson's Shortform · 2023-10-05T01:39:39.652Z · LW · GW

When the human tendency to detect patterns goes too far

And, apophenia might make you more susceptible to what researchers call ‘pseudo-profound bullshit’: meaningless statements designed to appear profound. Timothy Bainbridge, a postdoc at the University of Melbourne, gives an example: ‘Wholeness quiets infinite phenomena.’ It’s a syntactically correct but vague and ultimately meaningless sentence. Bainbridge considers belief in pseudo-profound bullshit a particular instance of apophenia. To find it significant, one has to perceive a pattern in something that is actually made of fluff, and at the same time lack the ability to notice that it is actually not meaningful.

Comment by RomanHauksson (r) on Have Attention Spans Been Declining? · 2023-09-09T05:09:42.939Z · LW · GW

Np! I actually did read it and thought it was high-quality and useful. Thanks for investigating this question :)

Comment by RomanHauksson (r) on Have Attention Spans Been Declining? · 2023-09-08T15:31:45.717Z · LW · GW

Too long; didn’t read

Comment by RomanHauksson (r) on RomanHauksson's Shortform · 2023-09-07T00:43:06.183Z · LW · GW

From Pluriverse:

A viable future requires thinking-feeling beyond a neutral technocratic position, averting the catastrophic metacrisis, avoiding dystopian solutionism, and dreaming acutely into the techno-imaginative dependencies to come.

Comment by RomanHauksson (r) on The Parable of the Dagger - The Animation · 2023-07-30T04:35:08.407Z · LW · GW

How do you decide which writings to convert to animations?

Comment by RomanHauksson (r) on H5N1. Just how bad is the situation? · 2023-07-09T00:50:03.098Z · LW · GW

Metaculus puts 7% on the WHO declaring it a Public Health Emergency of International Concern, and 2.4% on it killing more than 10,000 people, before 2024.

Comment by RomanHauksson (r) on AI #19: Hofstadter, Sutskever, Leike · 2023-07-07T06:38:41.066Z · LW · GW

I was also disappointed to read Zvi's take on fruit fly simulations. "Figuring out how to produce a bunch of hedonium" is not an obviously stupid endeavor to me and seems completely neglected. Does anyone know if there are any organizations with this explicit goal? The closest ones I can think of are the Qualia Research Institute and the Sentience Institute, but I only know about them because they're connected to the EA space, so I'm probably missing some.

Comment by RomanHauksson (r) on Can LessWrong provide me with something I find obviously highly useful to my own practical life? · 2023-07-07T05:02:29.363Z · LW · GW

You can browse the "Practical" tag to find posts which are directly useful. Here are some of my favorites:

Comment by RomanHauksson (r) on BOUNTY AVAILABLE: AI ethicists, what are your object-level arguments against AI notkilleveryoneism? · 2023-07-07T00:46:58.559Z · LW · GW

I see. Maybe you could address it towards "DAIR, and related, researchers"? I know that's a clunkier name for the group you're trying to describe, but I don't think more succinct wording is worth progressing towards a tribal dynamic between researchers who care about X-risk and S-risk and those who care about less extreme risks.

Comment by RomanHauksson (r) on BOUNTY AVAILABLE: AI ethicists, what are your object-level arguments against AI notkilleveryoneism? · 2023-07-06T23:01:49.467Z · LW · GW

I don't think it's a good idea to frame this as "AI ethicists vs. AI notkilleveryoneists", as if anyone that cares about issues related to the development of powerful AI has to choose to only care about existential risk or only other issues. I think this framing unnecessarily excludes AI ethicists from the alignment field, which is unfortunate and counterproductive since they're otherwise aligned with the broader idea of "AI is going to be a massive force for societal change and we should make sure it goes well".

Suggestion: instead of addressing "AI ethicists" or "AI ethicists of the DAIR / Stochastic Parrots school of thought", why not address "AI X-risk skeptics"?

Comment by RomanHauksson (r) on RomanHauksson's Shortform · 2023-07-05T00:25:50.111Z · LW · GW

Does anyone know whether added sugar is bad for you if you ignore the following points?

  1. It spikes your blood sugar quickly (it has a high glycemic index)
  2. It doesn't have any nutrients, but it does have calories
  3. It does not make you feel full, so it makes it easier to eat more calories, and
  4. It increases tooth decay.

I'm asking because I'm trying to figure out what carbohydrate-dense foods to eat when I'm bulking. I find it difficult to cram in enough calories per day, so most of my calories come from fat and protein at the moment. I'm not getting enough carbs. But most "carby foods for bulking" (e.g. potatoes, rice) are very filling! E.g., a cup of rice has 200 kcal, but a cup of nuts has 800.

I did some stats to figure out what carby foods have a low glycemic index but also a low satiation index, i.e. how quickly they make you feel full. My analysis showed that sponge cake was a great choice, having a glycemic index of only 40 while being the least filling of all the foods I analyzed!
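For the curious, here's a minimal sketch of the kind of ranking I mean. The food list and numbers below are illustrative placeholders rather than real nutrition data, and the bulking_score formula is just one plausible way to combine the factors, not the exact stats I ran; you'd want to swap in values from a glycemic index table and the satiety index literature.

```python
# Rough sketch: rank carb sources for bulking by combining calorie density,
# glycemic index, and satiety index. All numbers are made-up placeholders.

foods = {
    # name: (glycemic_index, satiety_index, kcal_per_cup) -- placeholder values
    "sponge cake": (40, 65, 550),
    "white rice": (70, 130, 200),
    "boiled potato": (80, 320, 110),
    "oatmeal": (55, 200, 300),
}

def bulking_score(glycemic_index: float, satiety_index: float, kcal: float) -> float:
    """Higher is better for bulking: lots of calories, low GI, low satiety."""
    return kcal / (glycemic_index * satiety_index)

# Print foods from most to least bulking-friendly under this scoring.
for name, (gi, si, kcal) in sorted(
    foods.items(), key=lambda item: bulking_score(*item[1]), reverse=True
):
    print(f"{name}: GI={gi}, satiety index={si}, kcal/cup={kcal}")
```

With these placeholder numbers, sponge cake comes out on top, which matches what my analysis found.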

But common sense says that cake would be classified as a "dirty bulk" food, which I'm trying to avoid. If it's not dirty for its glycemic index, what makes it dirty? Is it because cake has a "dirty" kind of fat, or is there something bad about sugar besides its glycemic index?

Just going off of the points I listed, eating cake to bulk up isn't "dirty", except for tooth decay. That's because

  1. Cake has a low glycemic index, I think because it has a lot of fat?
  2. I would be getting enough nutrients from the rest of what I eat; cake would make up the surplus.
  3. The whole point of me eating cake is to get more calories, so this point is nil.

What am I missing?

Comment by RomanHauksson (r) on Micro Habits that Improve One’s Day · 2023-07-04T04:31:09.927Z · LW · GW

They meant a physical book (as opposed to an e-book) that is fiction.

Comment by RomanHauksson (r) on Micro Habits that Improve One’s Day · 2023-07-04T04:28:21.454Z · LW · GW

I've also reflected on "microhabits" – I agree that the epistemics of maintaining a habit are tricky when you can't observe causal evidence that it's beneficial. I'll implement a habit if I've read some of the evidence and think it's worth the cost, even if I don't observe any effect in myself. Unfortunately, that's the same mistake homeopaths make.

I'm motivated to follow microhabits mostly out of faith that they have some latent effects, but also out of a subconscious desire to uphold my identity, like what James Clear talks about in Atomic Habits.

Like when I take a vitamin D supplement in the morning, I'm not subconsciously thinking "oh man, the subtle effects this might have on my circadian rhythm and mood are totally worth the minimal cost!". Instead, it's more like "I'm taking this supplement because that's what a thoughtful person who cares about their cognitive health does. This isn't a chore; it's a part of what it means to live Roman's life".

Here's a list of some of my other microhabits (that weren't mentioned in your post) in case anyone's looking for inspiration. Or maybe I'm just trying to affirm my identity? ;P

  • Putting a grayscale filter on my phone
  • Paying attention to posture – e.g., not slouching as I walk
  • Many things to help me sleep better
    • Taking 0.3 mg of melatonin
    • Avoiding exercise, food, and caffeine too close to bedtime
    • Putting aggressive blue light filters on my laptop and phone in the evening and turning the lights down
    • Taking a warm shower before bed
    • Sleeping on my back
    • Turning the temperature down before bed
    • Wearing headphones to muffle noise and a blindfold
  • Backing up data and using some internet privacy and security tools
  • Anything related to being more attractive or likable
    • Whitening teeth
    • Following a skincare routine
    • Smiling more
    • Active listening
    • Avoiding giving criticism
  • Flossing, using toothpaste with Novamin, and tongue scraping
  • Shampooing twice a week instead of daily

I haven't noticed any significant difference from any of these habits individually. But, like you suggested, I've found success with throwing many things at the wall: it used to take me a long time to fall asleep, and now it doesn't. Unfortunately, I don't know what microhabits did the trick (stuck to the wall).

It seems like there are three types of habits that require some faith:

  1. Those that take a while to show effects, like weightlifting and eating a lot to gain muscle.
  2. Those that only pay off for rare events, like backing up your data or looking both ways before crossing the street.
  3. Those with subtle and/or uncertain effects, like supplementing vitamin D for your cognitive health or whitening your teeth to make a better first impression on people. This is what you're calling microhabits.

Comment by RomanHauksson (r) on Park Toys · 2023-06-24T02:22:31.389Z · LW · GW

I find it interesting that all but one toy is a transportation device or a model thereof.

Comment by RomanHauksson (r) on Why didn't virologists run the studies necessary to determine which viruses are airborne? · 2023-06-20T21:31:43.296Z · LW · GW

Regardless of whether the lack of these kinds of studies is justified, I think you shouldn't automatically assume that "virology is unreasonable" or "there's something wrong with virologists". The fact that you're asking why the lack exists means there's something you don't know about virology, so your prior should be that it's justified, similar to Chesterton's Fence.

Comment by RomanHauksson (r) on Updates and Reflections on Optimal Exercise after Nearly a Decade · 2023-06-09T16:37:05.976Z · LW · GW

I also don't particularly like the hedonic gradient of pushing yourself to run at the volume and frequency that seems necessary to really git gud

What do you mean by "hedonic gradient" in this context?

Comment by RomanHauksson (r) on Optimal Clothing · 2023-05-31T04:33:33.340Z · LW · GW

For those of us who don't know where to start (like me), I also recommend checking out the wiki from r/malefashionadvice or r/femalefashionadvice.

Comment by RomanHauksson (r) on Creating Flashcards with LLMs · 2023-05-30T03:03:30.704Z · LW · GW

Related: Wisdolia is a Chrome extension which automatically generates Anki flashcards based on the content of a webpage you're on.

Comment by RomanHauksson (r) on Technological unemployment as another test for rationalist winning · 2023-05-29T08:03:21.746Z · LW · GW

That's a good point. I conflated Moravec's Paradox with the observation that so far, it seems as though cognitive tasks will be automated more quickly than physical tasks.

Comment by RomanHauksson (r) on New User's Guide to LessWrong · 2023-05-18T15:04:58.689Z · LW · GW

We take tending the garden seriously

Ironic typo: the link includes the preceding space.

Comment by RomanHauksson (r) on How to have Polygenically Screened Children · 2023-05-08T02:08:59.176Z · LW · GW

Suppose a family values the positive effects that screening would have on their child at $30,000, but in their area, it would cost them $50,000. Them paying for it anyway would be like "donating" $20,000 towards the moral imperative that you propose. But would that really be the best counterfactual use of the money? E.g. donating it instead to the Against Malaria Foundation would save 4-5 lives in expectation.[1] Maybe it would be worth it at $10,000? $5,000?

Although, this doesn't take into account the idea that an additional person doing polygenic screening would increase its acceptance in the public, incentivizing companies to innovate and drive the price down. So maybe the knock-on effects would make it worth it.

  1. ^

    Okay, I've heard that this scale of donations to short-termist charities is actually a lot more complicated than that, but this is just an example.

Comment by RomanHauksson (r) on Properties of Good Textbooks · 2023-05-08T01:52:50.714Z · LW · GW

I agree. Maybe it's time to repost The Best Textbooks on Every Subject again? Many of the topics I want to self-study I haven't found recommendations for in that thread. Or maybe we should create a public database of textbook recommendations instead of maintaining an old forum post.

Comment by RomanHauksson (r) on Recent Database Migration - Report Bugs · 2023-04-27T06:20:47.579Z · LW · GW

Just curious: what motivated the transition?

Comment by RomanHauksson (r) on RomanHauksson's Shortform · 2023-04-26T23:57:40.030Z · LW · GW

Prioritizing subjects to self-study (advice wanted)

I plan to do some self-studying in my free time over the summer, on topics I would describe as "most useful to know in the pursuit of making the technological singularity go well". Obviously, this includes technical topics within AI alignment, but I've been itching to learn a broad range of subjects to make better decisions about, for example, what position I should work in to have the most counterfactual impact or what research agendas are most promising. I believe this is important because I aim to eventually attempt something really ambitious like founding an organization, which would require especially good judgement and generalist knowledge. What advice do you have on prioritizing topics to self-study and for how much depth? Any other thoughts or resources about my endeavor? I would be super grateful to have a call with you if this is something you've thought a lot about (Calendly link). More context: I'm an undergraduate sophomore studying Computer Science.

So far, my ordered list includes:

  1. Productivity
  2. Learning itself
  3. Rationality and decision making
  4. Epistemology
  5. Philosophy of science
  6. Political theory, game theory, mechanism design, artificial intelligence, philosophy of mind, analytic philosophy, forecasting, economics, neuroscience, history, psychology...
  7. ...and it's at this point that I realize I've set my sights too high and I need to reach out for advice on how to prioritize subjects to learn!

Comment by RomanHauksson (r) on Raj Thimmiah's Shortform · 2023-04-25T17:12:54.953Z · LW · GW

Thx!

Comment by RomanHauksson (r) on Power laws in Speedrunning and Machine Learning · 2023-04-24T21:00:41.727Z · LW · GW

This is really clever. Good work!

Comment by RomanHauksson (r) on Raj Thimmiah's Shortform · 2023-04-24T20:57:57.499Z · LW · GW

I don't have a ton of programming experience either (still a student, done an internship and some hackathons) but I'd be very interested in poking around at what you already have and potentially contributing. I've had this exact idea before.

Comment by RomanHauksson (r) on RomanHauksson's Shortform · 2023-04-24T09:27:17.524Z · LW · GW

socialhacks

A characteristic feature of the effective altruism and rationalism communities is what I call "socialhacks", or unusual tricks to optimize social or romantic activity, akin to lifehacks. Examples include

  • Dating documents
  • Monetary bounties for those who introduce someone to a potential romantic partner if they hit it off
  • A custom-printed T-shirt listing topics one enjoys discussing, their name, or a QR code to their website
  • Booking casual one-on-one calls using Calendly
  • Maintaining an anonymous feedback form
  • Reciprocity: a site where people can choose which others they would hang out with / date, and it only reveals the preference of the other party if they also want to do that activity

Lifehacks live in the fuzzy boundary between useful and non-useful: if an activity is not useful at all, it's not a good lifehack, but if it's too universally useful, it becomes common practice and no longer worthy of being called a "hack" (e.g. wearing a piece of cloth in between one's foot and their shoe to make it easier to put on the shoe and reduce odor, i.e. socks).

Similarly, socialhacks are useful but live on the boundary between socially acceptable and unacceptable. They're never unethical, but they are weird, which is why they're only popular within open-minded, highly coordinated, and optimizing-mindset groups like EAs and rats. Some things would totally be considered socialhacks if they weren't mainstream, like dating apps and alcohol.

I asked GPT-4 to generate ideas for new socialhacks. Here's a curated list. Do you have any other ideas?

  • Hosting regular "speed friend-dating" events where participants have a limited time to talk to each other before moving on to the next person, helping to expand social circles quickly.
  • Using personalized business cards that include not only one's contact information but also a brief description of their hobbies and interests to hand out to potential friends or romantic interests.
  • Developing a "personal brand" that highlights one's unique qualities, interests, and strengths, making it easier for others to remember and connect with them.
  • Establishing a regular "friend check-in" routine, where you reach out to friends you haven't spoken to in a while to catch up and maintain connections.
  • Using a digital portfolio, such as a personal website or blog, to showcase one's interests, hobbies, and achievements, making it easier for potential romantic partners or friends to learn more about them.
  • Utilizing a "get-to-know-me" quiz or survey app, where you can create a personalized questionnaire for friends or potential partners to fill out, discovering shared interests and compatibility.
  • Developing a personal "social calendar" app or tool that helps you track and manage social events, as well as set reminders to reach out to friends and potential romantic partners.

Unfortunately, "social hacking" is already a term in security. The only good suggestion I got out of GPT-4 was "socialvation". So, a second question: do you have any other suggestions?

Comment by RomanHauksson (r) on EniScien's Shortform · 2023-04-24T04:26:33.269Z · LW · GW

LibraryThing has a great book recommendation feature.