Posts

Tall tales and long odds 2024-08-10T15:22:16.958Z
You don't know how bad most things are nor precisely how they're bad. 2024-08-04T14:12:54.136Z
One way violinists fail 2024-05-29T04:08:17.675Z
Things Solenoid Narrates 2024-04-12T23:57:16.169Z
Announcement: AI Narrations Available for All New LessWrong Posts 2023-07-20T22:17:33.454Z
Solenoid_Entity's Shortform 2023-05-24T13:06:05.823Z
Is there a 'time series forecasting' equivalent of AIXI? 2023-05-17T04:35:18.523Z
Reflective journal entries using GPT-4 and Obsidian that demand less willpower. 2023-04-15T12:45:25.982Z
Repugnant levels of violins 2023-04-12T17:11:47.734Z
The UBI dystopia: a glimpse into the future via present-day abuses 2023-04-12T14:44:03.494Z
Test post for formatting 2023-03-07T17:48:56.790Z
Using Obsidian if you're used to using Roam 2022-12-11T08:59:57.848Z
Pronunciations 2022-08-28T11:43:54.042Z
Announcing the LessWrong Curated Podcast 2022-06-22T22:16:58.170Z
Too right to write 2022-01-20T13:21:48.499Z

Comments

Comment by Solenoid_Entity on Solenoid_Entity's Shortform · 2024-12-01T14:29:55.489Z · LW · GW

Under MWI of QM, anthropics gets weird.

In a single universe interpretation, we can posit biogenesis is rare, but we do know it happened at least once in ~two trillion galaxies worth of stars in ~13 billion years.

In MWI it could be even rarer - with unlimited branches for wild coincidences of chemistry to occur, we’re necessarily living in a branch where such did occur. Allow for argument’s sake that biogenesis is so rare that branches where life is found are tiny in measure. We find ourselves in such a branch, so anthropics and branching kind of gives us the first miracle for free. But given we’re here, the chance it happened here independently TWICE is vanishingly small again.

If biogenesis is so rare it occurs in a tiny minority of branches only, then in almost all branches where it does occur, it only occurs once.

If I haven’t badly misunderstood something, I think if we accept MWI then it seems much more plausible that we are the only life in the universe.
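
A toy calculation (all numbers purely illustrative, nothing astronomical about them) makes the "vanishingly small again" step concrete: if each of n independent chances has a tiny probability p of producing life, then given at least one success, the chance of a second independent one is only about n*p/2.

```python
from math import comb

# Toy model: n independent "chances" (planet-epochs, branches, etc.),
# each with tiny probability p of biogenesis. Numbers are illustrative only.
def p_at_least(k_min, n, p):
    """P(at least k_min successes) in Binomial(n, p)."""
    return 1.0 - sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(k_min))

n, p = 10**6, 10**-9            # expected number of events: n*p = 0.001
p_one = p_at_least(1, n, p)     # ~0.001
p_two = p_at_least(2, n, p)     # ~5e-7
print(p_two / p_one)            # ~0.0005, i.e. roughly n*p/2
```

So whenever the expected number of events n*p is small, a second independent biogenesis is suppressed by a further factor of roughly n*p relative to the first.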

Comment by Solenoid_Entity on One way violinists fail · 2024-08-04T12:02:22.890Z · LW · GW

There are definitely some opportunities like that, but being a classical violinist with an orchestra is the first preference by far because it's so much more enjoyable to play the orchestral repertoire, and because having a full-time seat in an orchestra also puts you at the top of every booking agent's list for casual gigs too. Aim high, fail high, seems to be a good approach.

Comment by Solenoid_Entity on One way violinists fail · 2024-08-04T11:58:40.299Z · LW · GW

(To be fair, I would make anything sound this extreme, if I was writing about it while in the mood I was in when I wrote this. I love a rant.)

I guess any classical instrument is a device for torturing perfectionists, but violin has a particularly brutal drop-off in sound quality as you reduce your daily focused practice time. Between 'lapsed professional piano' and 'lapsed professional violin' I know which one I'd pick to listen to. You just can't do a few hours of practice a week and play the violin very nicely in tune, or at least I've never met anyone who can.

There's also the fact we're pack animals. There are normally 8-12 violinists in each section of the orchestra, playing in unison, consciously blending our sound together. We're so attuned to watching the others' bows, listening to their vibrato and articulation, and keeping as much focus on the leader of the section as on the conductor, that I swear we develop a hive mind. I guess this exacerbates the social pressures we feel, since we're always so aware of each other's playing.

Comment by Solenoid_Entity on Announcing the LessWrong Curated Podcast · 2023-11-09T09:39:18.296Z · LW · GW

Ahh good point, sorry I didn't notice that. I'll update the post shortly.

Comment by Solenoid_Entity on Announcing the LessWrong Curated Podcast · 2023-10-30T01:59:52.418Z · LW · GW

Thanks for checking! The Libsyn feed has been redirected, it's now hosted on BuzzSprout. All new episodes should still be going to all the platforms. Are you having trouble with any of the platforms, or just the Libsyn site itself? That one won't work anymore, unfortunately.

Comment by Solenoid_Entity on Announcing the LessWrong Curated Podcast · 2023-08-15T05:21:37.089Z · LW · GW

See here: https://www.lesswrong.com/posts/b9oockXDs2xMdYp66/announcement-ai-narrations-available-for-all-new-lesswrong 

Please share your feedback here or in the comments on that post, it's helpful for our decision-making on this :)

Comment by Solenoid_Entity on Solenoid_Entity's Shortform · 2023-08-09T05:04:51.189Z · LW · GW

Big +1 to playing with others, especially others around the same level or slightly better or worse.
Motivation is one thing, but it's also just... healthier. One's musical 'practice' can't be totally inward-looking, that's when perfectionism starts to bite. Orchestra forces you to compromise and actually learn and perform music, gets you out of the practice room, and generally turbocharges your learning by exposing you to a more varied set of demands on your playing and musicality.
Super hard mode is forming a string quartet with others, since your playing is super exposed and it forces you to stay in time and balance your sound with others. 

Comment by Solenoid_Entity on Announcement: AI Narrations Available for All New LessWrong Posts · 2023-07-28T01:57:48.182Z · LW · GW

Thanks for the feedback!

The audio reflecting updates to the text is relatively easily fixed, and that feature is in the pipeline (though for now user reports are helpful for this.)

There's some fairly complex logic we use for lists — trying to prevent having too many repetitive audio notes, but also keeping those notes when they're helpful. We're still experimenting with it, so thanks for pointing out those formatting issues!

Comment by Solenoid_Entity on Announcement: AI Narrations Available for All New LessWrong Posts · 2023-07-25T05:35:04.385Z · LW · GW

You'd probably want to factor in some time for making basic corrections to pronunciation, too.
ElevenLabs is pretty awesome but in my experience can be a little unpredictable with specialist terminology, of which HPMOR has... a lot.
It wouldn't be crazy to do an ElevenLabs version of it with multiple voices etc., but you're looking at significant human time to get that all right.

Comment by Solenoid_Entity on Announcement: AI Narrations Available for All New LessWrong Posts · 2023-07-23T02:47:41.363Z · LW · GW

It's unlikely we'll ever actually GENERATE narrations for every post on LessWrong (distribution of listening time would be extremely long-tailed), but it's plausible if the service continues that we'll be able to enable the player on all LW posts above a certain Karma threshold, as well as certain important sequences.
If you have specific sequences or posts in mind, feel free to send them to us to be added to our list!

Comment by Solenoid_Entity on Announcement: AI Narrations Available for All New LessWrong Posts · 2023-07-23T02:42:56.616Z · LW · GW

This is great to hear, and please feel free to contact us with any other features or improvements you'd find helpful :)

Comment by Solenoid_Entity on Announcement: AI Narrations Available for All New LessWrong Posts · 2023-07-23T02:41:57.293Z · LW · GW

Ha, oops! Yeah, there's a lot of specialist terminology, we find feedback like this really helpful as often we're able to quickly fix this.

Comment by Solenoid_Entity on Announcement: AI Narrations Available for All New LessWrong Posts · 2023-07-23T02:40:43.924Z · LW · GW

Currently we can trigger this if someone requests it, and we have a feature in the pipeline to detect significant changes automatically and re-narrate. 

Comment by Solenoid_Entity on Solenoid_Entity's Shortform · 2023-05-24T13:06:06.030Z · LW · GW

Double-attrition perfectionism and the violin

An interesting thing about violin is that the learning process seems nearly designed to produce 'tortured perfectionists' as its output.

The first decade of learning operates as a two-pronged selection process that attrits students at different times in their learning journey, requiring perfectionism at some times and tolerance at others. 

You could be boring and argue that it always requires both attention to detail and tolerance of imperfection, simultaneously. You could also argue that there's a fractal, scale invariant pattern of striving for perfection and then tolerating failure. You're boring and probably right, but I think there's actually a common, macro structure to that decade, that goes 'tolerance-perfectionism-tolerance-perfectionism.'

Specifically:

  • When you first start, you need to tolerate being terrible, especially in the first months, but really for several years. (Grade 1- Grade ~3)
    • You suck, it's horribly offensive to your ears and everyone else's too. You must simply ignore how bad you sound and force your body to learn the required movements.
    • Mistakes on violin are brutal, they almost hurt to hear.
  • Then for several more years you must suddenly become intolerant of these same deficiencies. (Grade ~3 to Grade ~6)
    • You must obsessively eliminate scratches and squawks, develop clear and even tone. Polish your 'beginner' skills. 
    • You must learn to play in tune, which requires intensive practice and polishing.
  • Then for several more years you must again stop worrying about sounding bad and start 'pushing the envelope' and playing more expressively. (Grade ~6 - Grade ~8)
    • Developing exciting and varied sounds means a lot of nasty failures that sound awful and make people wince and/or bang walls.
  • Then for several more years, you have to again polish and refine this expressiveness. (Associate diploma, Bachelor of Music.)
    • You have to learn to play really in tune.
    • Like really really in tune.
    • Like unless you're lucky you probably lack the pitch resolution in your hearing to even notice the difference.
    • Like
      • More in tune than a well-tuned piano. Not strictly 'Just Intonation' but a compromise intonation system that allows the series of perfect 5ths G, D, A and E to remain fixed in all keys, but other notes to fall perfectly in tune with each other around these fixed points.
      • You're supposed to learn this system intuitively by just playing scales as in-tune as you can, often playing two notes at the same time (thirds, sixths, octaves, 11ths).
      • These changes correspond to fractions of a millimeter difference in position on the string.
      • Practice sessions now involve hours of obsessive, tiny intonation adjustments.
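
For concreteness, the arithmetic behind "more in tune than a well-tuned piano" can be sketched like this (A4 = 440 Hz assumed; these numbers are standard tuning theory, not anything specific to my training):

```python
from math import log2

A4 = 440.0                     # standard concert pitch assumed
PURE_FIFTH = 3 / 2             # frequency ratio of a perfect (just) fifth
ET_FIFTH = 2 ** (7 / 12)       # equal-tempered (piano) fifth, ~1.4983

# Open strings tuned in pure fifths around A4: G3, D4, A4, E5.
open_strings = {
    "G3": A4 / PURE_FIFTH**2,  # ~195.56 Hz
    "D4": A4 / PURE_FIFTH,     # ~293.33 Hz
    "A4": A4,
    "E5": A4 * PURE_FIFTH,     # 660 Hz exactly
}

# How far each pure fifth deviates from the piano's tempered fifth,
# in cents (100 cents = one equal-tempered semitone).
cents = 1200 * log2(PURE_FIFTH / ET_FIFTH)
print(f"{cents:.2f} cents per fifth")  # ~1.96
```

About two cents per fifth sounds negligible, but the deviations stack as you move through the keys, and on a violin fingerboard a few cents is exactly the fraction-of-a-millimetre adjustment described above.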

The result is that if someone plays violin at a professional level, they either have a very healthy relationship with their perfectionism and can adapt it to the needs of the moment (hahahahhhahahah), or they are a deeply disturbed individual who is somehow either able to pretend not to hate their playing for years, or able to force themselves to care about details that don't bother them in the slightest.

This is as far as I've gone (I'm on the final step, trying to reach professional level). If you go further and become a soloist, I don't know what that implies about your psychology. Soloists seem normal and occasionally seem well-adjusted, but perhaps we should learn to fear them.

Comment by Solenoid_Entity on Steering GPT-2-XL by adding an activation vector · 2023-05-18T01:00:31.691Z · LW · GW

I think there may be a typo in the table directly under the heading "Token probability shifts."
If it's not a typo, why are both coefficients positive? Aren't we meant to subtract the vector for ' '?

Comment by Solenoid_Entity on The UBI dystopia: a glimpse into the future via present-day abuses · 2023-04-21T07:41:48.303Z · LW · GW

Edit: removed a bad point. 

If you object strongly to the use of the term UBI in the post, you can replace it with something else. 
Then I make a number of substantive arguments.

Your response so far is 'if it's a UBI it won't suffer from these issues by its very definition.'

My response is 'yes it will, because I believe any UBI policy proposal will degrade into something less than the ideal definition almost immediately when implemented at scale, or just emerge from existing welfare systems piecemeal rather than all at once. Then all the current concerning 'bad things that happen to people who depend on government money' will be issues to consider.'

Comment by Solenoid_Entity on The UBI dystopia: a glimpse into the future via present-day abuses · 2023-04-19T04:37:45.074Z · LW · GW

I'm speaking about the policy that's going to be called UBI when it's implemented. You're allowed to discuss e.g. socialism without having to defer to a theoretical socialism that is by definition free of problems.

Anyway, it's a quibble, feel free to find and replace UBI with 'the policy we'll eventually call UBI', it doesn't change the argument I make.

Comment by Solenoid_Entity on The UBI dystopia: a glimpse into the future via present-day abuses · 2023-04-18T14:44:29.285Z · LW · GW

Where do I call existing welfare systems UBI? That's a misunderstanding of my argument.

My point is that I don't think it's likely that future real-world policies will BE universal. They'll be touted as such, they might even be called UBI, but they won't be universal. I argue they're likely to emerge from existing social welfare systems, or absorb their infrastructure and institutions, or at least their cultural baggage.

I can see the confusion, and maybe I should have put 'UBI' in quotes to indicate that I meant 'the policy I think we'll actually get that people will describe as UBI or something equivalent.'

Comment by Solenoid_Entity on The UBI dystopia: a glimpse into the future via present-day abuses · 2023-04-18T14:33:32.997Z · LW · GW

My point is not to argue that existing welfare systems are UBI. I don't use any non-standard definitions. I don't call existing welfare systems UBI. 

My point is that the real-world policy we're likely to eventually call UBI probably won't actually be universal, and if it emerges as a consequence of more and more people relying on social welfare, or else is associated with social welfare culturally, bad things will likely happen. Then I give some examples of the sort of bad things I mean.

I frequently hear people saying something like "and this is why we need a UBI"

This is a good point. I would like it very much if we could implement a UBI policy that did not come with the cultural baggage of existing social welfare systems. I would like it if existing social welfare systems would become more unconditional. I see why people think UBI would achieve this. I think they're more optimistic than I am about our ability to shed our social attitudes to work and welfare. Maybe it'll change with demographics, who knows...

Comment by Solenoid_Entity on Reflective journal entries using GPT-4 and Obsidian that demand less willpower. · 2023-04-16T15:58:37.604Z · LW · GW

I’m writing the original paragraph, and answering a bunch of questions designed to prompt me to reflect.

Comment by Solenoid_Entity on Reflective journal entries using GPT-4 and Obsidian that demand less willpower. · 2023-04-16T11:44:23.729Z · LW · GW

There are a few Obsidian plugins that do similar stuff using LLMs, (they purport to read your notes and help you something something).

I'm thinking of mocking something up over the next week or so that does this 'diary questions' thing in a more interactive way, via the API, from inside Obsidian. 

Comment by Solenoid_Entity on Reflective journal entries using GPT-4 and Obsidian that demand less willpower. · 2023-04-16T11:42:38.634Z · LW · GW

I also realise how much I sound like Chat-GPT in that comment... dammit

Comment by Solenoid_Entity on Reflective journal entries using GPT-4 and Obsidian that demand less willpower. · 2023-04-16T11:36:48.500Z · LW · GW

Yeah, I agree with a lot of this, and this privacy concern was actually my main reason to want to switch to Obsidian in the first place, ironically.

I remember in the book The Age of Surveillance Capitalism there's a framework for thinking about privacy where users knowingly trade away their privacy in exchange for a service which becomes more useful for them as a direct consequence of the privacy tradeoff. So for example, a maps app that remembers where you parked your car. This is contrasted with platforms where the privacy violations aren't 'paid back' to the users in terms of useful features that benefit them; the platforms just extract value from users in exchange for providing a service at all.

So in this case, I guess the more private information I submit to Chat-GPT, the more directly useful and relevant and insightful its responses to me get. Considering how much a life coach or career coach or therapist can cost, this is a lot of value I'm getting for it.

I understand the theoretical concern about our righteous future overlords whom I fully support and embrace, but while I think you could learn a lot about me from reading my diary, including convincingly simulating my personality, I would feel surprised if reading my diary was enough to model my brain in sufficient fidelity that it's an s-risk concern...

Comment by Solenoid_Entity on Reflective journal entries using GPT-4 and Obsidian that demand less willpower. · 2023-04-16T11:28:59.211Z · LW · GW

Currently just copy-pasting into GPT-4 via the web interface. I've got it working via the GPT-3 API as well today, but for now I prefer to suffer the inconvenience and get the better model. The questions it asks are MUCH more insightful.
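
For anyone curious, here's a minimal sketch of what the API version might look like (the prompt wording and helper names are my own invention, not my actual setup; only the prompt assembly runs below, and the API call is shown but not made):

```python
# Hypothetical sketch of the 'diary questions' flow via the OpenAI API.

REFLECTION_INSTRUCTIONS = (
    "Read the journal entry below and ask three short, specific questions "
    "that prompt the writer to reflect more deeply on what they wrote."
)

def build_prompt(journal_entry: str) -> str:
    """Assemble the full prompt sent to the model."""
    return f"{REFLECTION_INSTRUCTIONS}\n\n---\n{journal_entry.strip()}\n---"

def ask_reflective_questions(journal_entry: str, client, model="gpt-4"):
    # `client` would be an openai.OpenAI() instance; not invoked here.
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": build_prompt(journal_entry)}],
    )
    return response.choices[0].message.content

print(build_prompt("Felt distracted today; practiced less than planned."))
```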

Comment by Solenoid_Entity on The UBI dystopia: a glimpse into the future via present-day abuses · 2023-04-14T11:20:53.349Z · LW · GW

The argument is:

1. You probably can't make it universal.

2. If people can be excluded from the program and depend on it, it creates a power differential that can be abused.

3. There are lots of present-day examples of such abuse, so absent a change, that abuse or similar will continue to exist even if we have a UBI.

Comment by Solenoid_Entity on Repugnant levels of violins · 2023-04-14T11:16:12.346Z · LW · GW

I'd just explicitly ask the teacher if they're happy with the instrument's setup. It's probably fine, but maybe they'll tell you it needs work. Generally 1/4 instruments aren't going to sound great anyway, but the setup is still very important.

Comment by Solenoid_Entity on Repugnant levels of violins · 2023-04-12T22:59:40.076Z · LW · GW

Thanks, great recommendation! I'll check it out for sure.

Comment by Solenoid_Entity on Sapir-Whorf for Rationalists · 2023-02-07T01:14:25.033Z · LW · GW

On the subject of jargon, there's one piece of jargon that I've long found troubling on LW, and that's the reference to 'tech' (for mental techniques/tools/psycho-technologies), which I've seen Duncan use a few times IIRC.

A few issues:
1. It's exactly the same usage as the word 'tech' in the fake scifi 'religion' that must not be named (lest you summon its demons to the forum through the Google portal). They do exercises to give them new mental tools, based on reading the lengthy writings of their founder on how to think, and those lessons/materials/techniques are always referred to as 'tech.' This doesn't automatically make our usage of it bad, but it's probably smart to avoid so closely mirroring their usage imo. 

2. Using the word 'tech' doesn't shine much light. I'm aware of the concept of 'exaptation' and that things external to the mind can be integrated into the mind much the way that a craftsman stops seeing the hammer as separate from his hand. Still, it doesn't seem very useful to blur the distinction between mental techniques and reasoning strategies we can learn and internalise by reading blog posts, and literal technology we might use to augment or supplement our thinking abilities.

Comment by Solenoid_Entity on Using Obsidian if you're used to using Roam · 2022-12-13T03:19:17.154Z · LW · GW

Amazing, thanks!

Comment by Solenoid_Entity on Simulators · 2022-09-05T03:24:29.506Z · LW · GW

One question that occurred to me, reading the extended GPT-generated text. (Probably more a curiosity question than a contribution as such...)

To what extent does text generated by GPT-simulated 'agents', then published on the internet (where it may be used in a future dataset to train language models), create a feedback loop? 

Two questions that I see as intuition pumps on this point:

  1. Would it be a bad idea to recursively ask GPT-n "You're a misaligned agent simulated by a language model and your name is [unique identifier]. What would you like to say, knowing that the text you generate will be used in training future GPT-n models, to try to influence that process?" then use a dataset including that output in the next training process? What if training got really cheap and this process occurred billions of times?
  2. My understanding is that language models are drawing on the fact that the existing language corpus is shaped by the underlying reality - and this is why they seem to describe reality well, capture laws and logic, agentic behaviour etc. This works up until ~2015, when the corpus of internet text begins to include more text generated only by simulated writers. Does this potentially degrade the ability of future language models to model agents, perform logic etc? Since their reference pool of content is increasingly (and often unknowably) filled with text generated without (or with proportionally much less) reference to underlying reality? (Wow, who knew Baudrillard would come in handy one day?)

Comment by Solenoid_Entity on Pronunciations · 2022-09-04T23:04:12.024Z · LW · GW

Yeah, this is a super useful method and increasingly my go-to, especially for Chinese.

Comment by Solenoid_Entity on Pronunciations · 2022-08-29T03:16:31.632Z · LW · GW

Some websites are great, but I've found they're wrong often enough that I usually want to corroborate them with something else.

Comment by Solenoid_Entity on Announcing the LessWrong Curated Podcast · 2022-06-29T14:54:30.222Z · LW · GW

Thanks for the endorsement!

Comment by Solenoid_Entity on Announcing the LessWrong Curated Podcast · 2022-06-29T14:54:10.560Z · LW · GW

Glad to hear it's useful :)

Comment by Solenoid_Entity on Announcing the LessWrong Curated Podcast · 2022-06-29T14:53:52.148Z · LW · GW

Thanks :)

Comment by Solenoid_Entity on Announcing the LessWrong Curated Podcast · 2022-06-29T14:53:20.506Z · LW · GW

Thanks, glad you're enjoying :)

Comment by Solenoid_Entity on Too right to write · 2022-01-26T11:37:32.612Z · LW · GW

The correlation between "bothers to have an opinion on correctness of others' writing" and "knows what the correct answer actually is" seems too high.

(Edit: I'm reading between the lines and assuming you're saying you think the cohort of people who actually care enough about faze/phase to be judgemental about it, but don't themselves know the correct spelling is 'faze', is small.)

This is very interesting. I certainly agree this is our point of difference – I think there's a big cohort out there with strong, judgey opinions about 'correctness' and an active interest in spelling and grammar as a way of displaying status. These people also happen to very frequently be wrong. 

But this might well be me typical-minding, and I'll try to think about this group more rigorously in my next post about this.

Comment by Solenoid_Entity on Too right to write · 2022-01-26T09:33:38.486Z · LW · GW

Just pitching in on the last two: there's an abbreviated register of speech in English called 'note-taking register' that has crept its way into a lot of parts of speech and writing, including website navigation. Dropping the definite article (or most articles in general) is a core part of that register.

Note taking = abbreviated English register. Has crept into parts of speech, writing inc. website nav. Dropping definite article core part of register.

I suspect dropping the definite article in 'refresh page' is not related to definiteness; it's a linguistic tendency towards abbreviation. Funnily enough, it's a trait shared by the stereotypical 'robot voice', as well as 'baby voice' and some others.

Comment by Solenoid_Entity on The Best Software For Every Need · 2021-09-19T22:47:35.784Z · LW · GW

Software: Newsfeed Eradicator + Leechblock NG

Need: Resilient self-control/anti-akrasia for web browsing.

Other programs I've tried: Stayfocusd, Forest

The problem with Stayfocusd and any website blocker is that, invariably, you have to navigate to a given tweet or youtube video or facebook profile for legitimate reasons, which means you have to go and deactivate the plugin. This is bad because (1) it trains you to do this action, and (2) it incentivises you to avoid making deactivating the plugin too tricky.

Newsfeed Eradicator kills only the problem parts of social media apps. I can still check individual subreddits. I can still visit a Facebook page. I can still watch a YouTube video. I just don't get served recommendations, I don't get exposed to a scrolling feed. This has transformed how I use e.g. YouTube, where I now cultivate my library of channels I subscribe to and get new content via the subscriptions page, rather than passively browsing algorithmic recommendations.

I pair this with Leechblock NG to block websites I don't want to visit at all. The number of options is STAGGERING compared with Stayfocusd. I can create different categories of blocker and program different behaviours: time-delays instead of blocking, for example. You're not just stuck with 'x minutes per day'; you can have 'x minutes per hour, replenishing every hour' if you want. The list goes on.

Comment by Solenoid_Entity on The LessWrong 2018 Book is Available for Pre-order · 2021-01-21T06:17:42.780Z · LW · GW

I do the SSC Podcast; one of my patreon supporters said he'd be really keen to have this as an audiobook. I'd certainly be keen to get an idea of the demand for that and could potentially make it happen if it seemed like it would be useful. If you wanted to chat about it you can get me on slatestarpodcast@gmail.com. Thanks!