LessWrong 2.0 Reader
These days, I've left both traditional and Roam-like note-taking apps behind because they all left me with collections of half-finished notes on ideas and writings that I would seldom revisit. Instead, I've started to use Anki for all my note-taking. It's not made for this use case, but it is part of my daily workflow anyway, and it solves my biggest problem of stale notes by making me revisit them regularly.
With Anki, I record any fleeting ideas as standalone notes, without an "answer" component. Later, when these notes come up for review, I spend a few moments refining each idea. This keeps the notes dynamic and evolving. If an idea turns into something promising, I'll add an item to my normal to-do list for dedicated in-depth exploration. Conversely, if the idea seems like a dead end, I'll suspend the note so it is no longer shown during review.
The feature I miss most is being able to easily link to related notes. Anki's notes can be grouped into decks and tagged, but I find jumping to the note browser and entering search terms cumbersome.
Example of the note-taking workflow: I have an idea for an Anki feature that would automatically link related notes to each other and, e.g., show the links at the bottom of each answer card. I suspect that text embeddings could help there (see the sketch below). So I add a note "Using text embeddings for automatic Anki note linking" to my Ideas deck. The next time this card comes up during review, I might edit it to add "Implement as an Anki plugin that regularly runs on and updates all notes in the collection", and the time after that, maybe a thought about an implementation detail like "? how to make sure the links placed in the note by the plugin are ignored for embedding purposes".
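A minimal sketch of the embedding idea, outside of Anki itself: this assumes the sentence-transformers package, and the model name, sample notes, and similarity threshold are illustrative choices rather than anything settled. A real add-on would read and write notes through Anki's collection API instead of a hard-coded list.

```python
# Minimal sketch: link Anki-style notes by embedding similarity.
# Assumes `pip install sentence-transformers`; the model, notes, and
# threshold below are illustrative, not part of the original idea.
import numpy as np
from sentence_transformers import SentenceTransformer

notes = [
    "Using text embeddings for automatic Anki note linking",
    "Implement as an Anki plugin that regularly runs on the collection",
    "Right Speech as a practice for trustworthy communication",
]

model = SentenceTransformer("all-MiniLM-L6-v2")
# Normalized embeddings make the dot product equal cosine similarity.
emb = model.encode(notes, normalize_embeddings=True)
sim = emb @ emb.T
np.fill_diagonal(sim, 0.0)  # a note should not link to itself

THRESHOLD = 0.4  # illustrative; tune against your own collection
for i, note in enumerate(notes):
    related = [notes[j] for j in np.argsort(-sim[i]) if sim[i, j] >= THRESHOLD]
    print(f"{note!r} -> {related}")
```

The implementation detail flagged above ("links placed by the plugin ignored for embedding purposes") could be handled by stripping the auto-generated links section from each note's text before computing its embedding, so the plugin's output never feeds back into the similarity scores.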
ryankidd44 on MATS Winter 2023-24 Retrospective
Yeah, that amount seems reasonable, if on the low side, for founding a small org. What makes you think $300k is reasonably easy to raise in this current ecosystem? Also, I'll note that larger orgs need significantly more.
smaug123 on Dyslucksia
By the way, as an extremely verbally-fluent nondyslexic person who was also an excellent choral singer, I can confirm the superpowers of singing versus talking. For example:
Thoroughly underused technique for minimal effort parroting.
tag on Freedom under Naturalistic Dualism
But you are not legitimising it as a subjective impression that correctly represents reality... only as an illusion: you can feel free in a deterministic world, but you can't be free in one.
review-bot on LLMs Sometimes Generate Purely Negatively-Reinforced Text
The LessWrong Review runs every year to select the posts that have most stood the test of time. This post is not yet eligible for review, but will be at the end of 2024. The top fifty or so posts are featured prominently on the site throughout the year. Will this post make the top fifty?
three-monkey-mind on What comes after Roam's renaissance?
"Is there an alternative to constantly adding endless features? Can software be designed to operate without daily updates, similar to programming languages?"
"daily" in "daily updates" is hyperbole, but you can probably get most of the way there with
prefers-color-scheme
).The second bullet point is important, at least occasionally. I dropped my beloved VoodooPad because it never got a publicly-released version that supports dark mode that works on macOS, iOS, and iPadOS. I figure VoodooPad is nearly dead because its current owners can't figure out how to turn it into something that gets enough revenue to justify the time that it would take to make it a modern app.
At any rate, the notes I had in VoodooPad got moved into Ulysses some time after the Ulysses team added projects back in 2022. Ulysses is not a good personal wiki (internal linking isn't nearly as low-friction as in Obsidian), but it's adequate for my purposes and I dislike having a gazillion different personal-wiki software packages that I need to divvy my attention between.
As far as update cadence goes…
If you look at Ulysses' Releases page and note the dates in the headings, you can see that they've been steadily, but not all that quickly, releasing features. There's probably at least one programming language out there with this release cadence, but I wouldn't know which one it is.
austin-chen on MATS Winter 2023-24 Retrospective
"Starting new technical AI safety orgs/projects seems quite difficult in the current funding ecosystem. I know of many alumni who have founded or are trying to found projects who express substantial difficulties with securing sufficient funding."
Interesting - what's the minimum funding ask to get a new org off the ground? I think something like $300k would be enough to cover ~9 months of salary and compute for a team of ~3, and that seems quite reasonable to raise in this current ecosystem for pre-seeding an org.
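As a rough sanity check on that figure (the inputs below come straight from the comment; the even split across people and months is an assumption):

```python
# Back-of-envelope check of the $300k pre-seed figure.
# All inputs are taken from the comment above; the flat per-person
# allocation is an assumption for illustration.
budget_usd = 300_000
team_size = 3
runway_months = 9

per_person_month = budget_usd / (team_size * runway_months)
print(f"${per_person_month:,.0f} per person-month")  # -> $11,111
```

Roughly $11k per person-month has to cover salary, compute, and overhead, which reads as tight but workable - consistent with ryankidd44's "reasonable, if on the low side" above.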
neel-nanda-1 on MATS Winter 2023-24 Retrospective
(EDIT: I just saw Ryan posted a comment a few minutes before mine; I agree substantially with it.)
As a Google DeepMind employee I'm obviously pretty biased, but this seems pretty reasonable to me, assuming it's about alignment/similar teams at those labs? (If it's about capabilities teams, I agree that's bad!)
I think the alignment teams generally do good and useful work, especially those in a position to publish on it. And it seems extremely important that whoever makes AGI has a world-class alignment team! And some kinds of alignment research can only really be done with direct access to frontier models. MATS scholars tend to be pretty early in their alignment research careers, and I also expect frontier lab alignment teams are a better place to learn technical skills, especially engineering, and generally have higher talent density.
UK AISI/US AISI/METR seem like solid options for evals, but they basically just work on evals, and Ryan says downthread that only 18% of scholars work on evals/demos. And I think it's valuable both for frontier labs to have good evals teams and for there to be good external evaluators (especially in government); I can see good arguments favouring either option.
44% of scholars did interpretability, where in my opinion the Anthropic team is clearly a fantastic option, and I like to think DeepMind is also a decent option, as is OpenAI. Apollo and various academic labs are the main other places where you can do mech interp. So those career preferences seem pretty reasonable to me for interp scholars.
17% are on oversight/control, and for oversight I think you generally want a lot of compute and access to frontier models? I am less sure for control, and think Redwood is doing good work there, but as far as I'm aware they're not hiring.
This is all assuming that scholars want to keep working in the same field they did MATS for, which in my experience is often but not always true.
I'm personally quite skeptical of inexperienced researchers trying to start new orgs - starting a new org and having it succeed is really, really hard, and much easier with more experience! So people preferring to get jobs seems great by my lights.
ryankidd44 on MATS Winter 2023-24 Retrospective
I think the high interest in working at scaling labs relative to governance or nonprofit organizations can be explained by several factors.
Note that the career fair survey might tell us little about how likely scholars are to start new projects, as it was primarily gauging interest in which organizations should attend, not whether scholars should join orgs vs. found their own.
azergante on Deep Honesty
I think being as honest as is reasonably sensible is good for oneself. Being honest applies pressure on oneself and one's environment until the two closely match. I expect the process to have its ups and downs, but to lead to a smoother life in the long run.
An example that comes to mind is the need to open up in order to have meaningful relationships (versus the alternative of concealing one's interests, which tends to make conversations boring).
Also, honesty seems like a requirement for an accurate map of reality: snappy and accurate feedback is essential to good learning, but if one lies and distorts reality to accomplish one's goals, reality will send back distorted feedback, causing incorrect updates to one's beliefs.
On another note: this post immediately reminded me of the Buddhist concept of Right Speech, which might be worth investigating for further advice on how to practice this. A few quotes:
"Right speech, explained in negative terms, means avoiding four types of harmful speech: lies (words spoken with the intent of misrepresenting the truth); divisive speech (spoken with the intent of creating rifts between people); harsh speech (spoken with the intent of hurting another person's feelings); and idle chatter (spoken with no purposeful intent at all)."
"In positive terms, right speech means speaking in ways that are trustworthy, harmonious, comforting, and worth taking to heart. When you make a practice of these positive forms of right speech, your words become a gift to others. In response, other people will start listening more to what you say, and will be more likely to respond in kind. This gives you a sense of the power of your actions: the way you act in the present moment does shape the world of your experience."
Thanissaro Bhikkhu (source: https://www.accesstoinsight.org/lib/authors/thanissaro/speech.html)