Posts

Gaming Incentives 2021-07-29T13:51:05.459Z
MIRIx Trivandrum Index 2021-07-04T09:32:07.829Z
MIRIx Part I: Insufficient Values 2021-06-16T14:33:28.313Z
Utopic Nightmares 2021-05-14T21:24:09.993Z

Comments

Comment by Jozdien on Online LessWrong Community Weekend · 2021-09-05T16:19:50.059Z · LW · GW

I'm done with the frontend, but the guy working on it with me got caught up with some stuff, so the backend still needs some tweaks.  He's free this week, so it's possible it'll be done in time.  I'll post about it if it is.

Comment by Jozdien on Open & Welcome Thread September 2021 · 2021-09-05T14:03:31.491Z · LW · GW

Not an article, but I have a link to an interview where Ian tells that story (timestamp around 3:40 if you only want that part, 2:44 if you want it as part of the complete story).

Comment by Jozdien on Rage Against The MOOChine · 2021-08-07T19:41:20.752Z · LW · GW

I agree with your points on practical programming in the course, but I also think that's not Andrew Ng's core intent with his courses.  As Teja Prabhu mentioned in his comment, learning by taking on projects of your own is a method I can't think of many good alternatives to, as far as practical usage goes.  But getting there requires casting a wide net breadth-wise, so that you at least know what's possible and what you can use in machine learning.  You can, and probably will, learn the math depth-wise as you work on your own projects - but to get to that point, I think he throws just the right amount of technical math at you.  Trying to cover all the math behind all the different ML methods he teaches, from the ground up, is probably infeasible as anything but a year-long degree, and you don't need that to start learning it yourself depth-wise.

That, plus a working understanding of ML theory, is what I think the primary intent of his courses is.  I did his Deep Learning specialization a couple months ago, and while the programming is slightly more hands-on there, it's still massively aided by hints and the like.  But he even says in one of those videos that the point of doing the programming exercises is only to further your understanding of the theory, not to practise building your own projects - writing code from scratch for a predefined goal in a course wouldn't be a great way of motivating people to learn that stuff.  Incidentally, this is why I think MOOCs for learning programming actually are pointless.

Comment by Jozdien on What 2026 looks like (Daniel's Median Future) · 2021-08-07T02:02:04.230Z · LW · GW

Are Google, Facebook, and DeepMind currently working on GPT-like transformers?  I would've thought GPT-2 showed enough potential that they'd be working on better models of that class, but it's been two and a half years, and isn't GPT-3 the only improvement there? (Not a rhetorical question - I wasn't reading about new advances back then.)  If yes, that makes me think several other multimodal transformers similar in size to GPT-3 are probably further away than 2022.

Comment by Jozdien on Gaming Incentives · 2021-07-30T11:17:21.561Z · LW · GW

I also think it's more about audience identification.  Actually optimizing for that would give clearer game systems, though they'd probably be much more controversial - like having sport categories consisting of "people whom marginalized classes would be inspired to see performing at the highest level".  But, you know, phrased in a way that probably won't ruin the whole thing.

I think it could also be more about dominance specifically.  I can't tell if this is actually true, but it feels like chess would be less popular/exciting if there weren't one player clearly dominant over the others.  There I don't think it comes down to identification, because having more equal players at the top would mean broader classes of people identifying with them.

Comment by Jozdien on Gaming Incentives · 2021-07-29T21:00:57.009Z · LW · GW

You're right.  I remember being surprised by what I saw on the wiki back when I wrote this, but looking at the edit history of that page, I can't find anything that would have made me write "conflicting".  Thank you for bringing that up; I've edited the post.  I apologize for not noticing that earlier.

Comment by Jozdien on Did they or didn't they learn tool use? · 2021-07-29T17:24:24.373Z · LW · GW

It could be that the Tool Use in the graph is the "Tool Use Gap" task instead of the "Tool Use Climb" task.  But they don't specify anywhere I could find easily.

Comment by Jozdien on Did they or didn't they learn tool use? · 2021-07-29T14:16:23.929Z · LW · GW

The description of that figure in the paper says "three points in training" of a generation 5 agent, so probably the performance of that agent on the task at different learning steps?

Edit: To clarify, I think it's 0-shot learning on the six hand-authored tasks in the figure, but is being trained on other tasks to improve on normalized score percentiles.  That figure is meant to show the correlation of this metric with improvement on the hand-authored tasks.

Comment by Jozdien on DeepMind: Generally capable agents emerge from open-ended play · 2021-07-29T09:27:24.930Z · LW · GW

Judging from what I’ve read here on LW, it’s maybe around 3/4ths as significant as GPT-3? I might be wrong here, though.

Disclaimer to the effect that I'm not very experienced here either and might be wrong too, but I'm not sure that's the right comparison.  It seems to me like GPT-2 (or GPT, but I don't know anything about it) was a breakthrough in having one model that's good at learning on new tasks with little data, and GPT-3 was a breakthrough in showing how far capabilities like that can extend with greater compute.  This feels more like the former than the latter, but also sounds more significant than GPT-2 from a pure generalizing capability standpoint, so maybe slightly more significant than GPT-2?

Comment by Jozdien on Open and Welcome Thread – July 2021 · 2021-07-24T15:40:13.660Z · LW · GW

Thank you.

I saw that guide a while back and it was helpful, but it helped more with the "what" than the "how" - although it still does "how" better than most guides.  For the most part, I'm concerned about things I'm missing that would be obvious to someone with the right context.  Like that, given my goals, there are better things to be prioritizing, or that I should be applying to X to achieve Y.

I've been thinking about it for a while since posting, and I think I agree with you that applying for a Master's is the best route for me.  (By the way, did you mean the universities the article mentions under the "Short-term Policy Research Options" subheading?  I didn't find any others.)

Comment by Jozdien on Open and Welcome Thread – July 2021 · 2021-07-23T08:18:24.416Z · LW · GW

Thanks.

I'm not sure if you thought of it while reading my comment or if it's generally your go-to advice, but I may have accidentally given the wrong impression about how much I prioritize work over being around other people.  It's good to be actively reminded about it though for entropy reasons, so I appreciate it.

I admit that what I know about AI Safety comes from reading posts about it rather than talking with the experts about their meta-level ideas, but that isn't the impression I got.  CEV, for example, deals with the ethical mess of which people's values are worth including, and the discussion around it generally had a very negative prior toward anyone having the power to decide whose values are good enough - or so it appeared to me.  Elon's proposal comes with its own set of problems, a couple that stick out to me being co-ordination problems between multiple AGIs, and grid-linking not completely solving the alignment problem because we'll still be far inferior to good AGI.

Comment by Jozdien on Open and Welcome Thread – July 2021 · 2021-07-17T15:25:23.153Z · LW · GW

I need some advice.

A little context: I'm a CSE undergraduate who'll graduate next July.  I think AI Safety is what I should be working on.  There are, as far as I've seen, no opportunities for that in my country.  I don't know what path to go down in the immediate future.

Ideally, I'd begin working in Safety directly next year.  But I don't know how likely that is, given I don't have a Master's degree or a PhD; and MIRI's scaling back on new hires, as I understand it (I thought about interning after I graduate, but I’m not sure if they’ll take interns next year).

I plan to apply to Master's programs anyway, but those are also a long shot - the tuition fees are even steeper when converted to other currencies, so I don't want to apply to programs that aren't worth it (it's possible my qualifications are sufficient for some of the ones I'll apply to, but I have little context to tell).

I could work in software for a couple years, trying to do independent research in that time, and switch over after.  This is complicated both by the fact that independent research is shockingly difficult when you can't bounce ideas off of someone who gets it (I don't really know anyone who does at the level where this is viable), and by the fact that I'll need to spend a non-insignificant amount of time and effort now to get a decent chance of landing a great job (I really don't want to work at a job I both don't believe in and that doesn't build career capital) - time that could be better spent, especially if my chances aren't good even in the end (again, I speak from my expectations given limited context).

I've been thinking seriously about this for a couple weeks from several angles (I tried to hold off on this until I had enough qualifications to make credible predictions about my chances, but now I think the bar for credible is much higher than I'd expected), and came to some answers, but also decided I needed to ask the opinion of someone who gets my motivations, and hopefully has better context for any of this than I do.  Both about the future, and the present.

Some additional info about what I think I'm good at, relative to an undergrad level, if that helps: I have a couple years of experience with frontend systems for web and mobile (although I've recently been told I should work on improving my code structure, since I learned it all on my own and have worked primarily on my own projects).  I understand ML theory (DL slightly more) to an extent (I have a preprint on cGAN image processing that I'm trying to figure out how to publish, since my university really doesn't help with this stuff; I welcome any advice on that too).  I also have some experience tinkering with ML code; while I doubt it reaches the level of familiarity even a new industry ML developer would have, I'm fairly confident I could get there without much trouble (could be wrong, correct me if this isn't your experience).

I typically try to avoid making posts of this sort, but this is kinda sorta important to me, and I feel comfortable trusting the people here to help me a little in making the right call.  So thanks for that.  And thanks in advance for any suggestions.

Comment by Jozdien on Decision Theory · 2021-07-16T18:18:44.992Z · LW · GW

I think I'm missing something with the Löb's theorem example.

If  can be proved under the theorem, then can't  also be proved?  What's the cause of the asymmetry that privileges taking $5 in all scenarios where you're allowed to search for proofs for a long time?
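(For reference, here's my rough reconstruction of the Löbian argument from the post, written out in my own notation - I may well be misstating a step, which is partly why I'm asking.)

```latex
% My reconstruction (notation mine): the agent takes whichever action has the
% higher provable payoff, and phi is the "spurious" statement it ends up proving.
\begin{align*}
\varphi \;&\equiv\; (A = 5 \rightarrow U = 5) \,\wedge\, (A = 10 \rightarrow U = 0) \\
\Box\varphi &\rightarrow A = 5
  && \text{(if the agent proves } \varphi\text{, its code takes the \$5)} \\
A = 5 &\rightarrow \varphi
  && \text{(first conjunct holds; second is vacuous since } A \neq 10\text{)} \\
\Box\varphi &\rightarrow \varphi
  && \text{(chaining the two lines above)} \\
&\vdash \varphi
  && \text{(Löb's theorem, assuming the step above is itself provable)}
\end{align*}
```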

Comment by Jozdien on Online LessWrong Community Weekend · 2021-07-01T20:29:57.291Z · LW · GW

I likely won't be free to organize activities, but I would like to put it out there that if anyone's interested in organizing the Aumann Agreement Game like last year, I'm currently working on a website that allows (for now, up to four) people to play.  So the logistics of handling that should be easier.  (Probably.)  It'll hopefully be done within the month, but almost definitely (like 95% confidence) before September.

Comment by Jozdien on MIRIx Part I: Insufficient Values · 2021-07-01T14:44:42.391Z · LW · GW

Yeah, CEV itself seemed like a long shot - but my thought process was that maintaining human control wouldn't be enough for step one, both because I think it's not enough at the limit, and because the human component might inherently be a limiting factor that makes it not very competitive.  But the more I thought about it, the weaker that assumption of inherent-ness seemed, so I agree that the most this post could be saying is that the timeline gap between something like Task AGI and figuring out step two is short - which I expect isn't a very groundbreaking claim.

Comment by Jozdien on MIRIx Part I: Insufficient Values · 2021-06-17T12:55:58.747Z · LW · GW

I agree that this is probably true, but I wouldn't put it at > 90% probability for far-future AI.  With computing power greater than Jupiter brains, it probably still wouldn't be practical, but my point in thinking about it was that if it were possible to brute-force for first-generation AGI, then there's a chance that more efficient ways exist.

Comment by Jozdien on MIRIx Part I: Insufficient Values · 2021-06-17T12:47:17.707Z · LW · GW

It's only once we pick a specific method of implementation that we have to confront in mechanistic detail what we could previously hide under the abstraction of anthropomorphic agency.

I agree.  I was trying to think of possible implementation methods, dropping constraints like computing power or competitiveness as it became harder to find any, and the final sticking point was still Goodhart's Law.  For the most part, I kept it in to give an example of the difficulty of meta-alignment (corrigibility in favourable directions).

Comment by Jozdien on MIRIx Part I: Insufficient Values · 2021-06-17T12:41:09.523Z · LW · GW

So if I'm understanding it correctly, the idea is that maintaining human control is the best option we can formally work toward?  The One True Encoding of Human Values would most likely be a more reliable system if we could create it, but it's a much harder problem, and not strictly necessary for a good end outcome?

Comment by Jozdien on Against Against Boredom · 2021-05-20T20:26:47.584Z · LW · GW

For me, in your fictional world, humans are to AI what in our world pets are to humans.

If I understand your meaning of this correctly, I think you're anthropomorphizing AI too much.  In the scenario where AI is well aligned to our values (other scenarios probably not having much of a future to speak of), their role might be something without a good parallel to society today; maybe an active benevolent deity without innate desires.

Of course you can assume that, but it is such a powerful assumption, that you can use it to derive nearly anything at all. (Just like in math if you divide by zero). Of course optimizing for survival is not important, if you cannot die by definition.

I think it's possible we would still die at the end of the universe.  But even without AI, I think there would be a future point where we can be reasonably certain of our control over our environment - to the extent that, barring probably-unprovable problems like the simulation hypothesis, we can rest easy until then.

Comment by Jozdien on Utopic Nightmares · 2021-05-20T17:38:02.897Z · LW · GW

Boredom only matters if they consider it a negative, which isn't a necessity (boredom being something we can edit if needed).

Re: resilience, I agree that those are good reasons to not try anything like this today or in the immediate future. But at a far enough point where we understand our environment with enough precision to not have to overly worry about external threats, would that still hold? Or do you think that kind of future isn't possible? (Realistically, and outside the simplified scenario, AGI could take care of any future problems without our needing to trouble ourselves).

Comment by Jozdien on Re: Fierce Nerds · 2021-05-19T21:07:59.781Z · LW · GW

I'm generally biased against attempts to describe the traits of a certain class of people, because it's so easy to think you're getting more than you are (horoscopes, for example).  And starting the article, I thought that of a couple of lines - most people I know, people who are definitely not nerds, are fierce in their element and reserved otherwise.  Some lines also stood out to me as just the right combination of self-deprecating and self-empowering to make me want to believe them (Partly perhaps because they're not emotionally mature enough to distance themselves from it...).

That said, the rest of it started to be specific enough to convince me.  I'm more (openly, at least) confident than my friends, I lose a lot more steam working around archaic rules than my classmates (although that might just be a difference in exposure), I think I work on more unorthodox things than people I know, and I definitely laugh a lot more than other people.  My first thoughts still stand, but I think it's a good article.

Comment by Jozdien on Against Against Boredom · 2021-05-19T20:54:43.158Z · LW · GW

The way to recognize local extrema is exactly to walk away from them far enough. If you know of another way, please elaborate, because I'd very much like to sell it myself if you don't mind.

Thinking back on it after a couple days, I think my reply about finding maxima was still caught up in indirect measures of achieving hedons.  We'd have complete control over our sensory inputs; we could give ourselves exactly whatever upper bound there exists.  Less "semi-random walks in n-space to find extrema" and more "redefine the space so that where you're standing goes as high as your program allows".

The only way to keep surviving in a dynamic potential landscape, is to keep optimizing, and not tap yourself on the shoulder for a job well done, and just stop.

For what it's worth, that was just to stay within the fictional scenario I was describing.  In a more realistic version of that scenario, we would task AGI with the optimizing; we'd relatively just be standing around anyway.

In that scenario, though: why do we consider growth important?  You talked about surviving, and I'm not clear on that - this was assuming a point in the future where we don't have to worry about existential risk (or what remains is the kind we provably can't solve, like the universe ending) or the death of sentient lives.  Yes, growth allows for more sophisticated methods of value attainment, but I also said it's plausible that we reach a point where we start getting diminishing returns.  Then, are the benefits of that future potential worth not reaping what we have to its maximum for a longer stretch of time?

Comment by Jozdien on Utopic Nightmares · 2021-05-18T11:35:44.331Z · LW · GW

However, people enjoy having friends that aren't just clones of themselves.

This is true, yeah, but I think that's more a natural human trait than something intrinsic to sentient life.  It's possible that entirely different forms of life would still be happier with different types of people, but if that happiness is what we value, wouldn't replicating it directly achieve the same effect?

Comment by Jozdien on Utopic Nightmares · 2021-05-18T11:35:17.804Z · LW · GW

If we hold diversity as a terminal value then yes, a diverse population of less-than-optimal people is better.  But don't we generally see diversity less as a terminal value than something that's useful because it approximates terminal values?  

Comment by Jozdien on Utopic Nightmares · 2021-05-16T21:24:43.938Z · LW · GW

I would consider such a technology abhorrent for the same reason I consider taking a drug that would make me feel infinitely happy forever abhorrent.

What reasons are those?  I can understand the idea that there are things worse than death, but I don't see what part of this makes it qualify.

Comment by Jozdien on Against Against Boredom · 2021-05-16T21:22:50.907Z · LW · GW

Yep, that is precisely the point of simulated annealing.  Allowing temporary negative values lets you escape local maxima.

In that future scenario, we'd have a precise enough understanding of emotions and their fulfilment space to recognize local maxima.  If we could ensure within reason that being caught in local maxima isn't a problem, would temporary negative values still have a place?
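(For anyone unfamiliar with the reference, here's a toy sketch of the mechanism we're talking about - the objective and cooling schedule are made up purely for illustration.)

```python
import math
import random

def simulated_annealing(f, x0, steps=10_000, t0=1.0, cooling=0.999):
    """Maximize f by sometimes accepting worse points - the 'temporary
    negative values' above - which is what lets the walk escape local maxima."""
    x, best = x0, x0
    temperature = t0
    for _ in range(steps):
        candidate = x + random.gauss(0, 0.1)  # small random move
        delta = f(candidate) - f(x)
        # Always accept improvements; accept regressions with a probability
        # that shrinks as the temperature cools.
        if delta > 0 or random.random() < math.exp(delta / temperature):
            x = candidate
        if f(x) > f(best):
            best = x
        temperature *= cooling
    return best

# A toy objective with several local maxima.
f = lambda x: math.sin(3 * x) - 0.1 * (x - 2) ** 2
print(simulated_annealing(f, x0=0.0))
```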

Comment by Jozdien on Against Against Boredom · 2021-05-16T20:24:36.913Z · LW · GW

My main point, though, is that I would consider eliminating boredom wrong because it optimizes for our feelings  and not our well being .

I'd argue boredom is the element that makes us optimize for  over .  Boredom is why we value even temporary negatives, because of that subsequent boost back up, which indicates optimizing for .  Removing boredom would let you optimize for  instead.

One argument is Chesterton's fence, i.e. until we are quite sure why we dislike boredom so, we ought not mess with it.

I agree.  But this isn't something I propose we do now, or even at the moment we have the power to; we can hold off on considering it until we know the complete ramifications.  But we will know them at some point, and at that point, we can mess with it.

Another is that if humans ever become "content" with boredom, we cut off all possibility of further growth (however small).

Yeah, that is a downside.  But there may come a time at which growth steps diminish enough that we should consider whether the lost value from not directly maxing out our pleasure stats is worth the advancement.

On a side note, I really appreciate that you took the time to write a post in response.  That was my first post on LW, and the engagement is very encouraging.

Comment by Jozdien on Utopic Nightmares · 2021-05-16T20:03:06.958Z · LW · GW

Hence, finding new axioms and using them to prove new sets of statements is an endless problem.  Similar infinite problems exist in computability "Does program X halt?" and computational complexity "What is the Kolmogorov complexity of string X?" as well as topology "Are 2 structures which have properties X, Y, Z... in common homeomorphic?".

Aren't these single problems that deal with infinities, rather than each being an infinite sequence of problems?  Would that kind of infinity bring about any more of a sense of excitement or novelty than discovering, say, the nth digit of pi?

It's probably worth noting that my moral opinions seem to be in disagreement with many of the people around here, as I place much less weight on avoidance of suffering and experiencing physical bliss and much more on novelty of experience, helping others and seeking truth than the general feeling I get from people who want to maximize qualies or don't consider orgasmium morally repugnant.

Out of curiosity, if we did run out of new exciting truths to discover and there was a way to feel the exact same thrill and novelty directly that you would have in those situations, would you take it?

Comment by Jozdien on Utopic Nightmares · 2021-05-16T13:22:03.387Z · LW · GW

My point was that we value new experiences. Future forms of life, like humans after we can alter our preferences at root level, might not find that preference as necessary. So we could reach a level where we don't have to worry about bad possibilities, and call it "paradise".

Comment by Jozdien on Utopic Nightmares · 2021-05-16T12:34:19.451Z · LW · GW

We value new experiences now because without that prospect we'd be bored, but is there any reason why new experiences necessarily form part of a future with entirely different forms of life?  There could be alien species that value experiences the more they repeat them; to them, new experiences may be seen as unnecessary (I think that's unlikely under evolutionary mechanisms, but not under sentient design).

Comment by Jozdien on Utopic Nightmares · 2021-05-16T12:34:02.866Z · LW · GW

I was thinking about less ideal variations more than explicitly harmful ones.  If we're optimizing for a set of values - like happiness, intelligence, virtuousness - through birth and environment, then I thought it unlikely that we'd have multiple options with the exact same maximal optimization distribution.  If there are, then yeah, the identical-people part of it doesn't hold - if there's more than one option, it's likely that there are many, so there might not be identicals at all.

Comment by Jozdien on Utopic Nightmares · 2021-05-15T13:49:08.602Z · LW · GW

I don't think I see how Gödel's theorem implies that.  Could you elaborate?  Concept space is massive, but I don't see it being literally unbounded.

Certainly not enough to be worth lobotomizing the entire human race in order to achieve some faux state of "eternal bliss".

If we reach the point where we can safely add and edit our own emotions, I don't think removing one emotion that we deem counterproductive would be seen as negative.  We already actively try to suppress negative emotions today; why would removing one altogether be more significant in an environment where its old positives don't apply?

Either we continue to grow forever, discovering new and exciting things, or we die out.  Any kind of "steady state utopia" is just an extended version of the latter.

Why is a steady state utopia equal to us dying out?  I can see why that would be somewhat true given the preference we give now to the state of excitement at discovery and novelty, but why objectively?

Comment by Jozdien on What Do We Mean By "Rationality"? · 2021-05-14T20:34:25.604Z · LW · GW

It depends on the case, I would think.  There are instances when you're most probably benefited by trading off epistemic rationality for instrumental, but in cases where things are too chaotic to get a good estimate and the tradeoff seems close to equal, I would personally err on the side of epistemic rationality.  Brains are complicated; forcing a placebo effect might have ripple effects across your psyche - like an increased tendency to shut down that voice in your head that speaks up when you know your belief is wrong on some level (very speculative example) - for limited short-term gain.

Comment by Jozdien on Challenge: know everything that the best go bot knows about go · 2021-05-12T20:47:24.197Z · LW · GW

That's evidence for it being harder to know what a Go bot knows than to know what a chess bot does, right?  And if I'm understanding the situation with Go correctly, those years were in significant part due to computational constraints, which would imply that better transparency tools, or making the bots more human-understandable, still wouldn't come near letting a human know what they know, right?

Comment by Jozdien on Challenge: know everything that the best go bot knows about go · 2021-05-11T17:44:26.480Z · LW · GW

I'm not clear on your usage of the word "know" here, but if it's in a context where knowing and level of play are significantly correlated, I think GMs not knowing would be evidence against it being possible for a human to know everything that game bots do.  GMs don't just spend most of their time and effort on it; they're also prodigies in the sport.

Comment by Jozdien on Challenge: know everything that the best go bot knows about go · 2021-05-11T10:08:13.595Z · LW · GW

How comparable are Go bots to chess bots in this?  Chess GMs at the highest level have been using engines to prepare for decades; I think if they're similar enough, that would be a good sample to look at for viable approaches.

Comment by Jozdien on Open and Welcome Thread - May 2021 · 2021-05-04T18:17:15.121Z · LW · GW

80,000 Hours' data suggests that people are the bottleneck, not funding.  Could you tell me why you think otherwise?  It's possible that there's even more available funding in AI research and similar fields that are likely sources for FAI researchers.

Comment by Jozdien on Open and Welcome Thread - May 2021 · 2021-05-03T20:06:01.963Z · LW · GW

Thanks!  2006 is what I remember, and what my older brother says too.  I was 5 though, so the most I got out of it was learning how to torrent movies and Pokemon ROMs until like 2008, when I joined Facebook (at the time to play an old game called FarmVille).

Comment by Jozdien on Thoughts on Re-reading Brave New World · 2021-05-03T10:52:56.594Z · LW · GW

I'm far from calling Brave New World a utopia, but I also couldn't easily describe it as a dystopia.  People are happy with their lives for the most part, but there's no drive to push average levels of happiness up, and death still exists.  The best dystopian argument I can see is that there's no upward trend of good associated with scientific advancement, but even this needn't necessarily be true, because of the islands where the most unorthodox thinkers are sent, presumably without having to worry about the consequences of their actions there.  I think something approximating a utopia by our standards would likely involve mass genetic equalization (but, you know, in an upward direction), controlled environments, and easy access to hedons.

Comment by Jozdien on Open and Welcome Thread - May 2021 · 2021-05-03T08:29:58.805Z · LW · GW

I’m Jose.  I’m 20.  This is a comment many years in the making.

I grew up in India, in a school that (almost) made up for the flaws in Indian academia, as a kid with some talent in math and debate.  I largely never tried to learn math or science outside what was taught at school back then.  I started using the internet in 2006, and eventually started to feel very strongly about what I thought was wrong with the institutions of the world, from schools to religion.  I spent a lot of time then trying to make these thoughts coherent.  I didn’t really think about what I wanted to do, or about the future, in anything more than abstract terms until I was 12 and a senior at my school recommended HPMOR.

I don’t remember what I thought the first time I read it, up to wherever it had reached at the time (I think it was chapter 95).  I do remember that on my second read, by the time it had reached chapter 101, I stayed up the night before one of my finals to read it.  That was around the time I started to actually believe I could do something to change the world (there may have been a long phase where I phrased it as wanting to rule the universe).  But apart from an increased tendency in my thoughts toward refining my belief systems, nothing changed much, and Rationality: From AI to Zombies remained on my TBR until early 2017, which is when I first lurked LessWrong.

I had promised myself at the time that I would read all the Sequences properly regardless of how long it took, and so it wasn’t until late 2017 that I finally finished it.  That was a long and arduous process, much of it coming from the many inner conflicts I was actually noticing for the first time.  Some of the ideas were ones I had tried to express long ago, far less coherently.  It was epiphany and turmoil at every turn.  I graduated school in 2018; I’d eventually realize this wasn’t nearly enough though, and it was pure luck that I chose a computer science undergrad because of vague thoughts about AI, despite not yet having decided what I really wanted to do.

Over my first two years in college, I tried to actually think about that question.  By this point, I had read enough about FAI to know it to be the most important thing to work on, and that anything I did would have to come back to that in some way.  Despite that, I still stuck to some old wish to do something that I could call mine, and shoved the idea of direct work in AI Safety in the pile where things that you consciously know and still ignore in your real life go.  Instead, I thought I’d learned the right lesson and held off on answering direct career questions until I knew more, because I had a long history of overconfidence in those answers (not that that’s a misguided principle, but there was more I could have seen at that point with what I knew).

Fast forward to late 2020.  I had still been lurking on LW, reading about AI Safety, and generally immersing myself in the whole shindig for years.  I had even applied to the MIRIx program early that year, and held off on starting operations after March.  I don’t remember exactly what made me start to rethink my priors, but one day I was shaken by the realization that I wasn’t doing anything the way I should have been if my priorities were actually what I claimed they were - to help the most people.  I thought of myself as very driven by my ideals, and being wrong at the level where you don’t even notice the difficult questions wasn’t comforting.  I went into existential panic mode, trying to seriously recalibrate everything about my real priorities.

In early 2021, I was still confused about a lot of things - not least because being from my country limits the options one has to work directly in AI Alignment, or at least makes them more difficult.  That was a couple months ago.  I found that after I took a complete break from everything for a month to study for subjects I hadn’t touched in a year, all the cached thoughts that had bred my earlier inner conflicts had mostly disappeared.  I’m not entirely settled yet, though; it’s been a weird few months.  I’m trying to catch up on a lot of lost time: learning math (I’m working through MIRI’s research guide), focusing my attention a lot more on specific areas of ML (I lucked out again there and did spend a lot of time studying it broadly earlier), and generally trying to get better at things.  I’ll hopefully post here, if infrequently.  I really hope this comment doesn’t feel like four years.

Comment by Jozdien on Best empirical evidence on better than SP500 investment returns? · 2021-04-25T14:16:03.512Z · LW · GW

What would your advice be on other cryptocurrencies, like Ethereum or minor coins that aren't as fad-prone and presumably cheaper to mine?

Comment by Jozdien on Covid 4/22: Crisis in India · 2021-04-24T09:00:46.018Z · LW · GW

I haven’t been tracking India, but I don’t have any reason to think there was a large behavioral change since February that could take us from static to doubling every week. What could this be other than the variant?

I’m putting it at about 85% that the surge in India’s primary cause is that the B.1.617 variant is far more infectious than their previous variant.

I don't have much better data about how much of the surge to attribute to the variant, because as far as I've seen there isn't any.  But in the weeks before the surge began, there was a sizeable contingent of people predicting that cases would go up very badly in April even before news of the variant, because of religious festivals (the Kumbh Mela in mid-April saw millions of people in crowds without masks, after many of the priests tested positive) and regional elections (it's standard practice to have huge crowds surrounding candidates on the road as they pass by, and most places didn't stop this year) happening at the same time.

This article gives a bit of credence to the possibility that some countries had populations with higher prior immunity than others.  I can't say whether this is true or not, but if so, it's possible that's where the new variant differs.  And because India was hit far less hard than people expected, many weren't following mask and distancing protocols by April, which would have made conditions very opportune for the new variant.

Comment by Jozdien on Why We Launched LessWrong.SubStack · 2021-04-01T13:22:50.741Z · LW · GW

According to the post The Present State of Bitcoin, the value of 1 BTC is about $13.2.  Since the title indicates that this is, in fact, the present value, I'm inclined to conclude that those two websites you linked to are colluding to artificially inflate the currency.  Or they're just wrong, but the law of razors indicates that the world is too complicated for the simplest solution to be the correct one.