LessWrong 2.0 Reader
A fair point. I suppose part of my doubt, though, is exactly this: are most of these applications going to automate jobs, or merely tasks? And to what extent does contributing to either advance the know-how that might eventually help automate people?
jay-bailey on Ethics and prospects of AI related jobs?
I guess my way of thinking of it is - you can automate tasks, jobs, or people.
Automating tasks seems probably good. You're able to remove busywork from people, and since their job comprises many more things than that task, people aren't at risk of losing their jobs. (Unless you only need 10 units of productivity and each person is now producing 1.25 units, so you end up with 8 people instead of 10 - but a lot of teams could also put 12.5 units of productivity to good use.)
Automating jobs is...contentious. It's basically the tradeoff I talked about above.
Automating people is bad right now. Not only are you eliminating someone's job, you're eliminating most of the other things this person could do at all. Society has passed this person by, and I think we should either not do that, or make sure they still have sufficient resources and social value to thrive in society despite being automated out of an economic position. (If I were confident society would do this, I might change my tune about automating people.)
So, I would ask myself: what type of automation am I doing? Am I removing busywork, replacing jobs entirely, or replacing entire skillsets? (Note: you are probably not doing the last one. Very few, if any, are. The tech does not seem there atm. But maybe the company is setting itself up to do so as soon as it is, or something.)
And when you figure out what type you're doing, you can ask how you feel about that.
lorxus on Maximal Lottery-Lotteries Exist
To avoid confusion: this post and my reply to it were also on a past version of this post; that version lacked any investigation of dominance criterion desiderata for lottery-lotteries.
dr_s on Ethics and prospects of AI related jobs?
I suppose I'm mostly also looking for aspects of this I might have overlooked, or an inside perspective on any details from someone with relevant experience. I think I tend to err a bit on the side of caution, but ultimately I believe that "staying pure" is rarely a road to doing good (at most it's a road to not doing bad, but that's relatively easy if you just do nothing at all). Some of the problems with automation would have applied to many of the previous rounds of it, and those ultimately came out mostly good, I think; but it also somehow feels like This Time It's Different (then again, I do tend to skew towards pessimism and seeing all the possible ways things can go wrong...).
lorxus on Dyslucksia
Yeah, I myself subvocalize absolutely everything, and I am still horrified when I sometimes try any "fast" reading techniques - those drain all of the enjoyment out of reading for me, as if instead of characters in a story I were imagining them as p-zombies.
I speed-read fiction, too. When I do, though, I'll stop for a bit whenever something or someone new is being described, to give myself a moment to picture it in a way that my mind can bring up again as set dressing.
shoshannah-tekofsky on Dyslucksia
That sounds great! I have to admit that I still get a far richer experience from reading out loud than from subvocalizing, and my subvocalizing can't go faster than my speech. So it sounds like you have an upgraded form with more speed and richness, which is great!
shoshannah-tekofsky on Dyslucksia
Thanks! :D
Attention is a big part of it for me as well, yes. I find it very easy to notice when I skip words while reading out loud, and getting the cadence of a sentence right only works if you have a sense of how it relates to the previous and next one.
the-gears-to-ascension on Creating unrestricted AI Agents with a refusal-vector ablated Llama 3 70B
No particular disagreement that your marginal contribution is low and that this has the potential to be useful for durable alignment. Like I said, I'm thinking in terms of not burning days with what one doesn't say.
jay-bailey on Ethics and prospects of AI related jobs?
I think that there are two questions one could ask here:
Is this job bad for x-risk reasons? I would say the answer to this is "probably not" - if you're not pushing the frontier but are only commercialising already-available technology, your contribution to x-risk is negligible at most. Maybe you're very slightly adding to the generative AI hype, but that ship has somewhat sailed at this point.
Is this job bad for other reasons? That seems like something you'd have to answer for yourself based on the particulars of the job. It also involves some philosophical/political priors that are probably pretty specific to you. Like: is automating away jobs good most of the time? Argument for yes - it frees up people to do other work and increases the amount of stuff society can do in general. Argument for no - it takes away people's jobs, disrupts lives, and some people can't adapt to the change.
I'll avoid giving my personal answer to the above, since I don't want to bias you. I think you should ask how you feel about this category of thing in general, and then decide how picky or not you should be about these AI jobs based on that category of thing. If they're mostly good, you can just avoid particularly scummy fields and other than that, go for it. If they're mostly bad, you shouldn't take one unless you have a particularly ethical area you can contribute to.
vanessa-kosoy on Linear infra-Bayesian Bandits
My thesis is the same research I intended to do anyway, so the thesis itself is not a waste of time, at least.
The main reason I decided to do grad school is that I want to attract more researchers to work on the learning-theoretic agenda, and I don't want my candidate pool to be limited to the LW/EA sphere. Most qualified candidates would be people on an academic career track. These people care about prestige, and many of them would be reluctant to, e.g., work in an unknown research institute headed by an unknown person without even a PhD. If I secure an actual faculty position, I will also be able to direct grad students to do LTA research.
Other benefits include:
So far it's not obvious whether it's going to pay off, but I've already paid the vast majority of the cost anyway (i.e. the time I wouldn't have had to spend if I had just continued as an independent researcher).