Posts

Is a near-term, self-sustaining Mars colony impossible? 2020-06-03T22:43:08.501Z · score: 12 (7 votes)
ESRogs's Shortform 2020-04-29T08:03:28.820Z · score: 7 (1 votes)
Dominic Cummings: "we’re hiring data scientists, project managers, policy experts, assorted weirdos" 2020-01-03T00:33:09.994Z · score: 81 (21 votes)
'Longtermism' definitional discussion on EA Forum 2019-08-02T23:53:03.731Z · score: 17 (6 votes)
Henry Kissinger: AI Could Mean the End of Human History 2018-05-15T20:11:11.136Z · score: 46 (10 votes)
AskReddit: Hard Pills to Swallow 2018-05-14T11:20:37.470Z · score: 17 (6 votes)
Predicting Future Morality 2018-05-06T07:17:16.548Z · score: 25 (10 votes)
AI Safety via Debate 2018-05-05T02:11:25.655Z · score: 40 (9 votes)
FLI awards prize to Arkhipov’s relatives 2017-10-28T19:40:43.928Z · score: 12 (5 votes)
Functional Decision Theory: A New Theory of Instrumental Rationality 2017-10-20T08:09:25.645Z · score: 36 (13 votes)
A Software Agent Illustrating Some Features of an Illusionist Account of Consciousness 2017-10-17T07:42:28.822Z · score: 16 (3 votes)
Neuralink and the Brain’s Magical Future 2017-04-23T07:27:30.817Z · score: 6 (7 votes)
Request for help with economic analysis related to AI forecasting 2016-02-06T01:27:39.810Z · score: 6 (7 votes)
[Link] AlphaGo: Mastering the ancient game of Go with Machine Learning 2016-01-27T21:04:55.183Z · score: 14 (15 votes)
[LINK] Deep Learning Machine Teaches Itself Chess in 72 Hours 2015-09-14T19:38:11.447Z · score: 8 (9 votes)
[Link] First almost fully-formed human [foetus] brain grown in lab, researchers claim 2015-08-19T06:37:21.049Z · score: 7 (8 votes)
[Link] Neural networks trained on expert Go games have just made a major leap 2015-01-02T15:48:16.283Z · score: 15 (16 votes)
[LINK] Attention Schema Theory of Consciousness 2013-08-25T22:30:01.903Z · score: 3 (4 votes)
[LINK] Well-written article on the Future of Humanity Institute and Existential Risk 2013-03-02T12:36:39.402Z · score: 16 (19 votes)
The Center for Sustainable Nanotechnology 2013-02-26T06:55:18.542Z · score: 4 (11 votes)

Comments

Comment by esrogs on The rationalist community's location problem · 2020-09-24T21:56:59.718Z · score: 2 (1 votes) · LW · GW

even accounting for the weather

Taking the Bay's weather to be a negative? Melbourne must really be idyllic...

Comment by esrogs on Why GPT wants to mesa-optimize & how we might change this · 2020-09-23T04:17:03.273Z · score: 4 (2 votes) · LW · GW

Oh, I had actually seen that paper. Forgot that they did that though. Thanks!

Comment by esrogs on Why GPT wants to mesa-optimize & how we might change this · 2020-09-21T20:39:45.175Z · score: 7 (4 votes) · LW · GW

This makes me wonder, how would Monte Carlo tree search do for GPT? And could you do AlphaGo-style IDA?

You'd need an analogue of the value network (or value head). (Where current GPT seems analogous to the policy network.) And then ideally you'd also want some analogue of winning / losing to ground out the evaluation.

Maybe you could set it up like this --

  1. start with a task description like, "write a poem in the style of e.e. cummings about the romance between cryptographers Alice and Bob"
  2. feed the task description (with some boilerplate) into GPT, and have it start generating continuations
  3. do MCTS on the continuations; use your value network (head) to evaluate the continuations vs the task description; update the policy network based on the evaluations
  4. include an "is done" head and evaluate it to decide when to stop
  5. send completed works to humans to provide feedback; the feedback should include separate scores for "good so far" for the value head, and "is a completed work" for the "is done" head.

I'd be curious whether this would enable GPT to significantly improve. Specifically, would you be able to generate longer works with less intervention?
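(A toy sketch of what the search step, roughly steps 2-4 above, might look like. The `policy_logits`, `value_head`, and `is_done` functions here are made-up stand-ins for the heads described above, not anything in the real GPT API, and the AlphaGo-style training part -- updating the policy on the evaluations, plus human feedback -- isn't shown.)

```python
import random

# Hypothetical stand-ins for the model heads described above.
VOCAB = ["the", "and", "Alice", "Bob", "cipher", "moonlight", ",", "<END>"]

def policy_logits(tokens):
    """Hypothetical policy head: scores over possible next tokens."""
    rng = random.Random(hash(tuple(tokens)) & 0xFFFFFFFF)
    return {tok: rng.random() for tok in VOCAB}

def value_head(task, tokens):
    """Hypothetical value head: 'good so far' score for a partial continuation."""
    rng = random.Random(hash((task, tuple(tokens))) & 0xFFFFFFFF)
    return rng.random()

def is_done(tokens):
    """Hypothetical 'is done' head: here, just an end-token / length check."""
    return bool(tokens) and (tokens[-1] == "<END>" or len(tokens) >= 30)

def choose_next_token(task, tokens, n_rollouts=40, depth=6):
    """Crude stand-in for MCTS: sample candidate next tokens from the policy,
    roll each out greedily for a few steps, score the rollouts with the value
    head, and pick the candidate with the best average score."""
    totals, counts = {}, {}
    for _ in range(n_rollouts):
        logits = policy_logits(tokens)
        # exploration noise plays the role of MCTS's exploration bonus
        first = max(VOCAB, key=lambda t: logits[t] + random.gauss(0, 0.5))
        rollout = tokens + [first]
        while not is_done(rollout) and len(rollout) < len(tokens) + depth:
            rollout.append(max(VOCAB, key=policy_logits(rollout).get))
        score = value_head(task, rollout)
        totals[first] = totals.get(first, 0.0) + score
        counts[first] = counts.get(first, 0) + 1
    return max(totals, key=lambda t: totals[t] / counts[t])

task = "write a poem about the romance between cryptographers Alice and Bob"
tokens = []
while not is_done(tokens):
    tokens.append(choose_next_token(task, tokens))
print(" ".join(tokens))
```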

Comment by esrogs on Comparing Utilities · 2020-09-15T04:47:12.656Z · score: 3 (2 votes) · LW · GW

G: A 50/50 gamble between A and B

This bit in the first drawing should say "... between A and C", right?

Comment by esrogs on rohinmshah's Shortform · 2020-09-14T23:01:41.813Z · score: 4 (2 votes) · LW · GW

It's totally possible that there's no way to get to reliably correct answers and you instead want decisions that are good regardless of what the answer is.

Good point!

Comment by esrogs on Open & Welcome Thread - September 2020 · 2020-09-14T19:55:24.013Z · score: 5 (3 votes) · LW · GW

It seems like teaching that fact, and instilling moral uncertainty in general into children

I would guess that teaching that fact is not enough to instill moral uncertainty. And that instilling moral uncertainty would be very hard.

Comment by esrogs on rohinmshah's Shortform · 2020-09-14T19:03:38.283Z · score: 6 (3 votes) · LW · GW

Sounds like you probably disagree with the (exaggeratedly stated) point made here then, yeah?

(My own take is the cop-out-like, "it depends". I think how much you ought to defer to experts varies a lot based on what the topic is, what the specific question is, details of your own personal characteristics, how much thought you've put into it, etc.)

Comment by esrogs on rohinmshah's Shortform · 2020-09-14T18:35:18.575Z · score: 6 (3 votes) · LW · GW

Options 1 & 2 sound to me a lot like inside view and outside view. Fair?

Comment by esrogs on ESRogs's Shortform · 2020-09-14T08:55:41.426Z · score: 2 (1 votes) · LW · GW

Also asked on Twitter here.

Comment by esrogs on ESRogs's Shortform · 2020-09-14T08:55:04.606Z · score: 3 (2 votes) · LW · GW

If GPT-3 is what you get when you do a massive amount of unsupervised learning on internet text, what do you get when you do a massive amount of unsupervised learning on video data from cars?

(In other words, can we expect anything interesting to come from Tesla's Dojo project, besides just better autopilot?)

Comment by esrogs on The Box Spread Trick: Get rich slightly faster · 2020-09-07T17:00:53.011Z · score: 2 (1 votes) · LW · GW

the FDIC does not directly buy the spread; they allow the CD to exist, which the buyers should have access to

I'm not sure how important FDIC insurance is to the story here, but it's worth noting that it has a $250k per-account limit. So I don't think financial institutions would have access to an unlimited amount of it.

Comment by esrogs on The Box Spread Trick: Get rich slightly faster · 2020-09-07T16:52:42.764Z · score: 4 (2 votes) · LW · GW

But if selling the box spreads doesn't change the account value much - if the brokerage marks you as having $100k cash but also a ~$100k liability - then after withdrawing, the value is lower than before you sold the spreads.

This is correct. The brokerage shows cash + a liability.
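(A tiny worked example of that accounting, with made-up numbers:)

```python
stocks = 200_000

# Sell a box spread: receive cash now, owe roughly the same amount at expiry.
box_cash = 100_000
box_liability = -100_000

equity_before = stocks                                      # 200,000
equity_after_selling = stocks + box_cash + box_liability    # still 200,000
equity_after_withdrawing = equity_after_selling - 100_000   # 100,000 -- lower than before

print(equity_before, equity_after_selling, equity_after_withdrawing)
```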

Comment by esrogs on The Box Spread Trick: Get rich slightly faster · 2020-09-07T01:37:13.258Z · score: 4 (2 votes) · LW · GW

Not sure I've understood your question, but if you withdraw from your account, the value goes down, whether you've sold a box spread or just have a margin loan.

Comment by esrogs on The Box Spread Trick: Get rich slightly faster · 2020-09-07T01:18:29.568Z · score: 3 (2 votes) · LW · GW

Worth noting that you may also want to use this trick if you are using portfolio margin to lever up your portfolio more than 1x. Instead of withdrawing to a CD, you leave the cash in your account, and end up with a lower margin rate than you'd otherwise get.
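(For concreteness, a rough sketch of how one might back out the financing rate implied by selling a box spread -- the strikes, price, and dates below are made up for illustration, not a quote from any real market:)

```python
from datetime import date

def implied_box_rate(proceeds_per_box, strike_width, expiry, today):
    """Annualized financing rate implied by selling a box spread:
    you receive `proceeds_per_box` now and owe `strike_width` x 100
    per contract at expiry, so the rate is the discount between the two."""
    years = (expiry - today).days / 365.25
    face = strike_width * 100  # cash owed at expiry, per contract
    return (face / proceeds_per_box) ** (1 / years) - 1

# Hypothetical example: a 1000-point SPX box sold ~2 years out for $97,500
rate = implied_box_rate(
    proceeds_per_box=97_500,
    strike_width=1_000,
    expiry=date(2022, 12, 16),
    today=date(2020, 9, 7),
)
print(f"Implied borrowing rate: {rate:.2%}")  # ~1.1% annualized
```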

Comment by esrogs on Isn't Tesla stock highly undervalued? · 2020-08-30T03:39:25.021Z · score: 2 (1 votes) · LW · GW

Hmm, good question. I actually don't have a good sense of how much of that is assets that remain on the balance sheet (e.g. manufacturing equipment) vs stuff like paying their employees to figure out how to make the batteries better, or how to set up the factory more efficiently.

And paying employees to figure stuff out would show up as just costs on the balance sheet, rather than assets, unless you actually patented something, right? (I don't actually know accounting super well.)

EDIT: It's intuitive to me though that when you're growing revenue at 50% annually (as Tesla has since 2013), you're just not going to be able to spend money as efficiently as when you're operating at, or close to, the same scale from one year to the next. (That is, efficiently in terms of short-term profit and loss. From a long-term perspective it might be very efficient, if the spending is enabling future growth.)

I'm not sure exactly where that's most likely to show up on an accounting statement. But I do think it's what you'd expect by default. And it's how startups operate. You spend to grow, and you don't expect to be profitable right away. Tesla should perhaps be thought of as a rare public company that still operates like a high-growth startup.

Comment by esrogs on misc raw responses to a tract of Critical Rationalism · 2020-08-24T09:24:05.123Z · score: 2 (1 votes) · LW · GW

I run the discussion forums where we've been continuously open to debate and questions since before LW existed.

Which discussion forums are you referring to?

Comment by esrogs on Highlights from the Blackmail Debate (Robin Hanson vs Zvi Mowshowitz) · 2020-08-24T09:10:55.948Z · score: 2 (1 votes) · LW · GW

The blackmailers need to end up sometimes disclosing in order for their threats to be credible, no?

Comment by esrogs on SDM's Shortform · 2020-08-23T23:44:35.896Z · score: 4 (2 votes) · LW · GW

This is a series of scenarios describing a human taking actions in the world, designed to test physical and common-sense reasoning.

Nitpick: why is this part bolded? Surely this was written by you and not GPT-3, right? (It's becoming a new pet peeve of mine when people are not super clear and consistent with their formatting of GPT-3 conversations. I find it often takes me a significant amount of effort to figure out who said what if a convention is not followed religiously within a transcript.)

Comment by esrogs on Highlights from the Blackmail Debate (Robin Hanson vs Zvi Mowshowitz) · 2020-08-23T08:46:51.640Z · score: 3 (2 votes) · LW · GW

I'm sure I could go on for quite a while. Legalizing blackmail means that people are de-facto incentivized to exploit information when it would harm people, because their payout stops being derived from the public interest

What if blackmail was only made legal in cases where there was a significant public benefit? (See here.)

I believe that would still be a departure from laws today (you're not allowed to pay off whistleblowers, are you? Or to accept such a payment?), but it seems like it might not have the downside that you and others have highlighted, of incentivizing people to find ways to hurt others by sharing information in general.

Comment by esrogs on Highlights from the Blackmail Debate (Robin Hanson vs Zvi Mowshowitz) · 2020-08-23T08:24:55.577Z · score: 2 (1 votes) · LW · GW

Suppose it were easy to split potential blackmail scenarios into whistleblower scenarios (where the value of the information to society is quite positive) and embarrassing-but-useless scenarios (where it is not).

Would you support legalizing blackmail in both classes, or just the first class?

EDIT: I ask because, I think (at least part of) your argument is that if we legalize paying off whistleblowers, then that's okay, because would-be-whistleblowers still have an incentive to find wrongdoing, and the perpetrators still have an incentive to avoid that wrongdoing (or at least hide it, but hiding has costs, so on the margin it should mean doing less). (This reminds me a bit of white hat hackers claiming bug bounties.)

Meanwhile, the anti-blackmail people argue that you don't want people to be incentivized to find ways to harm each other.

So, if you could cleanly separate out the public benefit from the harm, on a case-by-case basis (rather than having to go with simple heuristics like "gossip is usually net beneficial"), it seems like you might be able to get to a synthesis of the two views.

Comment by esrogs on Matt Botvinick on the spontaneous emergence of learning algorithms · 2020-08-23T07:26:11.692Z · score: 4 (2 votes) · LW · GW

Very fair.

Comment by esrogs on Risk is not empirically correlated with return · 2020-08-22T17:37:56.740Z · score: 2 (1 votes) · LW · GW

When one says "riskier assets should have higher expected returns than less risky assets", one imagines comparing two assets that cost the same because the demand for them is the same.

Not sure if you'll remember what you meant 10 years ago, but... what did you mean by "cost the same"?

When you compare the returns you're normalizing by the number of dollars put in, so you're not comparing one share price directly to another or something like that. And if you measure cost relative to assets, or present value of expected future profits, then the claim is that they do not cost the same -- that the riskier asset costs less.

I guess if you think of cost as compared to risk-adjusted return, then yes you're expecting them to cost the same in those terms (and that's why the riskier asset with the same expected return costs less in nominal terms). Is that what was meant, or something else?
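(A toy example of the interpretation I'm gesturing at, with made-up numbers: two assets with the same expected payoff, where the riskier one trades at a lower price and therefore has the higher expected return.)

```python
expected_payoff = 110.0  # both assets pay ~$110 in a year, on average

safe_price = 100.0   # low-variance payoff -> buyers pay close to the expectation
risky_price = 95.0   # high-variance payoff -> risk-averse buyers pay less

safe_return = expected_payoff / safe_price - 1    # 10.0%
risky_return = expected_payoff / risky_price - 1  # ~15.8%

print(f"safe:  {safe_return:.1%}")   # same expected payoff...
print(f"risky: {risky_return:.1%}")  # ...but the cheaper (riskier) asset has the higher expected return
```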

Comment by esrogs on Will OpenAI's work unintentionally increase existential risks related to AI? · 2020-08-22T07:07:48.011Z · score: 4 (2 votes) · LW · GW

Similarly, to do scaling or bitter-lesson-style research, you need resources * fanaticism < a constant. This constant seems to be very small, which is why compute had to drop

A fairly minor point, but I don't quite follow the formula / analogy. Don't resources and fanaticism help you do the scaling research? So shouldn't it be a > sign rather than <, and shouldn't we say that the constant is large rather than small?

Comment by esrogs on Matt Botvinick on the spontaneous emergence of learning algorithms · 2020-08-22T06:35:33.642Z · score: 4 (2 votes) · LW · GW

I didn't feel like any comment I would have made would have anything more to say than things I've said in the past.

FWIW, I say: don't let that stop you! (Don't be afraid to repeat yourself, especially if there's evidence that the point has not been widely appreciated.)

Comment by esrogs on What posts on finance would your find helpful or interesting? · 2020-08-22T05:41:29.858Z · score: 8 (4 votes) · LW · GW

I'd be interested in reading a review / summary of PIMCO's An Asset Allocation Primer: Connecting Markowitz, Kelly and Risk Parity (or a post that otherwise covers the same material).

Comment by esrogs on What posts on finance would your find helpful or interesting? · 2020-08-22T05:39:43.596Z · score: 8 (5 votes) · LW · GW

People often don't know what they don't know. ;-) Were there specific concepts you and gilch were wondering whether people already know?

Comment by esrogs on Alex Irpan: "My AI Timelines Have Sped Up" · 2020-08-22T01:40:52.395Z · score: 4 (2 votes) · LW · GW

For this post, I’m going to take artificial general intelligence (AGI) to mean an AI system that matches or exceeds humans at almost all (95%+) economically valuable work.

I'm not sure this is such a good operationalization. I believe that if you looked at the economically valuable work that humans were doing 200 years ago (mostly farming, as I understand it), more than 95% of it is automated today. And we don't spend 95% of GDP on farming today.

So I'm not quite sure what the above means. Does it mean 95% of GDP spent on compute? Or unemployment at 95%? Or 95% of jobs that are done today by people being done then by computers? If that last one, then how do you measure it if jobs have morphed such that there's neither a human nor a computer clearly doing a job that today is done by a human?

I think that productivity is going to increase. And humans will continue to do jobs where they have a comparative advantage relative to computers. And what those comparative advantages are will morph over time. (And in the limit, if I'm feeling speculative, I think being a producer and a consumer might merge, as one of the last areas where you'll have a comparative advantage is knowing what your own wants are.)

But given that prices will be set based on supply and demand it's not quite obvious to me how to measure when 95% of economically valuable work is done by computers. Because, for a given task that involves both humans and computers, even if computers are doing "more" of the work, you won't necessarily spend more on the computers than the people, if the supply of compute is plentiful. So, in some hard-to-define sense, computers may be doing most of the work, but just measured in dollars they might not be. And one could argue that that is already the case (since we've automated so much of farming and other things that people used to do).

Alternatively, you could operationalize 95% of economically valuable work being done by computers as the total dollars spent on compute being 20x all wages. That's clear enough, I think, but I suspect it's not exactly what Alex had in mind. And also, I think it may just be a condition that never holds, even when AI is strongly superhuman, depending on how we end up distributing the spoils of AI, and what kind of economic system we end up with at that point.

Comment by esrogs on Alex Irpan: "My AI Timelines Have Sped Up" · 2020-08-22T01:12:11.435Z · score: 2 (1 votes) · LW · GW

Is the "Your distribution" one Alex's updated estimates, or is it your (jungofthewon's) distribution?

Comment by esrogs on Can crimes be discussed literally? · 2020-08-19T02:47:51.245Z · score: 2 (1 votes) · LW · GW

I'm not claiming that disparate enforcement does not exist. Just that it doesn't seem central (heh) to the idea of non-central criminality.

One can have a conception of crime that typically includes theft, assault, etc, and also be in favor of civil disobedience in some cases, while also not being in favor of disparate enforcement. In fact, I would expect that to be the common case.

I guess your last line about the majority's agenda suggests you think that's not the common case. Is that right? Perhaps that's the core of our disagreement.

Comment by esrogs on Can crimes be discussed literally? · 2020-08-19T02:42:05.192Z · score: 4 (2 votes) · LW · GW

Fair enough. Thanks for the reply.

Comment by esrogs on 10 Fun Questions for LessWrongers · 2020-08-18T21:00:24.842Z · score: 3 (2 votes) · LW · GW

#5: Many of these terms I've known and used in conversation long before Overcoming Bias and Less Wrong.

For this one, I tried to answer based on whether I had learned the terms from LW or not, but I wasn't 100% sure whether that's what the survey wanted. Would definitely have appreciated that being stated more clearly.

Comment by esrogs on 10 Fun Questions for LessWrongers · 2020-08-18T05:14:28.217Z · score: 4 (3 votes) · LW · GW

I have taken the survey.

Comment by esrogs on My Understanding of Paul Christiano's Iterated Amplification AI Safety Research Agenda · 2020-08-17T21:47:00.985Z · score: 2 (1 votes) · LW · GW

Ah that sounds reasonable. As a stickler about abbreviations, I'll just add that if you do want to stick with "IDA" and "Iterated Amplification" then it seems a little odd to use the phrase "stands for" to connect the two. (Since the D in "IDA" does not stand for anything in "Iterated Amplification".)

 

EDIT: My inner pedant would be satisfied if this was simply worded as something like, "IDA stands for Iterated Distillation and Amplification, which we will refer to as Iterated Amplification for short." (I'll leave it to you to decide whether my inner pedant is your target audience. ;-) )

Comment by esrogs on Does crime explain the exceptional US incarceration rate? · 2020-08-17T21:03:40.443Z · score: 5 (3 votes) · LW · GW

Another way out for them is that maybe all the countries with similar homicide rates should imprison people as much as the US, but their institutions don’t function well enough.

I'd be interested to see more analysis of this hypothesis. It seems quite likely to me that the US having an unusual combination of violence and competence is at least part of the explanation for the high incarceration rate. Would be curious to know how much.

Maybe a simple follow-up analysis would just be to check the ratios of homicides to people-in-prison-for-homicide for various countries (rather than comparing against incarceration rate more generally). Would be interesting to know on a crime-by-crime basis how the US compares to other countries.

Comment by esrogs on My Understanding of Paul Christiano's Iterated Amplification AI Safety Research Agenda · 2020-08-17T18:34:02.371Z · score: 4 (2 votes) · LW · GW

IDA stands for Iterated Amplification

Should be: Iterated Distillation and Amplification.

Comment by esrogs on Matt Botvinick on the spontaneous emergence of learning algorithms · 2020-08-13T02:34:26.369Z · score: 6 (3 votes) · LW · GW

Yang et al.

Wang et al.?

Comment by esrogs on Investment idea: basket of tech stocks weighted towards AI · 2020-08-13T02:11:20.471Z · score: 5 (3 votes) · LW · GW

I wonder how much of OpenAI they got for their $1B...

Yeah, me too. My guess is around 30%, but my error bars are pretty wide. Would not be shocked if you told me it was 50%, or 10%.

(I think they probably didn't give up more than half the company to Microsoft. And even with an all-star team, it's hard to imagine that they'd be valued as a decacorn when they're pre-product and pre-revenue. So that makes me think their 2019 valuation was somewhere in the 2-10 billion range. Probably on the lower end of that.)

Comment by esrogs on Tips/tricks/notes on optimizing investments · 2020-08-13T01:49:57.701Z · score: 2 (1 votes) · LW · GW

it's otherwise hard or impossible to obtain leverage in a tax-free/deferred account (because they usually don't allow margin)

Another way to get leverage in an IRA is to buy long-dated call options (as recommended in Lifecycle Investing). Would you expect CEFs to be superior?
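(For reference, a rough sketch of how the effective leverage of a deep in-the-money call can be estimated -- the stock price, strike, premium, and delta below are hypothetical:)

```python
def call_leverage(stock_price, option_price, delta):
    """Approximate leverage of holding a call instead of the stock:
    dollar exposure controlled per dollar of premium paid."""
    return delta * stock_price / option_price

# Hypothetical deep ITM LEAPS call: stock at $300, 2-year $150-strike call
# trading around $160 with delta ~0.95
print(f"~{call_leverage(300, 160, 0.95):.1f}x exposure per dollar invested")  # ~1.8x
```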

Comment by esrogs on Investment idea: basket of tech stocks weighted towards AI · 2020-08-12T23:11:35.844Z · score: 9 (3 votes) · LW · GW

Depending on how much of that "10% non-FAANG" MSFT is, I might weight it a little higher, given their relationship with OpenAI. (Also Satya seems good.)

Comment by esrogs on An Alien God · 2020-08-12T06:45:54.384Z · score: 2 (1 votes) · LW · GW

Why is a God that wants to remain invisible a more likely explanation than no God at all?

If you start with the thought, "Maybe God exists; maybe not." and then notice that natural explanations are sufficient for all the phenomena you observe, then shouldn't you move towards the "no God" hypothesis?

Comment by esrogs on Can crimes be discussed literally? · 2020-08-12T06:37:30.264Z · score: 2 (1 votes) · LW · GW

The idea that criminality becomes noncentral - and therefore the idea becomes not worth applying - because it's approved of by the majority is what King describes above as "difference made legal," and as such, the basic paradigm of injustice.

I don't quite follow this. As I understand, MLK says that a "difference made legal" is when a majority enforces laws on a minority that it won't enforce on itself. The part about enforcing the law on the minority seems critical to that definition.

Whereas, disparate enforcement doesn't seem like it would necessarily be part of "non-central criminality". The majority might approve of something that's technically illegal for anyone, regardless of whether they're a member of the majority group. In particular, people who approve of MLK's civil disobedience may approve of the same style of civil disobedience, no matter who does it. Or they may not. It would depend on the person. But disparate enforcement certainly doesn't seem baked into the idea.

in hindsight I think it was pretty disrespectful of you to use King as the example in the Noncentral Fallacy post.

I think you have some valid points in your comment, but it seems a bit harsh to say that it was disrespectful for Scott not to think of them (if indeed he didn't) when he made his post. If you only thought of this in hindsight, was it disrespectful of you not to think of it before either? (Sure, we should hold the author of a post to a higher standard than the reader, but still, I think the point you're making here is actually relatively subtle, in the scheme of things, so it strikes me as an overstatement to call out someone as disrespectful for not thinking of it.)

Comment by esrogs on Can you get AGI from a Transformer? · 2020-08-12T01:11:34.821Z · score: 4 (2 votes) · LW · GW

Then you turn the Transformer back on for the next word, with the previous writing as an additional input, and maybe it takes 30 Transformer layers just to get back to where it was, i.e. having re-internalized that concept from before.

Are the remaining 66 layers not enough to build on the concept? What if we're talking about GPT-N rather than GPT-3, with T >> 96 total layers, such that it can use M layers to re-internalize the concept and T-M layers to build on it?

Aren't our brains having to do something like that with our working memory?

Comment by esrogs on Can you get AGI from a Transformer? · 2020-08-12T01:00:06.619Z · score: 4 (2 votes) · LW · GW

I recognize that this kind of probabilistic programming stuff is not as "hot" as DNNs right now, but it's not neglected either; it's a pretty active area of CS research, moving forward each year.

Would the generative models not themselves be deep neural networks?

Comment by esrogs on is gpt-3 few-shot ready for real applications? · 2020-08-12T00:32:03.516Z · score: 2 (1 votes) · LW · GW

I'd assume a common use pattern will end up being something like: use few-shot, release, collect data, fine-tune, rinse-repeat.

This makes sense to me, and I wonder if we might even see some additional steps, like distilling to a smaller, cheaper model after fine-tuning, or even (at some point, for some problems) generating regular code.

Comment by esrogs on Raising funds to establish a new AI Safety charity · 2020-08-08T21:09:33.345Z · score: 4 (2 votes) · LW · GW

It seems to me like banning organizational announcements will make it much harder to get new initiatives off the ground.

Incidentally, anyone in this space trying to get a new initiative off the ground may want to apply to SurvivalAndFlourishing.org's first funding round. (We'll be providing funding, as well as some administrative support. Applications due by October 1st.)

[Edited to clarify that we won't provide full fiscal sponsorship. We will provide some administrative support via SEE (who is SAF's fiscal sponsor). Projects seeking long-term fiscal sponsorship may want to apply directly to SEE (perhaps after bootstrapping via SAF). See more details on our announcement page.]

Comment by esrogs on The EMH Aten't Dead · 2020-07-31T22:33:54.739Z · score: 2 (1 votes) · LW · GW

I tend to think of the market as a random walk during day trading, largely driven by viral memetic fashion trends on the scale of weeks to months, and then often only "weighing" effectively on the scale of months to years.

Is this meant to be in contrast to what a believer in the EMH would think? It sounds pretty similar to me.

Is your point just that the information the market is pricing in on short timescales is more about the demand to hold various assets than about new fundamental information about a company? I suppose that if you think of the EMH as saying the market only moves on fundamentals, then that would be a contrast. But I tend to think of the EMH as saying that all the fundamentals are priced in, not that nothing else is priced in.

Comment by esrogs on Tagging Open Call / Discussion Thread · 2020-07-29T21:55:35.077Z · score: 4 (2 votes) · LW · GW

That's helpful context. Makes sense, thanks!

Comment by esrogs on Tagging Open Call / Discussion Thread · 2020-07-29T21:54:08.030Z · score: 5 (3 votes) · LW · GW

The Concepts link is still not going anywhere for me :-/. When I inspect on Chrome, it shows up like this:

Comment by esrogs on Tagging Open Call / Discussion Thread · 2020-07-29T19:18:36.024Z · score: 4 (2 votes) · LW · GW

We'd like to build a small community around taggers – the people who maintain the ontology of LessWrong's library ensuring that desired information can always be found.

Maybe this is a dumb question but, is this actually needed?

Can we get what we want with people just randomly adding tags when they notice? Do we need to have people specializing on this?

I'd expect that a bunch of work would be needed up front to get the tag system into a good state, but I'd think most of that work has been done already (by the LW team, and others). And then going forward I'd expect much less work to be required. Am I missing something?

Comment by esrogs on Tagging Open Call / Discussion Thread · 2020-07-29T19:06:37.100Z · score: 2 (1 votes) · LW · GW

Also, maybe it was supposed to be "It's a good idea to start..."?