App-Based Disease Surveillance After COVID-19 2020-04-10T18:52:52.941Z · score: 1 (2 votes)
How should I dispose of a dangerous idea? 2019-12-18T03:49:45.477Z · score: -16 (11 votes)
Unethical Human Behavior Incentivised by Existence of AGI and Mind-Uploading 2017-03-25T19:29:42.499Z · score: 1 (2 votes)


Comment by redman on Why isn’t assassination/sabotage more common? · 2020-06-10T05:08:48.763Z · score: 1 (1 votes) · LW · GW

Using the suggested framework, those would be class 2, not class 3. Accident, or successful class 3 assassination? As I understand it, analysis of these situations can be aided by wearing the correct headgear:

Comment by redman on Why isn’t assassination/sabotage more common? · 2020-06-05T03:26:11.972Z · score: 8 (3 votes) · LW · GW

At least one group of people appears to have accepted at least some of your argument.

Furthermore, assassinations fall into three categories:

Where the assassin takes credit afterwards (for intimidation, bragging to supporters, etc.); where a third party is blamed (to prevent reprisals being directed at the source); and where it is unclear that an assassination was performed (wow, IBM got screwed hard by that plane crash).

From the perspective in the OP, it is clear that there is a detection challenge. The most useful categories (to an assassin) are the third and the second; the least useful is the first. An external observer will see only the first category, and potentially a subset of the second, but is unlikely to see many members of the third.

Maybe they're very common, and you're just not seeing the obvious.

Comment by redman on The Greatest Host · 2020-05-13T03:51:19.453Z · score: 2 (2 votes) · LW · GW

And the absolute most attractive job for a psychopath is 'determiner of who is and is not neuropsychologically fit'.

If you're a shitty human there's money to be made as a child psychologist leveraging that. Abuses are common and it's not hard to issue a pitch like the following: "pay me 30k and I won't tell the court you're an unfit parent and send your kids to the foster care system".

Comment by redman on Prospecting for Conceptual Holes · 2020-04-26T12:22:24.798Z · score: 1 (1 votes) · LW · GW

Did you actually learn to speak Pirahã? Everyone I know totally refused to participate, so I dropped the idea.

Comment by redman on Solar system colonisation might not be driven by economics · 2020-04-22T09:29:28.371Z · score: 3 (2 votes) · LW · GW

C'mon: dark side of the moon space telescope and weapons test range.

Comment by redman on Databases of human behaviour and preferences? · 2020-04-22T09:27:30.297Z · score: 3 (2 votes) · LW · GW

gl

Comment by redman on What are some fun ways to spend $100,000? · 2020-04-22T08:56:06.489Z · score: 1 (3 votes) · LW · GW

A 100k potlatch is easy.

If male, do a dangerous looking activity that demonstrates your mastery of some activity with / in front of a group of your closest friends, then bring them to a wild party with plentiful dopamine agonists and easy sex with attractive women (cocaine and hookers).

If female, pay young and attractive females to do your bidding, dress yourself up to be as pretty as you can, and go somewhere where you can be seen by as many (ideally high status) people as possible.

Repeat until out of money.

Try to avoid alcohol, strip clubs, slot machines, and canned hunting, as they are cheap and shitty imitations.

Enron's inner circle did company retreats with atv riding followed by wild parties. Larry Ellison owns a fighter jet and pays a 25k noise fine whenever he takes it out at 3am.

I initially wrote a lot more, with activity recommendations, but really this covers it.

If you do want specific advice, it's available, just invite me to the party.

You know, for science.

Comment by redman on App-Based Disease Surveillance After COVID-19 · 2020-04-12T20:23:04.264Z · score: 1 (1 votes) · LW · GW

I would be surprised if you could not figure out whether two people are screwing, with moderate confidence, using nothing but demographic data and location-based metadata dumped into an ML algorithm. The price of false positives is a few unnecessary tests, and is therefore super low, so it doesn't even have to be that good of a system.
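For what it's worth, the co-location half of that pipeline is trivial to extract as a feature. A minimal sketch (the 50 m radius and 10-minute window are made-up illustrative thresholds, not values from any real system):

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two lat/lon points."""
    r = 6371000.0  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def colocation_score(pings_a, pings_b, radius_m=50.0, window_s=600):
    """Count time windows in which two users' pings fall within radius_m.

    pings_*: lists of (unix_timestamp, lat, lon). This score would be one
    input feature to the classifier; a real system would add demographics,
    dwell time, time of day, etc.
    """
    by_window = {}
    for t, lat, lon in pings_b:
        by_window.setdefault(int(t) // window_s, []).append((lat, lon))
    hits = set()
    for t, lat, lon in pings_a:
        w = int(t) // window_s
        if any(haversine_m(lat, lon, la, lo) <= radius_m
               for la, lo in by_window.get(w, ())):
            hits.add(w)
    return len(hits)
```

Repeated late-night co-location at residential coordinates is exactly the kind of feature a classifier would latch onto, which is the point: the signal is cheap to extract.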

Tinder data could be purchased to build out the initial algorithm, and if there are still challenges, volunteers could be solicited for validation data.

Mixing in public social media (Instagram) and actual communications content might help, though after validation of the location system it probably isn't necessary; it could be analyzed using robots rather than human review, which is apparently acceptable for other purposes.

Is it morally justified to use location metadata (GPS), public social media (Instagram), communications metadata (contact lists), and communication content to enumerate close contacts that may have spread respiratory viruses? If so, how could it be wrong to use the exact same dataset to fight other diseases with massive social burdens?

I mean sure, some people might cry about their privacy, but the data isn't theirs; courts have established that it belongs to the communication companies, all of whom are apparently on board with metadata-assisted surveillance for security and now public health.

Google and Apple are building the Bluetooth tracker, the Chinese GPS app with color coding for exposure risk is a thing, and Facebook checked Instagram to see if people in Italy are social distancing. Nobody is crying about any of these things. This is just a proposal to use the same datasets for the same reason.

Anyone who argues can be labelled pro-disease and pushed out of the public debate, just like anyone who complains about flu-tracking software can be asked, 'do you want old people to die?'

The initial system could be instrumented with a color-coding scheme and an app. When people go to doctors' offices, part of the basic vitals check at the start of a visit would be the doctor running a database check and suggesting testing for various conditions based on the color code. The app to check your own color-code status could be downloaded by interested users. 'Show your color' would become something people just ask each other during intimate encounters.

Most jurisdictions already require that positive tests for certain pathogens (STIs are on this list) be reported to a central authority by doctors, this is a long-standing thing and nobody with an opinion that matters questions it:

Governments could implement this proposal without much public debate by just rolling out a corona app, adding features for different classes of respiratory disease, then adding features for the rest of the 'reportable pathogens' that are transmitted by different means. The model could be developed in house using already available data (reported tests and location metadata).

We can look back at this post in five years and see how things have moved. Good luck stopping it if you think this is morally repugnant as you apparently do.

Comment by redman on Law school taught me nothing · 2020-04-12T11:51:34.795Z · score: 1 (1 votes) · LW · GW

So three years with a good Anki deck would be more valuable than sitting through classes, in terms of remembering the useful stuff?

Comment by redman on Transportation as a Constraint · 2020-04-09T11:30:21.632Z · score: 5 (3 votes) · LW · GW

This and the other 4 stories from a mathematician turned sci-fi author have aged well. Hope you enjoy them as much as I did when I read them.

Comment by redman on What Surprised Me About Entrepreneurship · 2020-04-06T12:31:43.644Z · score: -6 (4 votes) · LW · GW

Hy is amazing, and I want to learn more about your small data approach. I do not work in quant finance.

Comment by redman on What will the economic effects of COVID-19 be? · 2020-03-27T16:13:29.189Z · score: 3 (4 votes) · LW · GW

We need a rapid test to identify people with immunity, so they can go back to work.

Quarantine is worth it while hospitals are overwhelmed, but it is failing, and will continue to fail. The sooner we can identify people who have gotten it and recovered, then put those people to work in high-exposure occupations, the sooner we can restart the economy.

The classes of treatment needed here are as follows:

Rapid PCR test: expensive, and needed for surveillance of key workers as well as contact tracing. We have this, but it won't scale.

Vaccine: this enables eradication, but is a minimum of 18 months away, and the effort may fail.

Post-exposure prophylaxis: something given before or immediately after exposure that stops the disease in its tracks (healthcare workers need this; if antimalarials do the job, yay, we know those are safe and effective prophylactically).

Symptomatic relief: something given when early symptoms show, which prevents the development of catastrophic symptoms (the malaria drug will hopefully fit this).

Catastrophic care: more and better ventilators and ways of managing ARDS/cytokine storm. Good luck with this; we wanted it before this crisis.

Rapid antibody test: identifies patients who have been exposed. Two weeks after a positive test, if the patient hasn't been admitted to a hospital, it will be safe to say that that particular patient will not require that level of care and is probably no longer contagious.

We need the rapid antibody test, and we need about a billion of them, for rolling tests: if someone has a positive test and thinks they had symptoms more than a week prior, return them to work and tell them to avoid anyone with a negative test for a week, if they can.

Comment by redman on Rationalists, Post-Rationalists, And Rationalist-Adjacents · 2020-03-22T00:30:51.426Z · score: 1 (1 votes) · LW · GW

Is there a working definition for anti-rationalist?

Comment by redman on The questions one needs not address · 2020-03-21T23:58:58.000Z · score: 1 (1 votes) · LW · GW

I think that people have to use abstractions and beliefs taken on faith just to exist in the world. I also think that if you are not really disciplined about stating your 'I just assume blank to be true' beliefs, you will end up with a bunch of unstated assumptions worming their way into your psyche that will lead you to weird and unhealthy places (would SSC categorize this as 'Moloch'?)

Puritanical sexual beliefs (those practiced by 1600s Puritans in Massachusetts Bay) are in my opinion a good example of potentially healthy, but utterly irrational dogmas. To summarize (I have a source somewhere):

Married sex is a sacrament, unmarried sex is a grave sin. (Married being a social state that is easy for two people to enter but hard for them to leave)

Conceiving children is important and good.

Both parties must achieve orgasm during the act of intercourse to conceive a child.

Lack of sexual satisfaction is grounds for divorce by either party.

The details of 'sex' are explicitly left undefined.

One of those beliefs (orgasm and conception) is objectively false, but may be socially useful. The others are simply communally agreed upon truths.

Rationalism that leads to nihilistic hedonism and akrasia seems like a bad idea, even if life is pointless and the universe is actively hostile. I think I'm in step with this community's ethos when I assert that most people accidentally end up with a variety of false beliefs. I think I break with the rest of this community in my assertion that maintaining carefully chosen, but objectively false, beliefs is a good idea.

Life has been way better since becoming an adherent of

You should try it!

Comment by redman on More Dakka for Coronavirus: We need immediate human trials of many vaccine-candidates and simultaneous manufacturing of all of them · 2020-03-19T07:38:56.778Z · score: 5 (4 votes) · LW · GW

From a strictly lolbertarian perspective, good vaccines are a shitty business to be in, shitty vaccines are a great business to be in.

Real vaccine: capital intensive, may or may not succeed despite best efforts, side effects will probably be present, and requires that you produce insane amounts and successfully market to every potential customer or it doesn't work. In the best case, where you actually achieve maximum distribution, the pathogen is gone and you'll never sell another one, so if you didn't profit in the first rush, you'll never see your money again.

Alex Jones colloidal silver: I tell you it works; if you're not dead in a year, you're obviously another satisfied customer; here, buy another bullshit vaccine from me.

The second product is better for the seller than the first: capital investment is zero, marketing can find efficiencies in cost per customer, and repeat business is probable.

If you can come up with a business model that makes good vaccines profitable in the current environment, absent aggressive government subsidies, you should start that business and shout your model from the rooftops, because most people in biotech would (angrily) agree with my summary.

Source: have thrown this at many biotech executives and government officers involved in vaccine procurement. Have gotten head nodding.

Comment by redman on How can we protect economies during massive public health crises? · 2020-03-19T07:25:31.181Z · score: 1 (1 votes) · LW · GW

Here's the squeeze. Jobs slow down, people are told to quarantine, people who are paycheck to paycheck fail to make rent. People renting to them have mortgages, they don't get rent, they miss their mortgage payment.

That happens enough, bank is now a landlord, bank does not want to be a landlord, house rots, tenant is booted.

Same for a business, business operates on margin, customers stop paying, margin debts not paid, bank now owns failing business. Bank does firesale, functioning business is now a pile of auctioned off crap.

Monetary policy tools (zero interest rate overnight loans, no reserve requirement) don't trickle down to the masses. I can't get a zero interest loan, neither can anyone who is paycheck to paycheck, but I sure can get extortionate rates from a payday lender or a credit card! I also can't negotiate my existing rates to zero interest.

If someone wants a new loan for a new venture, now is a great time, maybe.

If you are the Fed and want to intervene to protect banks' margins, do what you're doing. The banks now own a bunch of small businesses and houses. If you want to help small business owners and homeowners, maybe buy soon-to-be-delinquent debt from the banks at a deep discount, forgive portions of it, and sell it back to the banks at a profit later. Is this quantitative easing?

If you want to protect everyone...I have this idea and need to be told why it is dumb (seriously, not an economist, pretty sure this sucks just don't see why)

Use IRS estimates of income from the previous year, create 'universal economic manipulation fund'. Every month, everyone, based on tax bracket gets either a check or a bill, the amount is unknown prior to the end of the month. In fat years, everyone pays, in bad times when the Fed needs to "drop $100s from helicopters" the check is big, biggest at the bottom. Nobody can rely on it as a source of living expenses and become 'welfare or UBI dependent', but a sudden windfall gets spent instantly by people at the bottom and lets people do things like make rent, pay utility bills, and go grocery shopping.

Again, I'm sure this is stupid, I just don't see why. If it isn't stupid, please call someone with access to Mnuchin and tell him.

Comment by redman on Coronavirus Justified Practical Advice Summary · 2020-03-19T06:26:25.559Z · score: 1 (1 votes) · LW · GW

Plan: continue to avoid contact with others as practicable; if sick, treat at home exactly as I would any other flu-like illness (rest, electrolytes, etc); begin using a pulse oximeter if sickness progresses to shortness of breath; if the number hits 90 or less, put on an N95 mask and go to the hospital.

Comment by redman on Assorted thoughts on the coronavirus · 2020-03-19T06:16:24.139Z · score: 1 (1 votes) · LW · GW

I made minimal lifestyle changes, made no unusual purchases, and did not participate in any of the shopping rushes.

I will continue grocery shopping as needed for perishable goods (which I expect to get cheaper) during off-peak hours (mostly empty means no need for me to burn an N95 mask--I have plenty), and my job has limited human contact and is unlikely to go bankrupt or otherwise cease to exist during the pandemic.

Unfortunately, as I now realize, I am a weirdo who is 'prepared' for this sort of thing at all times, and when this craziness ends, I should probably make a concerted effort to get out more.

Anyone else in the same boat?

Comment by redman on What are good ways of convincing someone to rethink an impossible dream? · 2020-03-19T05:48:42.093Z · score: 28 (11 votes) · LW · GW

I have done this successfully, though I am not a success story myself, so I must accept that I can be seen either as a wise person dissuading people from stupid ideas or as an idiot with no vision who would have told the Beatles that the guitar was on the way out. This process takes a decent amount of emotional energy and probably isn't worth it in most cases.

Bring forward more enthusiasm for their ridiculous idea than they have, suggest concrete actions that they can take which will provide real feedback. They will either shrink from doing them (and be annoyed that you smile and ask them 'so have you ______ yet' without fail every single time they see you), or actually go and try it, hopefully failing early.

Here are some examples:

Guy has a shitty movie idea he keeps pitching to everyone he knows (none of whom know anything about making movies), and uses this to dominate conversations. I bought him a copy of 'Save the Cat' (didn't work), asked him what he was doing on a specific weekend ('nothing'), and enthusiastically told him 'the (named) pitchfest is that weekend in Burbank, plane tickets are $200 and the hotel is cheap AF, the whole trip costs less than I've seen you spend on stupid shit, you can really make this movie dream happen!!!!!!'

'That's an awesome app idea, let's get it working in an Excel spreadsheet and see how it goes.'

'That's an awesome product idea, grab a domain, put up a blank page, spend $500 ($5000 for the richer people) on Google Display Network ads to drive traffic to an ad for the idea, and see what your click numbers look like.'

'Oh yeah, people would definitely pay for this art, throw up an ad on Fiverr and see if you get any bites.'

In every case, they either do it (rare, some people would rather have the identity as 'someone with ideas too good for the world' than having to actually risk failing at something and maybe losing that identity), or don't do it.

Three possible outcomes:

They do it, fail, and stop talking about it.

They don't do it, and stop talking about it because every time you bring it up, they get annoyed that they're being called out for being more talk than walk.

They do it and succeed, in which case, you were the person who believed in them and now a valued friend (also, you'll probably want them to keep talking about it, because the world has shown you that your model wasn't quite right)

I personally have some actually creative ideas (metric: can't find the idea expressed anywhere on the internet, and experts in the relevant field say they have not seen it before), more 'almost' creative ideas (stated by a kook somewhere in the fringes), and a lot of misguided ideas (experts in the field have seen similar ideas many times from people new to the field). Most are absolutely awful and none have made me rich. The ones directly related to my area of expertise are generally better than the ones which are not. The above advice for dealing with others mirrors the way I deal with my own ideas.

The ones I don't have the resources to test are available to anyone who cares to ask and possesses said resources btw.

I'm an undiscovered geniu...oh no.

Comment by redman on March Coronavirus Open Thread · 2020-03-11T04:04:20.761Z · score: 6 (7 votes) · LW · GW

In the USA, much of the workforce is paycheck to paycheck and does not have paid leave or short-term disability, and health issues are a common cause of bankruptcy. So the following is applicable to a lot of people who probably are not in this (rationalist/LessWrong) community:

If you don't work, you don't get paid, so you don't make rent. If you get quarantined by the state after a positive test, you don't go to work, you don't get paid, and you don't make rent. If you don't make rent, you probably will not have a place to live. If you end up in the hospital, you will probably go bankrupt, and may not have a place to live when you get out. Therefore with the incentives in front of you, take the following advice: 'do as you would normally, go to work no matter how you feel, do not under any circumstances get a coronavirus test, as that might provoke some authority to put you in a position where you cannot get paid.' This is particularly relevant if you live in a state that decides to be aggressive and punitive about quarantining.

Walmart appears to have realized this and is taking measures to adjust the incentives, but it's probably too little too late.

I also expect red states to adopt punitive legislation and pundits representing those communities to not understand why it makes things worse (I've seen right wing blog comments that go something like this: hurr in the days of bubonic plague communities in Italy bricked up houses around infected families, we r not hard enough nowdays durrr).

For the rest of us, recognize that when you interact with a gig worker or any other member of the public with those incentives, they have a high risk of exposure from the community, are unlikely to use PPE (not part of the uniform, not affordable, etc), and regardless of whether they are showing symptoms, will probably work until either prohibited from doing so, or physically unable to due to symptoms.

I'd prefer to live in a community that took effective large-scale action (lock down access to vulnerable groups, mass-test the healthy, and create strong incentives to self-isolate), but I don't, so whatever.

Comment by redman on Cortés, Pizarro, and Afonso as Precedents for Takeover · 2020-03-11T02:37:17.730Z · score: 3 (2 votes) · LW · GW

There is a sci-fi sourcebook called 'GURPS Ogre' about an AI dominated future that follows a similar line of reasoning, I think OP might enjoy it, and pdfs can be found online.

I also think that the story of Napoleon's conquests and the reasons for their success might be informative to your thesis, as disease is not (at least as far as I know) nearly as much of a factor.

Comment by redman on New article from Oren Etzioni · 2020-02-26T21:11:35.857Z · score: 3 (2 votes) · LW · GW

We passed 'limited variations of the Turing test' some time ago:

'Convince a human that he is interacting with a human' is a low bar. Furthermore, fully self-driving cars are available, just not at an acceptable level of reliability. If we set the bar for reliability at 'no worse than a texting teenager with a basic license', it's probably easily attainable today.

How about we apply performance metrics that would be impossible for a human to achieve to robot drivers and doctors, then move the goalposts every time it looks like they might be hit? This way, we can protect the status quo from disruption while pretending we're "just being cautious about existential risk".

Comment by redman on Absent coordination, future technology will cause human extinction · 2020-02-21T01:35:56.118Z · score: 1 (1 votes) · LW · GW

Separate paragraphs, intended to be separate issues.

A 7 on the INES every fifty years means an accident that requires an exclusion zone and long-term containment. The Chernobyl sarcophagus needs to be maintained, and the accident is not 'over'. Humans have committed to managing a problem (radioactive waste) that will be around longer than the human race has existed to the present point (current radwaste will still be a hazard 100,000 years into the future). We are doing fine so far; whether that holds remains to be seen.

I read somewhere that there is enough 'fossil carbon' that, if all of it were burned, it would cause a runaway, Venus-like greenhouse effect that destroys the biosphere and renders the earth uninhabitable. The timeframe I saw for this was '500ish years'. Stephen Hawking said something similar and was panned for it:

There's an anthropic bias here: 'We are not dead, therefore we have not already drawn a black ball.' If we had, we would not be around to discuss it, so we are unlikely ever to be in a position where we look backwards and say unambiguously 'yep, that was definitely a black ball, we are irreparably screwed'.

Comment by redman on Absent coordination, future technology will cause human extinction · 2020-02-04T09:08:23.352Z · score: -7 (5 votes) · LW · GW

If the current statistics hold of one Chernobyl/Fukushima/Mayak-level disaster every fifty years, we already drew a black ball.

If business as usual with carbon dioxide pollution continues unabated until earth is uninhabitable in 500 years, we also already drew a black ball.

If the time it takes for a black ball to kill us is more than a few generations it's really hard to plan around fixing it.

Comment by redman on What Money Cannot Buy · 2020-02-03T18:39:29.273Z · score: 3 (2 votes) · LW · GW

Anonymity helps.

By just being known as rich or occupying a prominent position, you will always attract people who want a piece, and will try to figure out what it is that you need in a friend or subcontractor and attempt to provide it, often extremely successfully. I mean, as Eliezer has said (paraphrasing, hopefully faithfully), the kinds of people you find at 'high status' conventions are just a better class of people than the hoi polloi.

With a degree of anonymity, it becomes somewhat straightforward to search for things like the farmer's cowpox cure, because professional purveyors of things to the wealthy do not waste their time crafting pitches for nobodies.

But then, you also have the separate problem as a nobody that 'somebodies' do not return your calls.

Comment by redman on Create a Full Alternative Stack · 2020-02-01T00:57:53.186Z · score: 1 (1 votes) · LW · GW

I read this and thought of organized religion. Unable to figure out why though.

Comment by redman on human psycholinguists: a critical appraisal · 2020-01-16T20:20:27.795Z · score: 1 (1 votes) · LW · GW

Thank you for this!

It seems that my ignorance is on display here, the fact that these papers are new to me shows just how out of touch with the field I am. I am unsurprised that 'yes it works, mostly, but other approaches are better' is the answer, and should not be surprised that someone went and did it.

It looks like the successful Facebook AI approach is several steps farther down the road than my proposal, so my offer is unlikely to provide any value beyond the intellectual exercise for me; I'm probably not actually going to go through with it--by the time the price drops that far, I will want to play with the newer tools.

Waifulabs is adorable and awesome. I've mostly been using style transfers on still life photos and paintings, I have human waifu selfie to anime art on my to do list but it has been sitting there for a while.

Are you planning integration with DeepAnime and maybe WaveNet so your perfect waifus can talk? Though you would know if that's a desirable feature for your userbase better than I would...

On the topic, it looks like someone could, today, convert a selfie of a partner into an anime face, train wavenet on a collection of voicemails, and train a generator using an archive of text message conversations, so that they could have inane conversations with a robot, with an anime face reading the messages to them with believable mouth movements.

I guess the next step after that would be to analyze the text for inferred emotional content (simple approaches with NLP might get really close to the target here; pretty sure they're already built), and warp the voice/eyes for emotional expression (I think WaveNet can do this for voice, if I remember correctly?).

Maybe a deepfake-type approach that transforms the anime girls using a palette of a set of representative emotion faces? I'd be unsurprised if this has already been done, though maybe it's niche enough that it has not been.

This brings to mind an awful idea: In the future I could potentially make a model of myself and provide it as 'consolation' to someone I am breaking up with. Or worse, announce that the model has already been running for two weeks.

I suspect that older-style, still-image-heavy anime could probably be crafted today entirely using generators (limited editing of the writing, no animators or voice actors). Is there a large archive of anime scripts somewhere that a generator could train on, or is that data all scattered across privately held archives?

What do you think?

Comment by redman on human psycholinguists: a critical appraisal · 2020-01-16T17:47:40.361Z · score: 1 (1 votes) · LW · GW

When TWDNE went up, I asked 'how long will I have to read and mash refresh before I see a cute face with a plot I would probably be willing to watch while bored at 2am?' The answer was 'less than 10 minutes', and this is either commentary on the effectiveness of the tool, or on my (lack of?) taste.

I have a few pieces of artwork I've made using StyleGAN that I absolutely love, and absolutely could not have made without the tool.

When I noticed a reply from 'gwern', I admit I was mildly concerned that there would be a link to a working webpage and a PayPal link. I'm pretty enthusiastic about the idea but have not done anything at all to pursue it.

Do you think training a language model, whether GPT-2 or a near-term successor, entirely on math papers could have value?

Comment by redman on In Defense of the Arms Races… that End Arms Races · 2020-01-16T03:28:10.297Z · score: 5 (3 votes) · LW · GW

Here's an example from nature on snake venom that 'won' an evolutionary arms race.

From the abstract: "Examination of the prothrombin target revealed endogenous blood proteins are under extreme negative selection pressure for diversification, this in turn puts a strong negative selection pressure upon the toxins as sequence diversification could result in a drift away from the target. Thus this study reveals that adaptive evolution is not a consistent feature in toxin evolution in cases where the target is under negative selection pressure for diversification."

There are implications here for arms races generally. When you target something 'core' to the target that cannot be easily randomized to develop a diverse and therefore adaptive strategy, it is possible to 'win' an evolutionary arms race in the long term.

Essentially Eliezer's blind idiot god writes itself into a corner when it can no longer randomize a section under attack, and just sort of fails.

Comment by redman on How would we check if "Mathematicians are generally more Law Abiding?" · 2020-01-14T05:08:00.774Z · score: 2 (2 votes) · LW · GW

Carefully define 'mathematician'. Working definition: one who has obtained a degree in mathematics at some level (is an undergrad a mathematician, or do you need a PhD? Do physics degree holders count as mathematicians? What about accounting and finance degrees?).

List the number of non-degree holders in the USA, the number of degree holders, and the number of mathematics degree holders at the chosen threshold; calculate ratios.

List the number of incarcerated people and the number of those with degrees; calculate the ratio. List the number of incarcerated math degree holders; calculate the ratio to degree holders.

From these ratios, you should be able to see whether mathematicians are proportionately, under-, or overrepresented in incarcerated populations relative to both the similarly educated and the general population.
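The arithmetic of approach 1 is just a relative-risk calculation. A minimal sketch (all counts below are made-up placeholders for illustration, not real statistics):

```python
def representation_ratio(group_incarcerated, group_total,
                         pop_incarcerated, pop_total):
    """Incarceration rate of a group relative to a reference population.

    1.0 means proportionate representation; below 1.0 the group is
    underrepresented among the incarcerated; above 1.0, overrepresented.
    """
    group_rate = group_incarcerated / group_total
    pop_rate = pop_incarcerated / pop_total
    return group_rate / pop_rate

# Hypothetical counts, for illustration only:
rr_vs_everyone = representation_ratio(50, 1_000_000,          # math degree holders
                                      2_000_000, 250_000_000)  # general population
rr_vs_degreed = representation_ratio(50, 1_000_000,           # math degree holders
                                     30_000, 60_000_000)       # all degree holders
```

Computing the ratio against both baselines matters: 'mathematicians go to prison less than average' could just be 'degree holders go to prison less than average' in disguise.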

Second approach: submit surveys to known math degree holders and known holders of similar levels of education. Ask 'do you do things you feel to be unethical on a regular basis?' and 'do you do things that a typical person would feel to be unethical, if they understood it, on a regular basis?' along with some lie scales (to determine whether the person is lying to the test to improve their image; these scales are commonly used on psychological tests).

Check the statistical power of your comparison.
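As a rough illustration of that power check, here is a normal-approximation power calculation for a two-proportion comparison. The 10% vs 15% response rates and the 300-per-group sample size are hypothetical:

```python
from statistics import NormalDist

def two_proportion_power(p1, p2, n_per_group, alpha=0.05):
    """Approximate power of a two-sided two-proportion z-test
    (normal approximation, equal group sizes)."""
    z = NormalDist()
    p_bar = (p1 + p2) / 2
    se0 = (2 * p_bar * (1 - p_bar) / n_per_group) ** 0.5      # SE under H0
    se1 = (p1 * (1 - p1) / n_per_group
           + p2 * (1 - p2) / n_per_group) ** 0.5              # SE under H1
    z_crit = z.inv_cdf(1 - alpha / 2)
    effect = abs(p1 - p2)
    return (1 - z.cdf((z_crit * se0 - effect) / se1)
            + z.cdf((-z_crit * se0 - effect) / se1))

# Hypothetical: 10% vs 15% 'yes' rates, 300 respondents per group.
# Power comes out well under the usual 0.8 target, so this survey
# design would need substantially larger samples.
print(round(two_proportion_power(0.10, 0.15, 300), 2))
```

If the power comes back low, the survey approach can't distinguish mathematicians from the comparison group even when a real difference exists, and approach 1 has to carry the argument on its own.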

Between those two methods, you should get a reasonable answer. I haven't googled and won't do it myself, but I think this project, at least approach 1, is doable. Without approach 2, in the case that mathematicians are simply better at avoiding prison than the general population, the results of approach 1 will incorrectly make it look like Eliezer is right.

I do not know Eliezer, but have read a decent amount of his work, though not this. I offer the following counterpoint:

A mathematician who has chosen to use his math talents to sell used cars has probably calculated what he views to be prices that maximize his profits, taking into account anything you, the consumer, could do to impose costs on him for selling an overpriced lemon.

With the mathematician, 'market for lemons' economics are in play, and probably well executed, and therefore, I should avoid negotiating with him, as it is likely to go badly for me. A non mathematician may have made errors or been lazy in his pricing, creating an opportunity for deals.

If Eliezer considers himself to be a mathematician, this assertion is inherently even more suspect, as it is a member of a group ascribing positive characteristics to himself on the basis of his group membership. (I'm a Mathematician you can trust me, because Mathematicians don't lie, because they're Mathematicians....and I can prove it using the language of Mathematicians, which is known as Mathematics, something I've studied more than you... you're still skeptical? What do you have against Mathematicians you lunatic?!)

On the other hand, a pure mathematician who is dumping his car on craigslist is a mathematician who may not be happy about having to be a used car salesman, and in addition to being as honest as anyone else, is likely to find the 'applied' process of figuring out an asking price for the car distasteful. If the buyer is lucky, the mathematician will not need to be talked out of an elaborate payment scheme, will have calculated the value of the car lazily (find the book value, round up to the nearest $100), and will actually have the paperwork, so the buyer can hand over cash and the whole business can be concluded quickly.

Comment by redman on Repossessing Degrees · 2020-01-14T04:32:28.332Z · score: 12 (4 votes) · LW · GW

In the USA, professional, driver's, and recreational licenses are revoked for non-payment of child support.

It isn't a perfect analogy but it is 'revocation of a credential due to failure to pay a debt'. I hear it works awesome at getting people to pay child support, doesn't expand the prison population, and is on balance a good thing for society.

As I understand it, the initial purpose for student loans is to ensure that the professional classes with long training times are staffed with motivated indentured servants (if only the idle rich could afford to train as surgeons, they would not be able to have skilled surgeons attend to them during their idleness). This initial purpose has been perverted by the entrance into the education market of bad goods (useless degrees that do not actually provide a profession) as a way of exploiting unsophisticated buyers with access to cheap credit.

These unsophisticated buyers would probably respond to a degree repo with 'oh, you mean I can't say I went to DeVry? Oh no... the horror... I guess when my buddy wants to hire me, I'll have to tell him my degree got repo'd and that he'll have to placate HR in order to bring me on staff'.

In my experience, the degree got the first job, which got the second job, and has never gotten me any meaningful status boost after that.

Comment by redman on human psycholinguists: a critical appraisal · 2020-01-14T03:36:12.346Z · score: 1 (1 votes) · LW · GW

So in your analogy, would the 'seed text' provided to gpt-2 be analogous to a single keyframe provided to an artist, and gpt-2's output be essentially what happens when you give an interpolator (I know nothing about the craft of animation and am probably using this word wrong) a 'start' frame but no 'finish' frame?

I would argue that an approach in animation where a keyframe artist, unsure exactly where to go with a scene, draws the keyframe, hands it to interpolating animators with the request to 'start drawing where you think this is going', and looks at the results for inspiration for the next keyframe, will probably result in a lot of wasted effort by the interpolators, and is probably inferior (in terms of cost and time) to plenty of other techniques available to the keyframe artist; but also that it has a moderate to high probability of eventually inspiring something useful if you do it enough times.

In that context, I would view the unguided interpolation artwork as 'original' and 'interesting', even though the majority of it would never be used.

Unlike the time spent by animators interpolating, running trained gpt-2 is essentially free. So even though this approach will produce garbage the overwhelming majority of the time, it is moderately to very likely to find interesting approaches at a probability that is low but workable for human reviewers (meaning the human must review dozens of worthless outputs, not the hundreds of millions implied by monkeys on typewriters).

I suspect that a mathematician with the tool I proposed could type in a thesis, see what emerges, and have a moderate to high probability of eventually encountering some text that inspires something like the following thought: 'well, this is clearly wrong, but I would not have thought to associate this thesis with that particular technique, let me do some work of my own and see if there is anything to this'.

I view the output in that particular example as 'encountering something interesting'. I put the probability of it occurring at least once, if my proposed tool were developed, at moderate to high, and I expect the cost in time spent reviewing outputs would not be high enough to give the approach negative value to the proposed user community.

I price the value of bringing this tool into existence, in terms of the resources available to me personally, at 'worth a bit less than $1000 USD'.

Comment by redman on human psycholinguists: a critical appraisal · 2020-01-14T02:01:02.380Z · score: 1 (1 votes) · LW · GW

The disagreement is about whether 'remixing' can result in 'originality'.

We are in agreement about the way gpt-2 works and the types of outputs it produces; we just disagree about whether they meet our criteria for 'interesting' or 'original'. I believe that our definitions of those two things necessarily include a judgement call about the way we feel about 'originality' and 'insight' as human phenomena.

Some attempts to explicate this agreement to see if I understand your position:

I argue that this track, which is nothing but a mashup of other music, stands as an interesting creative work in its own right. I suspect that you disagree, as it is just 'remixing':

I would also believe that gpt-2, properly trained on the whole of the Talmud (and nothing else), with the older stuff prioritized, could probably produce interesting commentary, particular if specific outputs are seeded with statements like 'today this <thing> happened so therefore'.

I think you would ascribe no value to such commentary, due to the source being a robot remixer, rather than a scholar, regardless of the actual words in the actual output text.

If I remember the gpt-2 reddit thread correctly, most comments were trash, some of them made reading the rest of it worthwhile to me.

Just like a 'real' reddit thread.

Comment by redman on human psycholinguists: a critical appraisal · 2020-01-12T16:26:52.262Z · score: 2 (2 votes) · LW · GW

My standard for interesting poetry is clearly different from (inferior to?) yours. If I understand you correctly, I predict that you think artwork created with StyleGAN by definition cannot have artistic merit on its own.

So we appear to be at an impasse. I do not see how you can simultaneously dismiss the value of the system for generating things with artistic merit (like poetry, mathematics, or song lyrics) and share the anxieties of the developers about its apparent effectiveness at generating propaganda.

AI systems have recently surprised people by being unusually good at strange things, so I think optimism for a creative profession like pure math is warranted. In short, the potential payoff (contributions to pure math) is massive; the risk is just an amount of money that is actually fairly small in this industry, plus the egos of people who believe that 'their' creative field (math) could not be conquered by ML models that can only do 'derivative' things.

I assert that at some point in the next two years, there will exist an AI engine which when given the total body of human work in mathematics and a small prompt (like the one used in gpt-2), is capable of generating mathematical works that humans in the field find interesting to read, provided of course that someone bothers to try.

If the estimated cost for actually training the model I described above, and thus ending this discussion, drops below $1000, and it has not been done, I will simply do it.

Comment by redman on human psycholinguists: a critical appraisal · 2020-01-11T22:42:01.683Z · score: 1 (1 votes) · LW · GW

I assert that if gpt-2 can write interesting looking poetry, it can probably do interesting mathematics.

I think that there is a wide space between 'boring and useless' and 'groundbreakingly insightful', and that this particular system can generate things in that space.

I think my view here is 'less than cautious' optimism. I am not sure what it takes to justify the expenditure to openai to test this assertion. It sounds like a fairly expensive project (data collection, training time), so better people than I will have to decide to throw money at it, and that decision will be made using criteria that are opaque to me.

Comment by redman on human psycholinguists: a critical appraisal · 2020-01-11T13:20:39.363Z · score: 1 (1 votes) · LW · GW

More awesome than my puny mind can imagine.

I'd like the raw model to be trained on raw copies of as many mathematical papers and texts as possible, with 'impact factor' used as weights.

I'd also, while I'm dreaming, like to see it trained on only the math, without the prose, and a second model trained to generate the prose of math papers solely from the math contained within.

I think math papers are a better source than reddit news articles because pure mathematics is systematic, and all concepts, at least in theory, can be derived from first principles covered in other places.
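One crude way to implement the impact-factor weighting described above is weighted sampling over the training corpus. Everything here, including the toy corpus and its weights, is a hypothetical sketch:

```python
import random

# Hypothetical corpus: (paper_text, impact_factor) pairs. In practice
# these would be full paper texts and journal impact factors.
corpus = [
    ("Annals paper on elliptic curves ...", 8.2),
    ("obscure preprint on lattice sums ...", 0.4),
    ("solid journal paper on graph minors ...", 2.1),
]

def sample_training_batch(corpus, k):
    """Draw a training batch, weighting papers by impact factor so
    higher-impact work is sampled proportionally more often."""
    texts, weights = zip(*corpus)
    return random.choices(texts, weights=weights, k=k)

batch = sample_training_batch(corpus, k=8)
```

This only sketches the data-selection side; actually fine-tuning a language model on the resulting batches is a separate (and much more expensive) problem.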

Ideally, the system would generate papers with named authors and citation lists that help guide the operators to humans capable of reviewing the work.

If you believe that one single useful mathematical insight could be found with my proposed approach, it's borderline criminal to not devote effort to getting it built.

Comment by redman on How should I dispose of a dangerous idea? · 2020-01-10T19:25:38.737Z · score: 1 (1 votes) · LW · GW

Maybe some sort of prohibition on collecting if it can be proven that you chose to publicize it would be a good idea.

I argue that in your scenario, the reverse patent is unambiguously a good thing, as it bought us 40 years.

The same ugly incentives apply to pollution, 'I can get money now, or leave an unpolluted world to posterity', you don't need too many people to think that way to get some severe harm.

Comment by redman on human psycholinguists: a critical appraisal · 2020-01-08T09:28:53.793Z · score: 1 (1 votes) · LW · GW

Ability to write is probably independent of other skills. Just look at James Joyce's aphasia as reflected in Finnegans Wake. Who would expect anything of intellectual value to come out of a language generator trained on internet news articles?

I wonder how gpt-2 does if it is trained on the contents of arxiv.

Comment by redman on 1987 Sci-Fi Authors Timecapsule Predictions For 2012 · 2020-01-08T09:01:07.090Z · score: 1 (1 votes) · LW · GW

Orson Scott Card and Wolverton 2+3 were pretty solid

Comment by redman on How should I dispose of a dangerous idea? · 2020-01-08T08:51:39.804Z · score: 1 (1 votes) · LW · GW

I think I have a general solution. It requires altruism, but at the social rather than the individual level.

This concept of a reverse patent office as described is based on the idea that if a toxic meme emerges, it would be more harmful if it had emerged in the past, and that any delay is good.

The reverse patent office accepts encrypted submissions from idea generators, and observes new ideas in the world.

When the reverse patent office observes a harmful idea in existence in the world, and assesses it as worth 'having been delayed' based on pre-existing criteria, submitters who previously conceived of the toxic meme submit decryption instructions. Payout is given based on the amount of time that has passed since initial receipt of the encrypted submission, using a function that gives compounding interest as time passes (thus not creating an incentive to 'harvest' a harmful idea by disclosing it after submission).
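The payout rule sketched above, compounding with delay so that early disclosure never beats continued silence, could look something like this. The 5% annual rate is an arbitrary placeholder:

```python
def payout(base_reward, years_delayed, annual_rate=0.05):
    """Reward for a verified early submission, compounding with the
    time the idea stayed unpublished, so 'harvesting' an idea by
    disclosing it early always pays less than staying quiet longer.
    The base reward and rate are placeholder policy parameters."""
    return base_reward * (1 + annual_rate) ** years_delayed

# The incentive runs the right way: longer silence, bigger payout.
assert payout(1000, 10) > payout(1000, 2) > payout(1000, 0)
```

Any monotonically increasing payout schedule would do; compounding just makes the "never disclose early" property easy to verify.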

Does this solution work? Who would fund such an organization? What would be its criteria?

Comment by redman on How should I dispose of a dangerous idea? · 2019-12-21T18:40:14.093Z · score: 1 (1 votes) · LW · GW

Thank you! I was hoping that someone was aware of some clever solution to this problem.

I believe that AI is at least as inherently unsafe as HI, 'Human Intelligence'. I do think that our track record with managing the dangers of HI is pretty good, in that we are still here, which gives me hope for AI safety.

I wonder how humans would react to a superintelligent AI stating 'I have developed an idea harmful to humans, and I am incentivized to publicize that idea. I don't want to do harm to humans, can you please take a look at my incentives and tell me if my read of them is correct? I'll stick with inaction until the analysis is complete.'

Is that a best-case scenario for a friendly superintelligence?

Comment by redman on How should I dispose of a dangerous idea? · 2019-12-21T18:19:43.263Z · score: 2 (2 votes) · LW · GW

To use the zombie-words example I raised in a previous comment.

Imagine a "human shellcode compiler", which requires a large amount of processing power and can generate a phrase that a human who hears it will instantly obey, and no countermeasures are available other than 'not hearing the phrase'. Theoretically, this could have good applications if very carefully controlled ("stop using heroin!").

Imagine someone runs this to make a command like 'devour all the living human flesh you can find'. The compiler is salvageable, this particular compiled command is not.

I believe my idea to be closer to the second example than the first, though not nearly to the same level of harm. Based on the qualia computing post linked elsewhere, my most ethical option is 'be quiet about this one and hope I find a better idea to sell'.

Comment by redman on How should I dispose of a dangerous idea? · 2019-12-21T18:11:19.852Z · score: 2 (2 votes) · LW · GW

Thank you for this, it is the type of response I was looking for, and I now have a new blog to read regularly.

Comment by redman on How should I dispose of a dangerous idea? · 2019-12-21T18:10:01.101Z · score: 1 (1 votes) · LW · GW

Thank you for this, I believe you have described my intent accurately.

To clarify, everything before 'per request from the comments...' was the original post.

Comment by redman on How should I dispose of a dangerous idea? · 2019-12-19T23:35:33.056Z · score: 1 (1 votes) · LW · GW

So in your read of the downvotes, the most common interpretation of the OP was 'you community members should pay me, an outsider, to be virtuous, or else' rather than 'hi fellow rationalists, does anyone know of resources that would allow me to profit from the practice of our shared values'?

Comment by redman on How should I dispose of a dangerous idea? · 2019-12-19T22:28:20.655Z · score: 3 (2 votes) · LW · GW

Unfortunately, you really nailed the issue. Out of an abundance of caution, I won't use your violent analogy of a bio-weapon here, as that could be construed as furthering the 'blackmail' misinterpretation of my writing.

To use the analogy I added to the OP, there may in theory be good reasons to market things to vulnerable populations (like children), and there may in theory be good reasons to study nicotine marketing (market less harmful products to existing users), but someone with knowledge of both fields who realizes something like 'by synthesizing existing work on nicotine marketing with existing work on marketing things to children, I have identified a magic formula that will double the number of smokers in the next generation' has discovered a dangerous idea.

If for example, this person is employed at a marketing agency that took work from a client who sells nicotine products, his manager will make a strong appeal to his selfishness ('so what have you been working on?')

As altruists, we would like that idea to remain unknown, how do we as altruists appeal to that person's selfishness without demanding disclosures to some entity that promises not to actually do anything with the idea?

The Unabomber had a proposed solution to this problem--people he judged to be producing ideas that were harmful to whatever it was that he cared about received bombs in the mail, thus appealing to engineers' desire to not get hurt in bombings. I understand that there is a country in the middle east which has historically taken the same approach.

Perhaps I should view the 'delete this' command and suggestion that I was violating a social norm that is often punished by violent men (posting a threat in a public forum bad decision wut wut) in the most upvoted comment on this thread as an endorsement of that 'negative reinforcement' strategy by this community?

Comment by redman on How should I dispose of a dangerous idea? · 2019-12-19T16:02:39.373Z · score: 4 (3 votes) · LW · GW

Updated, I left the original wording as intact as possible. The 'emptiness' of the personal anecdote I think is important because it demonstrates the messaging challenge faced by someone in this position. If the torches and pitchforks are out in 'this' community, imagine how the general public would react.

"I have an idea that makes the world a worse place. I could potentially profit somewhat personally by bringing it to life, but this would be unethical. How badly do I need the money?" Is, in my opinion, probably a fairly common thought in many fields. Ethics can sometimes be expensive, and the prevailing morality, at least in the USA, is 'just make the money'. Fortunately, in my own case, I do not have visions of large sums of money or prestige on the other side of disclosure, so I am not being tempted very strongly.

Farmers are regularly paid not to grow certain crops, and this makes economic sense somehow. How could someone in my position be incentivized to avoid disclosure of harmful ideas, without requiring that disclosure?

Arguably, an alternative to dealing with the social opprobrium of making a pitch like mine would be to rationalize disclosure, argue that the idea is not harmful but is in some way helpful, say that people who say otherwise have flawed arguments, and attempt to maximize profit while minimizing the harms to myself and my own community.
Like an award-winning pornographer who makes a strenuous effort to keep his children and family away from his work.

Comment by redman on How should I dispose of a dangerous idea? · 2019-12-19T05:12:48.725Z · score: 1 (1 votes) · LW · GW

To say that special circumstances exist, and that my research was thorough would be an unpersuasive appeal to my own authority.

I assert that this is closer to a professional engineer noticing a new, non-obvious application of a technology that he is well acquainted with. This happens daily and in theory is required in order to receive a patent.

This idea prompted the desire for a 'reverse patent', where someone who generates a strictly harmful idea is somehow economically incentivized to avoid disclosure. Unfortunately, disclosure to a 'reverse patent office' would still be disclosure, and therefore harmful.

If the downvotes and comments are any indication, the community concerned that an 'artificial' intelligence might come up with some unanticipated engineering breakthrough that harms humanity, and then accidentally or intentionally turn it loose, is pretty hostile to a 'natural' intelligence asserting that it has done the same and looking for a way other than altruism to motivate others in the same position to keep quiet.

Comment by redman on How should I dispose of a dangerous idea? · 2019-12-18T21:22:02.372Z · score: 1 (1 votes) · LW · GW

True! Unfortunately, I do think that in this particular case it is both unique and undiscovered, as it is a very weird synthesis of unrelated fields. I can say with some confidence that the list of people with knowledge and experience equaling or exceeding mine in all of the relevant disciplines is likely small enough that one of us would have to be 'first', and in this case, it seems to be me.

If it were as widely known as, for example, the idea of the assassination market, it would already have been used; the fact that it has not been discussed in any of the relevant fields, and has not been implemented, suggests that it is in fact novel.

This is an appeal to my expertise and relies on mystique, so I wouldn't find it very persuasive, unfortunately.

Comment by redman on How should I dispose of a dangerous idea? · 2019-12-18T21:13:29.449Z · score: 9 (3 votes) · LW · GW

I am holding a lot of dangerous knowledge and am encumbered by a variety of laws and non-disclosure agreements. This is not actually uncommon. So arguably, I am already being paid to keep my mouth shut about a variety of things, but these are mostly not original thoughts. This specific idea is, in my best judgement, both dangerous, and unencumbered by those laws and NDAs.

The assertion that my default position is 'altrusitic silence' means that this is not 'posting a threat on a public forum'. It would be a real shame if a large variety of things that are currently not generally known were to become public. While I would indeed like to be paid not to make them public (and, as previously stated, in some cases already am), this should not be taken as an assertion to the reader, that if they fail to provide me with some tangible benefit, that I will do something harmful.

This is, in a broader sense, a question: 'If there exists an idea which is simply harmful, for example, a phrase which when spoken aloud turns a human into a raging cannibal, such that there is no value whatsoever to increasing the number of people aware of the idea, how can people who generate such ideas be incentivized to not spread them?'

Maybe the best thing to do is to look for originators of new ideas perceived as dangerous, and encourage them to drink hemlock tea before they can hurt anyone else.