Posts

A possible tax efficient swap mechanism for charity 2014-10-05T12:21:20.307Z
POLL : Aging research booster 2012-11-02T05:22:12.144Z
Max Autonomy 2012-07-25T07:22:17.541Z
Imperfect Levers 2010-11-17T19:12:41.564Z
Ranking the "competition" based on optimization power 2010-10-17T16:55:51.772Z
The Importance of Goodhart's Law 2010-03-13T08:19:29.974Z

Comments

Comment by blogospheroid on Moloch's Toolbox (1/2) · 2017-11-06T07:38:16.524Z · LW · GW

The link is broken, I think. Also, didn't Alex Tabarrok do one better by creating the dominant assurance contract?

Comment by blogospheroid on MIRI's 2016 Fundraiser · 2016-11-02T09:29:27.206Z · LW · GW

Ouch! I donated $135 (and asked my employer to match as well) on Nov 2, India time. I had been on a brief vacation and had just returned; only on re-reading did I find it was too late for the fundraiser. Anyway, please take this as positive reinforcement for what it is worth. You're doing a good job. Take the money as part of the fundraiser or as an off-fundraiser donation, whatever is appropriate.

Comment by blogospheroid on [Stub] The problem with Chesterton's Fence · 2016-01-12T08:07:05.695Z · LW · GW

This basically boils down to the root of the impulse to remove a Chesterton's fence, doesn't it?

Those who believe that these impulses come from genuinely good sources (e.g. learned university professors) like to take down those fences. Those who believe that these impulses come from bad sources (e.g. status jockeying, holiness signalling) would like to keep them.

The reactionary impulse comes from the basic idea that the practice of repeatedly taking down Chesterton's fences will inevitably auto-cannibalise: the system or meta-system used to defend all the previous demolitions will itself fall prey to one such wave. The humans left after that catastrophe will be little better than animals, in some cases maybe even worse, lacking the ability and skills to survive.

Comment by blogospheroid on Bragging thread, December 2015 · 2015-12-10T07:29:27.555Z · LW · GW

Donated $100 to SENS. Hopefully, my company matches it. Take that, aging, the killer of all!

Comment by blogospheroid on [LINK] The Bayesian Second Law of Thermodynamics · 2015-08-24T09:34:11.501Z · LW · GW

I'm not a physicist, but aren't this and the linked Quanta article on Prof. England's work bad news, Great Filter-wise?

If this implies self-assembly is much more common in the universe, then that makes it worse for the later proposed filters (i.e. makes them higher probability).

Comment by blogospheroid on MIRI's 2015 Summer Fundraiser! · 2015-08-20T06:23:10.718Z · LW · GW

I donated $300 which I think my employer is expected to match. So $600 to AI value alignment here!

Comment by blogospheroid on [link] FLI's recommended project grants for AI safety research announced · 2015-07-02T04:57:20.946Z · LW · GW

I feel for you. I agree with Salvatier's point on the linked page. Why don't you try to talk to FHI directly? They should be able to get some funding your way.

Comment by blogospheroid on California Drought thread · 2015-05-08T09:26:10.525Z · LW · GW

Letting market prices reign everywhere but providing a universal basic income is the usual economic solution.

Comment by blogospheroid on Harry Potter and the Methods of Rationality discussion thread, March 2015, chapter 114 + chapter 115 · 2015-03-04T06:59:50.798Z · LW · GW

Guys, everyone on reddit's /r/HPMOR seems to be talking about a spreadsheet with all the solutions listed. Could anyone please post the link as a reply to this comment? Pretty please with sugar on top :)

Comment by blogospheroid on Open thread, Jan. 19 - Jan. 25, 2015 · 2015-01-20T11:50:57.031Z · LW · GW

A booster for getting AI values right is the two-sidedness of the process: existential risk and existential benefit.

To illustrate: you solve poverty, you still have to face climate change; you solve climate change, you still have to face biopathogens; you solve biopathogens, you still have to face nanotech; you solve nanotech, you still have to face SI. Solve SI correctly, and the rest are all done. For people who use the cui bono argument, I think this answer is usually the best one to give.

Comment by blogospheroid on Stupid Questions January 2015 · 2015-01-01T08:46:02.853Z · LW · GW

Is anyone aware of the explanation for why technetium is radioactive while molybdenum and ruthenium, the two elements astride it in the periodic table, are perfectly stable? Searching on Google for why certain elements are radioactive gives results which are merely descriptive, as in X is radioactive, Y is radioactive, Z is what happens when radioactive decay occurs, etc. None seem to go into the theories which have been proposed to explain why something is radioactive.

Comment by blogospheroid on A forum for researchers to publicly discuss safety issues in advanced AI · 2014-12-14T01:58:45.286Z · LW · GW

Forum for Exploratory Research in General AI

Comment by blogospheroid on Approval-directed agents · 2014-12-14T01:49:20.026Z · LW · GW

I think this is a very important contribution. The only internal downside might be that the simulation of the overseer within the AI would be sentient. But if defined correctly, most of these simulations would not really be leading bad lives. The external downside is being overtaken by other goal-oriented AIs.

The thing is, I think in any design, it is impossible to tear away purpose from a lot of the subsequent design decisions. I need to think about this a little deeper.

Comment by blogospheroid on Stupid Questions December 2014 · 2014-12-09T09:44:00.189Z · LW · GW

How do they propose to move the black holes? Nothing can touch a black hole, right?

Comment by blogospheroid on December 2014 Bragging Thread · 2014-12-02T14:09:19.303Z · LW · GW

Donated $300 to the SENS Foundation just now. My company matches donations, so hopefully a large cheque is going there. Fight Aging! is running a matching challenge for SENS, so even more moolah goes to anti-aging research. Hip hip hurray!

Comment by blogospheroid on Open thread, Nov. 24 - Nov. 30, 2014 · 2014-11-25T16:28:30.494Z · LW · GW

Weird fictional theoretical scenario. Comments solicited.

In the future, mankind has become super successful. We have overcome our base instincts and have basically got our shit together. We are no longer in thrall to Azathoth (Evolution) or Mammon (Capitalism).

We meet an alien race who are way more powerful than us; they show us their values and see ours. We seek to cooperate on the prisoner's dilemma, but they defect. In our dying gasps, one of us asks them, "We thought you were rational. WHY?..."

They reply " We follow a version of your meta-golden rule. Treat your inferiors as you would like to be treated by your superiors. In your treatment of super intelligences that were alive amongst you, the ones you call Azathoth and Mammon, we see that you really crushed them. I mean, you smashed them to the ground and then ran a road roller, twice. I am pretty certain you cooperated with us only because you were afraid. We do to you what you did to them"

What would we do if we could anticipate this scenario? Is it too absurd? Is the idea of extending our "empathy" to the impersonal forces that govern our life too much? What if the aliens simply don't see it that way?

Comment by blogospheroid on Neo-reactionaries, why are you neo-reactionary? · 2014-11-21T11:19:32.773Z · LW · GW

So, is my understanding correct that your FAI is going to consider only your group/cluster's values?

Comment by blogospheroid on Neo-reactionaries, why are you neo-reactionary? · 2014-11-20T04:35:44.455Z · LW · GW

Yes, that too.

Poland had used a version of that when arguing with the European Union about its share in some commission, I don't remember which. It mentioned how much Poland's population might have been had they not been under attack from two fronts, the Nazis and the Communists.

Comment by blogospheroid on Neo-reactionaries, why are you neo-reactionary? · 2014-11-19T17:44:31.630Z · LW · GW

Not doing so might leave your AI vulnerable to a slower/milder version of this. Basically, if you enter a strictly egalitarian weighting, you are providing vindication to those who thoughtlessly brought children into the world and disincentivizing, in a timeless, acausal sense, those who are acting sensibly today and restricting reproduction to children they can bring up properly.

I'm not very certain of this answer, but it is my best attempt at the question.

Comment by blogospheroid on Neo-reactionaries, why are you neo-reactionary? · 2014-11-19T16:01:00.289Z · LW · GW

I went from straight Libertarianism to Georgism to my current position of advocacy of competitive government. I believe in the right to exit and hope to work towards a world where exit gets easier and easier for larger numbers. My current anti-democratic position is informed by the amateur study of public choice theory and incentives. My formalist position is probably due to an engineering background and liking things to be clear.

When the fundamental question arises of what keeps a genuine decision maker, a judge or a bureaucrat in government (of a polity way beyond the Dunbar number), honest, the three strands of neo-reaction appear as three possible answers: either the person believes in a higher power (religious traditionalism), or they feel that the people they are making a decision for are an extended family (ethnic nationalism), or they personally profit from it (techno-commercialism). Or a mix of the three, which is more probable.

There are discussions in NRx about whether religious traditionalism should even be given a place here, since it is mostly traditional reaction, but that is deviating from the main point. Each of these strands holds something sacred: a theocracy holds the deity supreme, an ethno-state holds the race supreme, a catallarchy holds profit supreme. And I think you really can't have a long-term governing structure which doesn't hold something really sacred. There has to be a cultural hegemony within which diversities that do not threaten that hegemony can flourish. Even Switzerland, the land of three nations democratically bound together, has a national military draft which ties its men in brotherhood.

A part of me is still populist, I think, holding out for algorithmic governance to be perfected so we need not rely on human judgement, which could be biased. But time and time again, human-judgement-based organizations have soundly defeated procedure-based organizations. Apple is way more valuable than Toyota. The latter is considered the pinnacle of process-based firms; the former was famously run, until recently, by a mercurial dictator. So human judgement has to be respected, which means clear sovereignty for the humans in question, which means something like Moldbug's neo-cameralism, until the day of FAI.

Comment by blogospheroid on 2014 Less Wrong Census/Survey · 2014-11-05T13:45:27.192Z · LW · GW

Done. Foof that was long...

Comment by blogospheroid on Fixing Moral Hazards In Business Science · 2014-10-19T04:56:58.637Z · LW · GW

I have been thinking of a lot of incentivized networks and was almost coming to the same conclusion, that the extra cost and the questionable legality in certain jurisdictions may not be worth the payoff, and then the Nielsen scandal showed up on my newsfeed. I think there is a niche; I'm just not sure where it would be most profitable. Incidentally, Steve Waldman also had a recent post on this: social science data being maintained on a neutral blockchain.

About the shipping of products and placebos to people, I see a physical way of doing it, but it is definitely not scalable.

Let's say there is a typical batch of identical products to be tested. They've been moved to the final inventory sub-inventory, but not yet to the staging area from which they will be shipped out. The people from the testing service arrive with a set of duplicate labels for the batch, plus the placebos, and replace half the quantity with placebo. Now only the testing service knows which item is placebo and which is product.

This requires two things from the system: the ability to trace individual products and the ability to print duplicate labels. The latter should be common, except for places which might have legal issues with continuous numbering. The ability to trace individual products exists in a lot of discrete manufacturing, but a whole lot of process manufacturing industries have traceability only by batch/lot.
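
If it helps to make the blinding step concrete, here is a minimal sketch of the assignment the testing service would keep private (the serial-number format and function name are made up, not from any real system):

```python
import random

# Hypothetical sketch of the label-swap step described above: the testing
# service takes a traced batch, picks half the serial numbers at random to
# receive a relabelled placebo unit, and keeps the mapping private.

def blind_batch(serial_numbers, seed=None):
    """Return a private {serial: 'placebo' or 'product'} assignment for the batch."""
    rng = random.Random(seed)
    shuffled = list(serial_numbers)
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    assignment = {sn: "placebo" for sn in shuffled[:half]}
    assignment.update({sn: "product" for sn in shuffled[half:]})
    return assignment

batch = [f"LOT42-{i:04d}" for i in range(10)]
secret_assignment = blind_batch(batch, seed=7)  # held only by the testing service
```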

Comment by blogospheroid on Fixing Moral Hazards In Business Science · 2014-10-19T02:45:42.165Z · LW · GW

Hi David,

This is a worthwhile initiative. All the very best to you.

I would advise that this data be maintained in a blockchain-like data structure. It will be highly redundant and very difficult to corrupt, which I think is one of the primary concerns here.

http://storj.io, http://metadisk.org/

Comment by blogospheroid on Contrarian LW views and their economic implications · 2014-10-10T06:17:33.694Z · LW · GW

Yes, I think so. Something I won't be able to do as a non-US investor.

Comment by blogospheroid on Contrarian LW views and their economic implications · 2014-10-09T11:36:56.880Z · LW · GW

Invest in Quixey when they go in for the next round of funding, perhaps.

Comment by blogospheroid on A possible tax efficient swap mechanism for charity · 2014-10-08T05:51:52.011Z · LW · GW

Thanks, Toby. I expected that the legal risks would be quite an issue. Point noted. I had not expected this to be a new idea either; after all, it seemed too simple. I guess a more informal mechanism is good for now. I hope the EA forum has a place for such a discussion.

Comment by blogospheroid on A possible tax efficient swap mechanism for charity · 2014-10-05T19:17:42.645Z · LW · GW

I think most charities are tax-deductible only in their own countries. Oxford's cross-country deductibility is more the exception than the rule. To be specific, I'll not get a tax deduction in India if I contribute to FHI. But if I swap with an Englishman who wanted to contribute to the Ramakrishna Mission or Child Relief and You (Indian charities), then we both benefit.

I agree on potential regulatory issues. That's why I wanted more opinions.

Comment by blogospheroid on The Great Filter is early, or AI is hard · 2014-08-30T11:34:55.104Z · LW · GW

I'd like to repeat the comment I had made at "Outside In" on the same topic, the Great Filter.

I think our knowledge at all levels – physics, chemistry, biology, praxeology, sociology – is nowhere near the point where we should be worrying too much about the Fermi paradox.

Our physics has openly acknowledged broad gaps in our knowledge by postulating dark matter, dark energy, and a bunch of other stuff that is filler for "I don't know". We don't have physics theories that explain everything from the smallest scales to the largest.

Coming to chemistry and biology, we've still not demonstrated abiogenesis. We have not created any new basis for life other than the twisty strands mother nature already prepared and handed to us everywhere. We don't know the causes of our mutations well enough to predict them to any extent. We simply don't know enough to fill in these gaps.

Coming to basic sustenance, we don't know the minimum requirements for a self-contained, multi-generational habitat. The Biosphere experiments were not complete in any manner.

We don’t know the code for intelligence. We don’t know the code for preventing our own bodily degradation.

We don't know how to run a society that balances new knowledge acquisition and sustainability. Our best centres of knowledge acquisition are IQ shredders (a term meant to highlight the fact that the most successful cities attract the highest-IQ people and reduce their fertility compared to if they had remained in small towns or rural areas), and they are not environmentally sustainable either. Patriarchy and castes work great in static societies; we don't know their equivalent in a growing knowledge society.

There are still many known ways in which we can screw up. Let's get all these basics right, repeatedly right, and then wonder, with our new-found knowledge: according to these calculations, there is an X% chance that we should have been contacted, so why are we apparently alone in the universe?

Comment by blogospheroid on Open thread, July 21-27, 2014 · 2014-07-28T12:48:31.868Z · LW · GW

If a storm like the one described in the link had actually hit, then would people really be concerned with these fine differences?

Comment by blogospheroid on Open thread, July 21-27, 2014 · 2014-07-25T05:48:36.816Z · LW · GW

This just showed up in my Google Reader.

http://in.reuters.com/article/2014/07/25/electricity-solarstorms-kemp-idINL6N0PZ5D120140725

My immediate thought was about this storm actually hitting in 2012. The Mayan apocalypse was predicted for that year. The civilizational challenge of rebuilding would have been substantial. But even more, the epistemic state of the civilization that recovered would almost certainly have been permanently compromised. It would appear to most people that an ancient prophecy of a civilization that was brutally crushed had actually come true.

What would we be thinking then? How would the rationalists in our adjacent universe be updating their priors? How much thought and effort would be put into reading and understanding ancient prophecies? Could you dismiss modern seers and prophets? Who would you trust?

Comment by blogospheroid on Calling all MIRI supporters for unique May 6 giving opportunity! · 2014-05-06T12:50:56.244Z · LW · GW

Gave three small $10 donations over the last three hours.

Weird question: why is MIRI classified as a >$2M charity? Did it actually pull in that much last year? For some reason, I'm not able to open intelligence.org and check it myself.

Comment by blogospheroid on [LINK] Joseph Bottum on Politics as the Mindkiller · 2014-02-28T02:48:55.326Z · LW · GW

The points he makes would be familiar to those who've read Moldbug.

Comment by blogospheroid on AALWA: Ask any LessWronger anything · 2014-01-13T07:27:24.170Z · LW · GW

Haven't read your book so not sure if you have already answered this.

What is your assessment of MIRI's current opinion that increasing the global economic growth rate is a source of existential risk?

How much risk is increased for what increase in growth?

Are there safe paths? (Maybe catch-up growth in India and China is safe?)

Comment by blogospheroid on [LINK] Why I'm not on the Rationalist Masterlist · 2014-01-06T06:13:00.658Z · LW · GW

I agree with Romeo Stevens' comment that the issues seem orthogonal. As an example (caveat: YMMV), Steve Sailer believes in HBD. However, he frequently cites lower growth in African-American wages as a reason to shut the American borders to low-skilled workers.

However, in today's environment, I'm not sure how many top-rated charities are HBD-believing. A neoreactionary charity aiming at improving Africa might do many things differently. And, being a relatively new ideology, most of its policies would not have substantial support from data. Hence, at least in the current scenario, you would not find many people who were HBD-aware and contributed greatly to African charities. However, it is not intellectually inconsistent.

Comment by blogospheroid on MIRI's Winter 2013 Matching Challenge · 2013-12-18T17:32:10.565Z · LW · GW

Paid $300 with my employer matching it, but the employer's contribution may only come in around Jan 15. Hope that isn't too late.

Comment by blogospheroid on AI Policy? · 2013-11-12T08:53:52.797Z · LW · GW

David Brin believes that high-speed trading bots are a high-probability route to human-indifferent AI. If you agree with him, then laws governing the usage of high-speed trading algorithms could be useful. There is a downside in terms of stock liquidity, but how much that would affect overall economic growth is still a research question.

Comment by blogospheroid on AI ebook cover design brainstorming · 2013-09-27T04:22:57.495Z · LW · GW

Not exactly a "march of progress" line, but something like a chimp and Einstein at one corner and a server rack at the far end, similar to the line diagram used in the Sequences to illustrate how much of a difference we're looking at. We are appealing to numerate people, so it should not be overkill to have a graph.

Comment by blogospheroid on Thought experiment: The transhuman pedophile · 2013-09-19T04:40:09.872Z · LW · GW

Ah.. Now you understand the frustrations of a typical Hindu who believes in re-incarnation. ;)

Comment by blogospheroid on Help us name a short primer on AI risk! · 2013-09-19T04:34:12.387Z · LW · GW

Flash Crash of the Universe : The Perils of designed general intelligence

The flash crash was a computer-triggered event. The knowledgeable amongst us know about it, and it indicates the kind of risks expected. Just my 2 cents.

My second thought is way more LW specific. Maybe it could be a chapter title.

You are made of atoms : The risks of not seeing the world from the viewpoint of an AI

Comment by blogospheroid on Open thread, August 26 - September 1, 2013 · 2013-09-02T06:02:01.664Z · LW · GW

I seek help on a problem that I stumbled upon when thinking about a rational teleporter's story.

As is typical of such protagonists, he finds that he can teleport, and can carry about a human's mass sideways with him, seemingly unharmed. As befits a rational protagonist, he experiments and finds that he can teleport animals, and after a demonstration to a very reluctant brother, he realises that he can teleport a human being unharmed. After a crazy week of teleporting, he realises that he needs approximately 3 minutes to recuperate after a teleport to do the next one well. He also realises that he can't move more than two people.

He is a nice person in general and instead of turning bad, wants to start a teleportation business. He has decided on an approximately 8 hour work day for himself.

I immediately thought that his niche would be very high-end people transportation. To be really conservative, he just wants to teleport people from one international port to another, so that they continue to pass through the same security procedures as international flights. He announces that he will do domestic teleportation (to non-international-port destinations) only after an international teleportation, and only if immediate immigration passage is given to him and his client. These conditions are listed so that nations don't lock him up as a threat.

So, how to maximise revenue from teleportation? The simplest answer is that he auctions 7-minute time slots on a website, where clients enter their source and destination. But the issue with that is that all "reset/return" teleports (from the destination of slot N to the source of slot N+1) go empty. Then I thought there might be a secondary auction where he auctions off these empty slots. But secondary auctions also need time, and the more time you give the secondary auction, the less attractive the primary auctions become. He is competing against business jets, which are available at very short notice.

Any ideas on how to resolve the issue of maximising revenue? Is this a straightforward Operations Research problem that I can look up? What mathematical process/jargon am I missing here?
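
For concreteness, here is a toy brute-force framing I sketched; the airport codes, bids, and slot arithmetic are all made up, and it is not a proper operations research formulation:

```python
from itertools import permutations

# Toy sketch: each request is (origin, destination, bid). Serving requests in
# some order wastes a slot on an empty "reset" jump whenever the previous
# destination differs from the next origin. The goal is an ordering that fits
# the workday and maximises the sum of accepted bids.

SLOTS_PER_DAY = 8 * 60 // 7  # roughly 68 seven-minute slots in an 8-hour day

def day_revenue(order):
    slots_used, revenue, position = 0, 0, None
    for origin, destination, bid in order:
        if position is not None and position != origin:
            slots_used += 1  # empty reset teleport to reach the next origin
        slots_used += 1      # the paid teleport itself
        if slots_used > SLOTS_PER_DAY:
            break
        revenue += bid
        position = destination
    return revenue

def best_schedule(requests):
    # Brute force over orderings; only workable for a handful of requests.
    return max(permutations(requests), key=day_revenue)

requests = [("LHR", "JFK", 9000), ("JFK", "LHR", 8000), ("DXB", "SIN", 7000)]
print(day_revenue(best_schedule(requests)))
```

Even this toy version shows the coupling that worries me: the value of accepting a request depends on which request comes next, which is what makes slot-by-slot auctions leave money on the table.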

Comment by blogospheroid on Open thread, August 26 - September 1, 2013 · 2013-09-02T05:57:50.369Z · LW · GW

I'm not sure if you have already been tested for this, but please have a test for hyperthyroidism done. My wife had a problem with an aching finger, and after many tests we eventually zeroed in on hyperthyroidism.

Comment by blogospheroid on Do Earths with slower economic growth have a better chance at FAI? · 2013-06-14T05:04:59.329Z · LW · GW

I'm not sure that humane values would survive in a world that rewards cooperation weakly. Azathoth grinds slow, but grinds fine.

To oversimplify, there seem to be two main factors that increase cooperation, two basic foundations for law: religion and economic growth. Of these, religion seems far more prone to volatility. It is possible for some marginally more intelligent people to point out the absurdity of the entire doctrine, and along with the religion, all the other societal values collapse.

Economic growth seems to be a far more promising foundation for law as the poor and the low in status can be genuinely assured that they will get a small share of a growing pie. If economic growth slows down too much, it's back to values ingrained by evolution.

Comment by blogospheroid on Earning to Give vs. Altruistic Career Choice Revisited · 2013-06-07T07:52:18.025Z · LW · GW

If development of newer institutions is what you are interested in, you can choose to contribute to charter cities or seasteading. That would be an intermediate risk-reward option between a low-risk option like AMF and a high-risk, high-reward one like MIRI/FHI.

Comment by blogospheroid on Mathematicians and the Prevention of Recessions · 2013-05-28T09:02:45.176Z · LW · GW

I had thought of another way that mathematicians could contribute to global welfare using mostly math skills.

A lot of newcomers to Bitcoin mention that the calculations look wasted to them. Those who have read about Bitcoin know that this is not quite true, as the calculations are used to secure the network. But the calculations really don't have any other use.

The essence of a Bitcoin-like problem is: tough to crack, but easy to verify once the solution is in. A talented mathematician/chemist team could try to map protein folding, or some similar problem, onto a Bitcoin-like algorithm where regular increments proceed towards solving a bigger problem.

The problem could involve a simple rope-like structure and finding its least-energy state. Once that is cracked, another segment is added, and the next block requires a solution with the added segment.

Or a 3D Game of Life simulation where you have to create von Neumann machines of a certain size. Once that is cracked, you have to proceed towards creating a bigger von Neumann machine.

Hopefully some of the lessons from the random designs generated by these networks can be used to crack actual protein folding or nanotechnology and take mankind to the next level.
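
As a toy illustration of that "hard to find, easy to verify" shape (the energy function and threshold here are made up; real protein folding would replace them with a physical model):

```python
import random

def score(conformation):
    # Made-up "energy" of a rope-like chain given as a tuple of joint angles.
    return sum((a - b) ** 2 for a, b in zip(conformation, conformation[1:]))

def verify(conformation, threshold):
    # Cheap check any node can run: does the claimed solution beat the target?
    return score(conformation) <= threshold

def mine(length, threshold, tries=100_000):
    # Expensive search a "miner" would run: hunt for a low-energy conformation.
    for _ in range(tries):
        candidate = tuple(random.uniform(-1, 1) for _ in range(length))
        if verify(candidate, threshold):
            return candidate
    return None  # no block found this round

solution = mine(length=8, threshold=0.5)
print(solution is not None and verify(solution, 0.5))
```

Tightening the threshold each block would play the role of adding a segment: the search gets harder while verification stays cheap.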

Comment by blogospheroid on [LINK] Evidence-based giving by Laura and John Arnold Foundation · 2013-05-19T18:39:35.892Z · LW · GW

For 'evidence based giving', GiveWell doesn't show up in the first two pages of Google results, but it does show up on the first page for 'evidence based philanthropy'.

Comment by blogospheroid on Bitcoins are not digital greenbacks · 2013-04-23T10:47:52.162Z · LW · GW

Right now, the best velocity measure seems to be coin days destroyed. But it is gameable; it is not being gamed in Bitcoin only because nothing depends on it.

The closest GDP measure in a cryptocurrency with Bitcoin's structure seems to be the sum of transaction fees. It can be gamed by early adopters, but that is true of almost every measure.
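
A minimal sketch of the coin-days-destroyed calculation, as I understand it (field names and sample numbers are illustrative, not a real node's API):

```python
from dataclasses import dataclass

@dataclass
class SpentOutput:
    value_btc: float  # amount of the output being spent
    days_held: float  # days since that output was created

def coin_days_destroyed(spent_outputs):
    # Each spent output destroys value * age, so old coins moving count for more.
    return sum(o.value_btc * o.days_held for o in spent_outputs)

block = [SpentOutput(1.5, 200), SpentOutput(0.2, 3), SpentOutput(10.0, 730)]
print(coin_days_destroyed(block))  # 1.5*200 + 0.2*3 + 10*730 = 7600.6
```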

Comment by blogospheroid on A Rational Altruist Punch in The Stomach · 2013-04-02T10:58:22.097Z · LW · GW

I guess after a point the network takes care of itself, with self-interest guiding the activities of participants. Of course, I could be wrong.

Comment by blogospheroid on A Rational Altruist Punch in The Stomach · 2013-04-02T10:53:02.439Z · LW · GW

I agree to a certain extent. I just pointed out one thing, probably the only thing, that is fairly immune from the law, is expected to last fairly long, and rewards its participants.

I did mention something like a blockchain, a peer-to-peer network that rewards its participants. Contrarians and even reactionaries could use something like this to preserve and persist their values across time.

Comment by blogospheroid on A Rational Altruist Punch in The Stomach · 2013-04-01T09:21:36.942Z · LW · GW

The Bitcoin blockchain looks like it will last almost forever, since there are many fanatics who would keep the flame lit even if there were a severe crackdown.

So, an answer for the extreme rational altruist seems to lie in how to encode the values of their trust into something like the Bitcoin blockchain: a peer-to-peer network that rewards participants in some manner, giving them a motive to keep the network alive.