Posts
Comments
Superintelligence FAQ [1] as well.
Along the same lines, I found this analogy-by-concrete-example exceptionally illuminating.
While merely anti-bacterial, Nano Silver Fluoride (metallic silver applied to teeth once a year to prevent cavities) looks promising.
Yudkowsky has written about The Ultimatum Game. It has been referenced here 1 2 as well.
When somebody offers you a 7:5 split, instead of the 6:6 split that would be fair, you should accept their offer with slightly less than 6/7 probability. Their expected value from offering you 7:5, in this case, is 7 * slightly less than 6/7, or slightly less than 6.
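The quoted acceptance rule can be sanity-checked in a few lines of Python. The 12-unit pie and exact-fraction arithmetic are my framing for illustration, not from the original:

```python
from fractions import Fraction

# The pie is 12 units; a fair split is 6:6. If the responder accepts an
# x:(12-x) offer with probability fair_share/x, the proposer's expected
# take is exactly 6 -- and anything "slightly less" makes unfairness a loss.
def proposer_expected_value(offer_take: int, fair_share: int = 6) -> Fraction:
    accept_prob = min(Fraction(1), Fraction(fair_share, offer_take))
    return offer_take * accept_prob

print(proposer_expected_value(7))  # 6 -- a 7:5 offer gains nothing over 6:6
print(proposer_expected_value(6))  # 6 -- fair offers are always accepted
```

So at exactly 6/7 the proposer is indifferent between 7:5 and 6:6; any lower acceptance probability makes the unfair offer strictly worse.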
Maybe add posts in /tag/ai-evaluations
to /robots.txt
Sure, but it does not preclude it. Moreover, if the costs of the actions are not borne by the altruist (e.g. by defrauding customers, or extortion), I would not consider it altruism.
In this sense, altruism is a categorization tag placed on actions.
I do see how you might add a second, deontological definition ('a belief system held by altruists'), but I wouldn't. From the post, "Humane" or "Inner Goodness" seem more apt in exploring these ideas.
I do not see the contradiction. Could you elaborate?
- 55-60% chance there will be "signs of life" in 2030 (4:06:20)
- "When we've got our learning disabled toddler, we should really start talking about the safety and ethics issues, but probably not before then" (4:35:36)
- These things will take thousands of GPUs, and will be data-center bound
- "The fast takeoff ones are clearly nonsense because you just can't open TCP connections above a certain rate" (4:36:40)
Broadly, he predicts AGI to be animalistic ("learning disabled toddler"), rather than a consequentialist laser beam, or simulator.
I found this section, along with dath ilani Governance, and SCIENCE! particularly brilliant.
This concept is introduced in Book 1 as the solution to the Ultimatum Game, and describes fairness as the Shapley value.
When somebody offers you a 7:5 split, instead of the 6:6 split that would be fair, you should accept their offer with slightly less than 6/7 probability. Their expected value from offering you 7:5, in this case, is 7 * slightly less than 6/7, or slightly less than 6.
Once you've arrived at a notion of a 'fair price' in some one-time trading situation where the seller sets a price and the buyer decides whether to accept, the seller doesn't have an incentive to say the fair price is higher than that; the buyer will accept with a lower probability that cancels out some of the seller's expected gains from trade. [1]
Eliezer: What do you want the system to do?
Bob: I want the system to do what it thinks I should want it to do.
Eliezer: The Hidden Complexity of Wishes
Gwern has a fantastic overview of time-lock encryption methods.
A compute-hard real-time in-browser solution that doesn't rely on exotic encryption appears infeasible. (You'd need a GPU, and hours/days worth of compute for years of locking). For LW, perhaps threshold aggregate time-lock encryption would suffice (though vulnerable to collusion/bribery attacks, as noted by Gwern).
I agree with Quintin Pope, a public hash is simple and effective.
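A minimal sketch of such a hash commitment, assuming SHA-256 with a random salt (the salt prevents brute-forcing short, guessable messages from the public digest):

```python
import hashlib
import secrets

def commit(message: str) -> tuple[str, str]:
    """Publish the digest now; keep the salt and message private until reveal."""
    salt = secrets.token_hex(16)
    digest = hashlib.sha256((salt + message).encode()).hexdigest()
    return digest, salt

def verify(digest: str, salt: str, message: str) -> bool:
    """Anyone can later check the revealed message against the public digest."""
    return hashlib.sha256((salt + message).encode()).hexdigest() == digest

digest, salt = commit("my prediction: 30%")
print(verify(digest, salt, "my prediction: 30%"))  # True
print(verify(digest, salt, "my prediction: 90%"))  # False
```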
Vitalik's Optimism retro-funding post mentions a few instances where secret ballots are used today, and which could arguably be improved by these cryptographic primitives:
- The Israeli Knesset uses secret votes to elect the president and a few other officials
- The Italian parliament has used secret votes in a variety of contexts. In the 19th century, it was considered an important way to protect parliament votes from interference by a monarchy.
- Discussions in US parliaments were less transparent before 1970, and some researchers argue that the switch to more transparency led to more corruption.
- Voting in juries is often secret. Sometimes, even the identities of jurors are secret.
In general, the conclusion seems to be that secret votes in government bodies have complicated consequences; it's not clear that they should be used everywhere, but it's also not clear that transparency is an absolute good either.
If we cannot prove who anyone actually voted for, we can't prove who actually won at all.
Using zero-knowledge proofs it is possible to prove that votes were counted correctly, without revealing who anyone voted for. See MACI [1], which additionally provides inability to prove your own vote to a third party.
if the two agents are able to accurately predict each others' actions and reason using FDT, then it is possible for the two agents to cooperate
Couldn't you equally require QV participants pre-commit to non-collusion?
In The Case against Education: Why the Education System Is a Waste of Time and Money, Bryan Caplan uses Earth data to make the case that compulsory education does not significantly increase literacy.
My reading is that he claims compulsory education had little effect in Britain and the US, where literacy was already widespread.
When Britain first made education compulsory for 5-to-10-year-olds in 1880, over 95% of 15-year-olds were already literate. [1]
There's an interesting footnote where he references a paper on economic returns of compulsory education, which cites many sources (p14) finding little to no economic return from schooling reform (though limited to Europe).
Follow the white rabbit
The source makes explicit reference to refined starches:
c All foods are assumed to be in nutrient-dense forms; lean or low-fat and prepared with minimal added sugars, refined starches, saturated fat, or sodium
Though to be clear, I do not endorse the 'system' as proposed. I do not believe that it adequately reflects nuance in health effects of food consumption, nor do I believe it accurately represents modern food health science (where are their sources?).
For example, the hard-line stance against saturated fats is questionable [1] [2] [3]. Not explicitly mentioning glycemic index is another obvious failure, for which I assume 'added sugar' is a proxy.
There are gut-microbiome differences across carbohydrates with similar GI [4], but I do not have enough information to recommend one sugar over another.
Yes I count most (by GI) flour as equivalent to sugar [1]. As for keeping high GI carbs under 10%, I have insufficient information. To keep all carbs under 10% would be ketogenic, which while not specifically recommended (unless trying to lose weight), has shown interesting results in the literature [2].
Pancakes contain significant quantities of carbohydrates (sugar), with glycemic index comparable to that of table sugar. Those pancakes look like they're closer to 3 sweets than 1 (sorry kids).
For those looking to learn more, erowid.org is an excellent starting point.
I think it balances prescribed burns with other methods of fire-suppression (fire-breaks, thinning), and incentivizes local coordination among neighbors.
Hold land-owners liable for fire-damage caused to their abutting neighbors.
I recommend Ample (lifelong subscriber). It has high quality ingredients (no soy protein), fantastic macro ratios (5/30/65 - Ample K), and an exceptional founder.
Since time is the direction of increased entropy, this feels like it has some deep connection to the notion of agents as things that reduce entropy (only locally, obviously) to achieve their preferences.
Reminded me of Utility Maximization = Description Length Minimization.
It's hard for me to credibly believe that this harm happened due to the algorithm, that no humans at Google were clearly aware of what was going on, when Googlers were being sent out to events to pitch to this market
Never attribute to malice that which is adequately explained by stupidity. It sounds like the fraud involved was extremely sophisticated, as it was hiding behind state negligence. Google now requires these advertisers to be licenced by a reputable third party.
The problem I see here isn't just that the Ads team gets paid for participation in criminal activity, but they have no incentive to really stop profitable illegal activity
In 2011 Google settled a negligence case regarding illegal pharmaceutical sales for $500 million.
Scams and malware end up running rampant whenever ads are involved, that's where the money for the business segment is coming from
I find it hard to imagine this is true within reputable ad networks, though I agree that such content is endemic to online advertising.
You have not produced evidence that billboards are generally 'criminal mind control', only that they violate norms for shared spaces for people like Banksy. Ultimately this boils down to local political disagreement, rather than some clever ploy by The Advertisers to get into your brain.
You owe the companies nothing. Less than nothing, you especially don't owe them any courtesy. They owe you.
This is strictly true in the sense that advertisement is negative cost and negative value, but that is exactly why it is used as a tool for producing otherwise difficult to coordinate public goods.
To quote David Friedman:
Consider one example of the public good problem: radio and television broadcasts. By producing and broadcasting an entertaining program, I provide a benefit to everyone who listens to it. Since I cannot control who listens to it I cannot, as in the case of ordinary production, collect my share of that benefit by charging for it. The public in question is a large and disorganized one so it is clear, on theoretical grounds, that programs cannot be privately produced.
Yet they are. Some clever person thought up the idea of combining a public good with positive production cost and positive value with a public good of negative cost and negative value and giving away the package: program plus advertisements. As long as the net value is greater than zero and the net cost less than zero, people listen to the program and the broadcaster covers his costs.
I was interested in her claim that the Bullet Cluster is evidence against dark matter.
The scientists estimated the probability for a Bullet-Cluster-like collision to be about one in ten billion, and concluded: that we see such a collision is incompatible with the concordance model. And that’s how the Bullet Cluster became strong evidence in favor of modified gravity.
Technically, the market I should make corresponds to what I think other people's probabilities are likely to be, given they can see my market. I might give a wider market because only people who think they're getting a good deal will trade with me.
Technically, market making is betting on price volatility by providing liquidity. To illustrate, I'll use a simple automated market maker.
Yes * No = Const
This means I will accept any trade of Yes/No tokens, so long as the product remains constant. Slippage is inversely proportional to the quantity of tokens in the pool. Profit is made via trading fees.
There is a risk that the underlying assets diverge more quickly than what slippage/fees can cover, which can cause losses. There are various tricks to mitigate these effects, such that profit can be guaranteed within certain bounds.
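A toy version of this constant-product maker makes the mechanics concrete. The pool sizes and 0.3% fee below are illustrative choices of mine, not from the comment:

```python
class ConstantProductMarket:
    """Toy Yes * No = k market maker for a binary prediction market."""

    def __init__(self, yes: float, no: float, fee: float = 0.003):
        self.yes, self.no, self.fee = yes, no, fee

    def price_yes(self) -> float:
        # Implied probability of Yes: the scarcer Yes is in the pool, the dearer.
        return self.no / (self.yes + self.no)

    def buy_yes(self, no_in: float) -> float:
        """Pay `no_in` No tokens into the pool; receive Yes tokens out,
        keeping yes * no constant (the fee is skimmed off the input)."""
        no_in *= (1 - self.fee)
        k = self.yes * self.no
        new_yes = k / (self.no + no_in)
        out = self.yes - new_yes
        self.yes, self.no = new_yes, self.no + no_in
        return out

m = ConstantProductMarket(yes=100, no=100)
print(m.price_yes())        # 0.5 to start
m.buy_yes(10)               # a large trade relative to the pool: visible slippage
print(m.price_yes() > 0.5)  # True: buying Yes pushed its price up
```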
The point being I'm no longer betting on Alice's height, but instead betting that predictors will trade against the current height.
Each of these functions takes ~30s to run, so it ends up being more efficient to put them in one job instead of multiple.
This is a perfect example of the AWS Batch API 'leaking' into your code. The whole point of a compute resource pool is that you don't have to think about how many jobs you create.
It sounds like you're using the wrong tool for the job (or a misconfiguration - e.g. limit the batch template to 1 vcpu).
The benefit of the pass-through approach is that it uses language-level features to do the validation
You get language-level validation either way. The assert statements are superfluous in that sense. What they do add is in effect check_dataset_params(), whose logic probably doesn't belong in this file.
The failure you're talking about here is tripping a try clause.
No, I meant a developer introducing a runtime bug.
The reason to be explicit is to be able to handle control flow.
def run_job(make_dataset1: bool, make_dataset2: bool):
    if make_dataset1 and make_dataset2:
        make_third_dataset()
If your jobs are independent, then they should be scheduled as such. This allows jobs to run in parallel.
def make_datasets_handler(job):
    for dataset in job.params.datasets:
        schedule_job('make_dataset', {'dataset': dataset})

def make_dataset_handler(job):
    name, params = job.params.dataset
    constructors.get(name)(**params)
Passing random params to functions and hoping for failure is ~~a terrible idea~~ great for breaking code ('fuzzing').
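A toy fuzzer along these lines, using only the stdlib. Both fragile_parse (the target) and its bug are hypothetical, for illustration:

```python
import random

def fragile_parse(s: str) -> int:
    """Hypothetical target: parses 'a:b' into a + b, with lurking crashes."""
    a, b = s.split(":")  # blows up on anything without exactly one ':'
    return int(a) + int(b)

def fuzz(fn, trials: int = 1000, seed: int = 0) -> list[str]:
    """Throw random strings at `fn` and collect the inputs that crash it."""
    rng = random.Random(seed)
    alphabet = "0123456789:x"
    crashers = []
    for _ in range(trials):
        s = "".join(rng.choice(alphabet) for _ in range(rng.randint(0, 8)))
        try:
            fn(s)
        except Exception:
            crashers.append(s)
    return crashers

crashers = fuzz(fragile_parse)
print(len(crashers) > 0)  # True: random inputs find crashing cases quickly
```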
The performance difference of explicit vs pass-through comes down to control flow. Your errors would come out just as fast if you ran check_dataset_params() up front.
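A sketch of that up-front ordering; check_dataset_params and the dataset names are assumptions based on this thread, not the actual project code:

```python
def check_dataset_params(params: dict) -> None:
    """Validate the whole job spec before any expensive work starts."""
    known = {"dataset1", "dataset2"}
    unknown = set(params) - known
    if unknown:
        raise ValueError(f"unknown datasets: {sorted(unknown)}")

def run_job(params: dict) -> list[str]:
    check_dataset_params(params)  # fail fast: bad params error out here,
    return [f"built {name}" for name in sorted(params)]  # not mid-build

print(run_job({"dataset1": {}, "dataset2": {}}))
```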
the first gives us a 5x faster feedback loop
A good way to increase feedback rate is to write better tests. Failure in production should be the exception, not the norm.
I dropped out of high school. It's not a place for smart people.
Some highlights from my .vimrc
" Prevent data loss
set undofile
" Flush to disk every character (Note: disk intensive, e.g. makes large copy-pastes slow)
set updatecount=1
" Directory browsing usability
let g:netrw_liststyle = 3 " tree list view
let g:netrw_banner = 0
" Copy for X11
vnoremap <C-c> "cy <Bar> :call system('xclip -selection clipboard', @c)<CR><CR>
Also worth checking out CoC (language server)
An interactive demo of the prisoner's dilemma.
Twitch has recently begun experimenting with predictions for streamers using their channel-points currency.
The history of central banking (and large-scale monetary policy generally) is fascinating. I found this lecture by George Selgin particularly enlightening: https://www.youtube.com/watch?v=JeIljifA8Ls
Noteworthy remarks:
- Even before central banking, government regulation required banks purchase junk assets (causing failures)
- Nonuniform currency price slippage (when each bank issued its own notes) may have been < 1%
- The National Bank Act taxed private bank notes at 10%, effectively destroying private currency circulation
- National Bank notes were backed by US debt. As the debt shrank, currency became scarce, leading to crisis.
- Canadian banking at the time (not centralized until 1935) was both deregulated and did not suffer currency crises (see 17:54 for graph)
Now, it turns out that the Fed was designed not by politicians or bureaucrats, but by special interest groups (namely Wall St.). See The Meeting at Jekyll Island.
I have removed the good/bad duality entirely, as I found it confusing.
https://www.lesswrong.com/posts/M2LWXsJxKS626QNEA/the-trouble-with-good
Puzzle 1:
score: 180
To use a more realistic example, it's hard for me to agree that a billionaire values their tenth vacation home more than a homeless person who is in danger of freezing in the winter.
I don't see "value" as a feeling. A freezing person might desire a warm fire, but their value of it is limited by what can be expressed.
That said, a person is a complex asset, and so the starving person might trade in their "apparent plight" (e.g. begging).
For example, the caring seller of the last sandwich might value alleviating "apparent plight" more than millions of shares of AMZN. Whether they do or don't exactly determines the value of an individual's suffering against some other asset, in terms of the last sandwich.
Tap again directly on your prediction to remove it.
What if, instead of producing new things to value, people change the things they value? Perhaps increased homogeneity of value creates more efficient economies of scale.
If I understand correctly, then Rocket Pool fits the bill. It is a network (with mild centralization) that allows people to buy and sell shares of a validator pool. Risk is spread across the network in case of node failure.
Note on 1, the withdrawal key is separate from the validator key, such that one can validate but not withdraw.
Edit: Though I agree on 2, that in the long term the fees such networks will be able to charge will decline significantly.
There will not be a secondary market for Eth2 stakes
Actually, Coinbase just announced intent to deliver this secondary market. A tokenized Eth2 stake may then also be traded on DeFi exchanges. https://blog.coinbase.com/ethereum-2-0-staking-rewards-are-coming-soon-to-coinbase-a25d8ac622d5
It's not 'free', just very very cheap. If food at the mall was as cheap to produce as ketchup, they would probably just make the food free to bring in business.
It's based on an observation of the continual efficient pricing pressure of competitive markets combined with technological innovation which reduces the real cost of food.
And when I go spend my money I impose a cost on the world
You impose no such cost, as those willing to exchange your money for their services do so profitably.
Is working good for the rest of society?
Suppose you do some work and earn $100. The question from the rest of society’s perspective is whether we got more benefit than the $100 we paid you.
We can get more than $100 if e.g. you spend your $100 on a Netflix subscription...
If you receive $100 for work, that means you have already provided at least $100 in value to society. That society might gain additional benefit from how you spend your money is merely coincidental.
Digital Rights Enforcement Agencies
Given a desire for digital rights in the face of Crypto-Anarchy, market-based polycentric law might yield a solution.
David Friedman's model for market-law involves defense agencies and arbitrators who mediate between those agencies. The system is stable as a repeated game, wherein the cost of fighting other agencies is higher than the cost of peaceful negotiation.
In the digital world, a 'defense agency' might look like a professional hacking group. This group would maintain a public identity and offer its services to clients that won their case in court. Occasionally groups fight each other, in epic Neuromancer-esque style.
Friedman's theory uses social-norms (e.g. property rights) as the basis for efficient negotiation between agents. Therefore we might predict that a demonstrably neutral arbitration protocol could form the basis for this new market-based law.
I was imagining a utility function for fiat with a singular limit at t=15yr, such that any bet paying out fiat is worthless. Think hyperinflation caused by it being obvious that we are facing imminent doom.
I don't see how stocks necessarily correlate with the prediction you're making.
Bet an asset instead of money.
Alternatively, you could bet that market odds will change significantly before then.
Concretely, you could use ETH denominated Augur for a long-term bet, or USDC for a short-term bet on odds.
I am excited by the self-governance aspect, and the opportunity to live under a more personalized set of social norms.
The structure of a monastery is specifically appealing because it greatly reduces 'distance' between individuals. See Going Critical.
I have some more concrete ideas about a shared ranch (with fast internet) out somewhere beautiful.