>> Is it possible that philosophers just don’t know about the concept? Maybe it is so peculiar to math and econ that “expected value” hasn’t made its way into the philosophical mainstream.
I believe the concept of expected value is familiar to philosophers and is captured in the doctrine of rule utilitarianism: we should live by rules that can be expected to maximize happiness, not judge individual actions by whether they in fact maximize happiness. (Of course, there are many other ethical doctrines.)
Thus, "bring puppies in from the cold" is a morally good rule to live by – while taking normal care not to cause traffic accidents or distract children playing near the road, and provided you're reasonably sure the puppy hasn't got rabies, etc. – the list of caveats is open-ended.
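To make "expected" concrete – the numbers below are invented purely for illustration – the expected value of following a rule is the probability-weighted average of the utilities of its possible outcomes:

```latex
% Expected value of following a rule:
%   E[U | rule] = sum over outcomes o of P(o | rule) * U(o)
\mathbb{E}[U \mid \text{rule}] \;=\; \sum_{o} P(o \mid \text{rule})\, U(o)
% Invented numbers for the puppy rule:
%   0.99 * (+10) + 0.01 * (-100) = 9.9 - 1.0 = +8.9 > 0
% so the rule is good in expectation even though it occasionally misfires.
```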
After writing the above I checked the Stanford Encyclopedia of Philosophy. There are a huge number of variants of consequentialism but here's one relevant snippet:
https://plato.stanford.edu/entries/consequentialism-rule/#FulVerParRulCon
Rule-consequentialist decision procedure: At least normally, agents should decide what to do by applying rules whose acceptance will produce the best consequences, rules such as “Don’t harm innocent others”, “Don’t steal or vandalize others’ property”, “Don’t break your promises”, “Don’t lie”, “Pay special attention to the needs of your family and friends”, “Do good for others generally”.
This is enough to show that expected value is familiar in ethics.
>>Each block has a reference code. If you paste that reference code elsewhere, the same block appears
>>It's hard to reliably right-click the tiny bullet point (necessary for grabbing the block reference)
I never need to do this. If you type "(())" [no quotes] at the destination point and then start typing text from the block you're referencing, blocks containing that text will appear in a window. Keep typing until you can see the block you want, then click it to insert the reference.
If you type the trigger "/block", the menu that appears contains four fun things you can do with blocks.
I'm guessing you want respondents to put in serious research - you're not looking for people's unreflective attitudes - sorry, intuitions?
* The question addressing Gwern's post about Tool AIs wanting to be Agent AIs.
When Søren posed the question, he identified the agent / tool contrast with the contrast between centralized and distributed processing; Eric denied that these are the same contrast. Eric then went on to discuss the centralized / distributed contrast, which he regards as of no particular significance: in any system, even within a neural network, different processes are conditionally activated according to the task in hand and don't use the whole network. These different processes within the system can be construed as different services.
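To illustrate what "different services" could mean here – a toy sketch only, with all function and task names invented for the example (borrowing Eric's leaf-counting and rover examples from later in this summary):

```python
# Toy sketch: sub-processes conditionally activated by task, each one
# viewable as a separate service. All names are invented for illustration.

def count_leaves(image):
    """Hypothetical narrow service: estimate the leaf count in an image."""
    return sum(1 for pixel in image if pixel == "leaf")

def plan_route(waypoints, goal):
    """Hypothetical narrow service: pick the waypoint closest to the goal."""
    return min(waypoints, key=lambda w: abs(w - goal))

# Routing table: each task activates only its own sub-process;
# no request exercises the whole system at once.
SERVICES = {
    "count-leaves": count_leaves,
    "navigate": plan_route,
}

def dispatch(task, *args):
    """Activate only the service relevant to the task in hand."""
    return SERVICES[task](*args)

print(dispatch("count-leaves", ["leaf", "sky", "leaf"]))  # -> 2
print(dispatch("navigate", [3, 7, 12], 10))               # -> 12
```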
Although there is mixing and overlapping of processes within the human brain, Eric regards this as a design flaw rather than a desirable feature.
I thought there was some mutual misunderstanding here: the tool / agent distinction never really got addressed in our discussion.
* The question addressing his optimism about progress without theoretical breakthroughs (related to NNs/DL).
Regarding breakthroughs versus incremental progress: Eric reiterated his belief that we are likely to see improvements on particular tasks, but that a system which – in his examples – is good at counting leaves on a tree is not going to be good at navigating a Mars rover, even if both are produced by the same advanced learning algorithm. I couldn't identify any crisp arguments to support this.
This is a fine job, Dmitry.
If it's possible to edit your post, it would be a good idea to link to your improved spreadsheet at
I'm filling in the booking form now. I intend to stay for the four days.
Chris Cooper
Similar question: I'm at Pinkman's - packed. How will you make yourself known? Giant paperclip on table?