Someone on Hacker News had the idea of putting COVID patients on an airplane to increase air pressure (which is part of how ventilators work, due to Fick's law of diffusion).
Could this genuinely work?
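For reference, the standard physiology form of Fick's law (my addition, not from the original comment): the rate at which a gas diffuses across the alveolar membrane scales with the partial-pressure difference across it, which is why ambient pressure matters here.

```latex
% Fick's law applied to alveolar gas exchange (standard physiology form):
%   \dot{V}_g : rate of gas transfer across the membrane
%   A         : membrane surface area
%   T         : membrane thickness
%   D         : diffusion constant of the gas
%   P_1 - P_2 : partial-pressure difference across the membrane
\dot{V}_g = \frac{A \, D \, (P_1 - P_2)}{T}
```

Raising ambient pressure raises the alveolar partial pressure of oxygen, and with it the gradient P_1 - P_2, which is the mechanism the idea relies on.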
Hey, I'm a student in Bioinformatics/Computer Science at the University of Copenhagen, and I'd like to help in any way I can. If there's anything I can do, let me know.
Is there no way to actually delete a comment? :)
Never mind, this was stupid.
Where does the term at the top of page three of this paper, after "a team's chance of winning increases by", come from?
Will it be feasible in the next decade or so to actually do real research into how to make sure AI systems don't instantiate anything with any non-negligible level of sentience?
Two random questions.
1) What is the chance of AGI first happening in Russia? Are they laggards in AI compared to the US and China?
2) Is there a connection between fuzzy logic and the logical uncertainty of interest to MIRI, or not really?
Any value in working on a website with resources on the necessary prerequisites for AI safety research? The best books and papers to read, etc. And maybe an overview of the key problems and results? Perhaps later that could lead to an ebook or online course.
I agree - great idea!
Thoughts on Timothy Snyder's "On Tyranny"?
Anything not too technical about nanotechnology? (Current state, forecasts, etc.)
Well, "The set of all primes less than 100" definitely works, so we need to shorten this.
More specifically, what should the role of government be in AI safety? I understand tukabel's intuition that they should have nothing to do with it, but if unfortunately an arms race occurs, maybe having a government regulator framework in place is not a terrible idea? Elon Musk seems to think a government regulator for AI is appropriate.
I really recommend the book Superforecasting by Philip Tetlock and Dan Gardner. It's an interesting look at the art and science of forecasting, and those who repeatedly do it better than others.
Wow, I hadn't thought of it like this. Maybe if AGI is sufficiently ridiculous in the eyes of world leaders, they won't start an arms race until we've figured out how to align them. Maybe we want the issue to remain largely a laughingstock.
Sure. The ideas aren't fleshed out yet, just thrown out there:
http://lesswrong.com/r/discussion/lw/oyi/open_thread_may_1_may_7_2017/
Stuart, since you're an author of the paper, I'd be grateful to know what you think about the ideas for variants that MrMind suggested in the open thread, as well as my idea of a government regulator parameter.
One idea I had was to introduce a parameter representing the actions of a governmental regulatory agency. Does this seem like a good variant?
Hi all,
A friend and I (undergraduate math majors) want to work on either exploring a variant of, or digging deeper into, the model introduced in this paper:
Any ideas?
#1 is the smallest.
It becomes uncomfortable for me to stay in bed more than about half an hour after waking up.
Suppose it were discovered with a high degree of confidence that insects could suffer a significant amount, and almost all insect lives are worse than not having lived. What (if anything) would/should the response of the EA community be?
This is a cool idea! My intuition says you probably can't completely solve the normal control problem without training the system to become generally intelligent, but I'm not sure. Also, I was under the impression there is already a lot of work on this front from antivirus firms (e.g., spam filters).
Also, a quick nitpick: we do, for the moment, "control our computers" in the sense that each system is corrigible. We can pull the plug or smash it with a sledgehammer.
I'd like to see the end of state lotteries, although I know that's not gonna happen.
Haha, yeah, I agree there are some practical problems.
I just think that, in the abstract, ad absurdum arguments are a logical fallacy. And of course most people on Earth (including myself) are intuitively appalled by the idea, but we really shouldn't be trusting our intuitions on something like this.
Why?
I have said before that I think consciousness research is not getting enough attention in EA, and I want to add another argument for this claim:
Suppose we find compelling evidence that consciousness is merely "how information feels from the inside when it is being processed in certain complex ways", as Max Tegmark claims (and Dan Dennett and others agree). Then, I argue, we should be compelled from a utilitarian perspective to create a superintelligent AI that is provably conscious, regardless of whether it is safe and regardless of whether it kills us humans (or worse), so long as we know it will try to maximize the subjective happiness of itself and the subagents it creates.
The above isn't my argument (Sam Harris mentioned someone else arguing this) but I am claiming this is one reason why consciousness research is ethically important.
No, at least not yet. That's a good point. But Facebook is a private company, so filtering content that goes against its policy doesn't necessarily violate the Constitution, right? I don't know the legal details, though; I could be completely wrong.
I agree there is a real danger of slipping down the free-speech slope if we fight too hard against fake news. But we should also consider a (successful) campaign by another nation to undermine the legitimacy of our elections to be an act of hostile aggression, and in times of war most people agree that some measured limitation of free speech can be justified.
Wow, that had for some reason never crossed my mind. That's probably a very bad sign.
Perhaps I was a bit misleading, but when I said the net utility of the Earth may be negative, I mostly had in mind fish and other animals that can feel pain. That is what Singer was talking about in the opening essays. I am fairly certain the net utility of humans is positive.
Thanks for your reply, username2. I am disheartened to see that "You're crazy" is still being used in the guise of a counterargument.
Why do you think the net utility of the world is either negative or undefined?
This is fantastic
Let me also add that while a sadist can parallelize torture, it's also possible to parallelize euphoria, so maybe that mitigates things to some extent.
Quick note: I put a 1 for the driving question because I don't drive.
This is brilliant
Can someone explain why UDT wasn't good enough? In what cases does UDT fail? (Or is it just hard to approximate with algorithms?)
So wait, why is FDT better than UDT? Are there situations where UDT fails?
Well, suppose we could prove that consciousness is not some mystical, supernatural phenomenon. That might increase awareness of the threat of AGI, because it would make it clearer that intelligence is just about information processing.
Furthermore, the ethical debate about creating artificial consciousness in a computer (mindcrime issues, etc.) would very shortly become a mainstream issue, I would imagine.
Is neuroscience research underfunded? If so, I've been thinking more and more that trying to understand human consciousness has a huge expected value, and maybe EA should pay it more attention.
I read somewhere that North Korea is collapsing, according to a high-level defector. Maybe it's best to wait things out.
Thanks for this topic! Stupid questions are my specialty, for better or worse.
1) Isn't cryonics extremely selfish? I mean, couldn't the money spent on cryopreserving oneself be better spent on, say, AI safety research?
2) Would the human race be eradicated in a worst-possible-scenario nuclear incident, or would it just kill a lot of people?
3) Is the study linking nut consumption to longevity found in the link below convincing?
http://jamanetwork.com/journals/jamainternalmedicine/fullarticle/2173094
And if so, is it worth a lot of effort promoting nut consumption in moderation?
Actually, as a tournament player I feel I can help explain the slowness:
The article suggests that this isn't due to increased computational speed or focus, but I think that's wrong. Playing slowly doesn't imply thinking slowly. In a chess game you have a fixed amount of time overall, and when the position is very complicated players will often spend half an hour delving into variations and sub-variations. If it's hard to concentrate, they may instead rely on low-calculation alternatives and play faster.
I'm not surprised. But I also don't see much utility from this study; most people already believe that coffee helps them focus.