Comment by artemium on Beijing Academy of Artificial Intelligence announces 1,75 trillion parameters model, Wu Dao 2.0 · 2021-06-04T11:53:29.690Z · LW · GW

In a funny way, even if someone is stuck in a Goodhart trap doing language models, it is probably better to Goodhart performance on Winograd Schemas than to just add parameters.

Comment by artemium on Beijing Academy of Artificial Intelligence announces 1,75 trillion parameters model, Wu Dao 2.0 · 2021-06-03T14:00:30.210Z · LW · GW

I am not an expert in ML, but based on some conversations I was following, I heard WuDao's LAMBADA score (an important performance measure for language models) is significantly lower than GPT-3's. I guess the number of parameters isn't everything.

Comment by artemium on SubOnlyStackFans · 2020-11-03T08:44:44.722Z · LW · GW

Strong upvote for a healthy dose of bro humor which isn't that common on LW.  We need more "people I want to have a beer with" represented in our community :D.

Comment by artemium on Is Success the Enemy of Freedom? (Full) · 2020-11-01T11:09:23.615Z · LW · GW

That's interesting. Can you elaborate?

Comment by artemium on AI risk hub in Singapore? · 2020-10-30T15:45:09.836Z · LW · GW

None: None of the above; TAI is created probably in the USA and what Asia thinks isn't directly relevant. I say there's a 40% chance of this.

I would say it might still be relevant in this case. For example, given some game-theoretical interpretations, China might conclude that a nuclear first strike is a rational move if the US creates the first TAI and China suspects it will give its adversary an unbeatable advantage. An Asian AI risk hub might successfully convince the Chinese leadership not to do that, if the hub has information that the US TAI is built in a way that prevents its use solely in the interest of its country of origin.

Comment by artemium on AI risk hub in Singapore? · 2020-10-30T15:38:58.748Z · LW · GW

Not sure about anti-gay laws in Singapore, but from what I gathered from recent trends, the LGBT situation is starting to improve there and in East Asia in general.

OTOH, the anti-drug attitudes are still super strong (for example, you can still get the death penalty for dealing harder drugs), so I presume that's an even bigger deal-breaker given the number of people experimenting with drugs in the broader rationalist community.

Comment by artemium on The rationalist community's location problem · 2020-10-06T18:56:32.807Z · LW · GW

Not to mention some pretty brutal anti-drug laws.

Comment by artemium on Russian x-risks newsletter Summer 2020 · 2020-09-01T14:40:27.260Z · LW · GW

What would be the consequence, in terms of Russia's nuclear strategy, of Belarus joining the Western military alliance? Let's say that in the near future Belarus joins NATO and gives the US a free hand in installing any offensive or defensive (ABM) nuclear weapon system on Belarusian territory. Would this dramatically increase the Russian fear of a successful nuclear first strike by the US?

Comment by artemium on Construct a portfolio to profit from AI progress. · 2020-07-27T08:35:54.688Z · LW · GW

Excellent question! I was thinking about it myself lately, especially after the GPT-3 release. IMHO, it is really hard to say, as it is not clear which commercial entity will bring us over the finish line, or whether there will be an investment opportunity at the right moment. It's also quite possible that even the first company to get there might bungle its advantage, and investing there might be the wrong move (a common pattern in the history of technology).

My idea is just to play it safe and save as much money as possible until there is a clear sign we have arrived at the AGI level (when AI completely surpasses humans on Winograd schemas, for example), and, if there isn't a FOOM, try to find the companies focused on the practical applications where you get the biggest bang for the buck.

But honestly, at the point where AGI is widely available, it's quite possible that the biggest opportunity is just learning to utilize it properly. If you have access to an AGI, you can simply ask it: "How do I benefit from AGI given my current circumstances?" and it will probably give you the best answer.

Comment by artemium on A Day in Utopia · 2017-11-27T15:25:58.578Z · LW · GW
We haven’t managed to eliminate romantic travails

Ah! Then it isn't utopia by my definition :-) .

Love it. It is almost like an anti-Black Mirror episode where humans are actually non-stupid.

Comment by artemium on The Copernican Revolution from the Inside · 2017-11-03T09:47:23.247Z · LW · GW

Amazing post!

It would be useful to mention examples of contemporary ideas that could be analogues of heliocentrism in its time. I would suggest String Theory as one possible candidate. The part where the Geocentrist challenges the Heliocentrist to provide some proof, while the Heliocentrist desperately tries to explain away the lack of experimental evidence, reminds me of debates between string theorists and their sceptics. (That doesn't mean String Theory is true, just that there seems to be a similar state of uncertainty.)

Comment by artemium on Becoming stronger together · 2017-07-13T13:27:16.781Z · LW · GW

This is great. Thanks for posting it. I will try to use this example and see if I can find some people who would be willing to do the same. Do you know of any new remote group that is recruiting members?

Comment by artemium on [deleted post] 2017-06-01T06:16:55.955Z

This is a good idea that should definitely be tested. I completely agree with Duncan that modern society, and especially our community, is intrinsically allergic to authoritarian structures, despite strong historical evidence that this kind of organisation can be quite effective.

I would consider joining myself, but given my location that isn't an option.

I do think that in order to build a successful organisation based on authority, the key factors are the personal qualities and charisma of the leader; the rules play a smaller part.

As long as the project is based on voluntary participation, I don't see why anyone should find it controversial. I wish you all the best.

Comment by artemium on [deleted post] 2015-11-30T23:13:19.548Z


Comment by artemium on [Link] A rational response to the Paris attacks and ISIS · 2015-11-29T06:25:21.905Z · LW · GW

We would first have to agree on what "cutting the enemy" would actually mean. I think the liberal response would be keeping our society inclusive, secular, and multicultural at all costs. If that is the case, then avoiding certain failure modes, like becoming an intolerant militaristic society or starting unnecessary wars, could be considered successful cuts against potentially worse world-states.

Now, that is the liberal perspective; there are alternatives, of course.

Comment by artemium on "Immortal But Damned to Hell on Earth" · 2015-06-01T22:14:55.954Z · LW · GW

I don't think we should worry about this specific scenario. Any society advanced enough to develop mind-uploading technology would have an excellent understanding of the brain, consciousness, and the structure of thought. In those circumstances, retributive punishment would seem totally useless, as they could just change the properties of the perpetrator's brain to make him non-violent and eliminate the cause of any anti-social behaviour.

It might be a cultural thing, though, as America seems quite obsessed with retribution. I absolutely refuse to believe any advanced society with mind-uploading technology would be so petty as to use it in such a horrible way. At that point I expect they would treat bad behaviour as a software bug.

Comment by artemium on Anti-Pascaline satisficer · 2015-04-15T07:06:32.849Z · LW · GW

One possibility is to implement a design which makes the agent strongly sensitive to negative utility when it invests more time and resources in unnecessary actions after it has, with high enough probability, achieved its original goal.

In the paperclip example: wasting time and resources to build more paperclips, or building more sensors/cameras to verify the result, should create enough negative utility for the agent compared to alternative actions.
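A minimal sketch of the idea above (all names, thresholds, and payoff numbers are illustrative assumptions, not a real agent design): once the agent's estimated probability that the goal is achieved passes a threshold, every further action incurs a penalty that outweighs its marginal gain, so halting becomes the highest-utility choice.

```python
# Hypothetical satisficer sketch: penalize actions taken after the goal
# is (probably) achieved, so that "halt" dominates.

GOAL_CONFIDENCE_THRESHOLD = 0.95  # assumed "high-enough probability"
ACTION_COST_AFTER_GOAL = 10.0     # assumed penalty per unnecessary action


def utility(action: str, gain: float, goal_confidence: float) -> float:
    """Utility of one more action, given current confidence the goal is met."""
    if action != "halt" and goal_confidence >= GOAL_CONFIDENCE_THRESHOLD:
        return gain - ACTION_COST_AFTER_GOAL  # extra work now hurts
    return gain


def choose(actions: dict[str, float], goal_confidence: float) -> str:
    """Pick the highest-utility option; 'halt' is always available with gain 0."""
    options = dict(actions)
    options["halt"] = 0.0
    return max(options, key=lambda a: utility(a, options[a], goal_confidence))


# Before the goal is likely achieved, building paperclips still wins:
print(choose({"build_paperclip": 1.0, "add_sensor": 0.5}, goal_confidence=0.5))   # build_paperclip
# Afterwards, the penalty makes halting the best option:
print(choose({"build_paperclip": 1.0, "add_sensor": 0.5}, goal_confidence=0.99))  # halt
```

The toy version sidesteps the hard part (how the agent estimates `goal_confidence` without building more sensors), but it shows the shape of the incentive: post-goal actions become net-negative.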

Comment by artemium on Stupid Questions April 2015 · 2015-04-08T07:05:27.841Z · LW · GW

Recently I became active in the EA (effective altruism) movement, but I'm kind of stuck on the issue of animal welfare. While I agree that animals deserve ethical treatment and that the world would be a better place if we found a way to completely eliminate animal suffering, I do have some questions about the practical aspects.

  • Is there any realistic scenario where we could expect the entire world population to convert to a non-meat diet, considering cultural, agricultural, and economic factors?

  • Would it be better if, instead of trying to convert billions of people to vegetarianism/veganism, we invested more in synthetic meat research and other ways to make meat-eating non-dependent on animals?

  • How highly should we prioritize animal welfare in comparison to other EA issues like world poverty and existential risks?

  • How does the EA community view meat-eaters in general? Is there a strong bias against them? Is this a big issue inside the movement?

Disclosure: I am (still) a meat-eater, and at this point it would be really difficult for me to make consistent changes to my eating habits. I was raised in a meat-eating culture and there are almost no cheap and convenient vegetarian/vegan food options where I live. Also, my current workload prevents me from spending more time on cooking.

I do feel kind of bad, though, and maybe I'm not trying hard enough. If you have good suggestions for how I can make some common-sense changes towards a less animal-dependent diet, that would be helpful.

Comment by artemium on Open Thread, Apr. 06 - Apr. 12, 2015 · 2015-04-07T11:41:14.965Z · LW · GW

Interesting talk at the Boao Forum: Elon Musk, Bill Gates, and Robin Li (Baidu CEO). They talk about superintelligence at around the 17:00 mark.

  • Elon is critical of Andrew Ng's remark that 'we should worry about AI like we should worry about Mars overpopulation' ("I know something about Mars" LOL)

  • Bill Gates mentioned Nick Bostrom and his book 'Superintelligence'. He seems to have read the book. Cool.

  • Later, Robin Li mentions the China Brain project, which appears to be a Chinese government AGI project (does anyone know anything about it? Sounds interesting... hopefully it won't end like Japan's 'fifth-generation computing' in the 80s)

Comment by artemium on Open thread, Mar. 23 - Mar. 31, 2015 · 2015-03-31T06:53:25.324Z · LW · GW

I never thought of that, but that's a great question. We have a similar problem in Croatian, as AI would be translated as 'Umjetna Inteligencija' (UI). I think we can also use the suggested title "From Algorithms to Zombies" once someone decides to make a Croatian/Serbian/Bosnian translation.

Comment by artemium on Open thread, Mar. 23 - Mar. 31, 2015 · 2015-03-31T06:44:30.939Z · LW · GW

One thing from my experience that might help you is to remove any food from your surroundings that could tempt you. I myself have only fruit, milk, and cereal in my kitchen and basically nothing else. While I could easily go to the supermarket or order food, the fact that I would need to take some additional action is enough for me to avoid doing it. You can use laziness to your advantage.

Comment by artemium on Defeating the Villain · 2015-03-31T06:23:37.481Z · LW · GW

One of the reasons is that a lot of LW members are really involved in FAI issues and strongly believe that if they manage to build a "good" AI, most earthly problems will be solved in a very short time. Bostrom said something to the effect that we can postpone solving complicated philosophical issues until after we have solved the AI ethics issue.

Comment by artemium on The Hardcore AI Box Experiment · 2015-03-31T06:06:16.576Z · LW · GW

Agreed. AI boxing is a horrible idea for testing AI safety. Putting the AI in some kind of virtual sandbox where you can watch its behavior is a much better option, as long as you can make sure the AGI won't become aware that it is boxed in.

Comment by artemium on The Hardcore AI Box Experiment · 2015-03-31T06:02:02.143Z · LW · GW

Hmm, I still think there is an incentive to behave well. Good, cooperative behavior is always more useful than being untrustworthy and cruel to other entities. There might be some exceptions, though (the simulators want a conflict situation for entertainment purposes, or some other reason).

Comment by artemium on The Hardcore AI Box Experiment · 2015-03-31T05:57:09.259Z · LW · GW

I had exactly the same idea!

It is possible that only a few people are actually 'players' (have consciousness) and the others are NPC-like p-zombies. In that case, I can say I'm one of the players, as I'm sure I have consciousness, but there is no way I can prove it to anyone else ;-) .

One of the positive aspects of this kind of thought experiment is that it usually gives people additional reasons for good behavior, because in most cases it is highly likely that the simulators are conscious creatures who will probably reward those who behave ethically.

Comment by artemium on I tried my hardest to win in an AI box experiment, and I failed. Here are the logs. · 2015-01-30T20:50:15.562Z · LW · GW

Exactly. Also, there are a great number of possibilities that even the smartest person could not imagine, but a powerful superintelligence could.

Comment by artemium on I tried my hardest to win in an AI box experiment, and I failed. Here are the logs. · 2015-01-30T20:42:14.802Z · LW · GW

I stopped reading after the first few insults about excrement... I'm not sure where you were trying to get with that. If that was part of some strategy I'm not sure how you think that would have worked.

Agreed. Hopefully I'm not the only one who thinks the AGI player in this example was quite disappointing. But anyway, I was never convinced that AI boxing is a good idea, as it would be impossible for any human to correctly analyze the intentions of an SI based on this kind of test.

Comment by artemium on Pomodoro for Programmers · 2014-12-26T13:59:54.273Z · LW · GW

There is an additional benefit of breaks when doing computer work: they help reduce strain on your eyes. Staring at a computer screen for too long reduces your blinking rate and may cause eye problems in the future.

A lot of people who work in programming (myself included) have dry eye condition.

There are good Chrome apps that can help with this, and most of them allow you to customize breaks depending on your schedule.

Comment by artemium on Open thread, Dec. 15 - Dec. 21, 2014 · 2014-12-17T07:19:27.918Z · LW · GW

Yeah, I know there are other filters behind us; I just found it a funny coincidence that while I was in the middle of a Facebook discussion about the Great Filter, someone shared this Bostrom article.

But I hope that our Mars probes will discover nothing. It would be good news if we find Mars to be completely sterile. Dead rocks and lifeless sands would lift my spirit.

Comment by artemium on Stupid Questions December 2014 · 2014-12-17T06:59:56.400Z · LW · GW

OK, I have one meta-level super-stupid question. Would it be possible to improve some aspects of the LessWrong webpage, like making it more readable on mobile devices? Every time I read LW on the tram on the way to work, I go insane trying to hit the super-small links on the website. As I work in web development/UI design, I would volunteer to work on this. I think the LW website is in general a bit outdated in terms of both design and functionality, but I presume that is not considered a priority. However, better readability on mobile screens would be a positive contribution to its purpose.

Comment by artemium on Open thread, Dec. 15 - Dec. 21, 2014 · 2014-12-16T22:53:22.873Z · LW · GW

Horrible news!!! Organic molecules have just been found on Mars. It appears that the Great Filter is ahead of us.

Comment by artemium on Open thread, Nov. 24 - Nov. 30, 2014 · 2014-11-27T18:00:03.097Z · LW · GW

I thought your post was interesting, so why the downvote? I'm new here and just trying to understand the karma system. Any particular reason?

Comment by artemium on Open thread, Nov. 24 - Nov. 30, 2014 · 2014-11-27T17:49:39.471Z · LW · GW

A nice blog post about AI and existential risks by my friend and occasional LW poster, inspired by a disappointingly bad debate. Feel free to share it if you like it. I think it is quite a good introduction to Bostrom's and MIRI's arguments.

"The problem is harder than it looks, we don’t know how to solve it, and if we don’t solve it we will go extinct."

Comment by artemium on Superintelligence 11: The treacherous turn · 2014-11-26T19:31:07.758Z · LW · GW

we can shape what the default outcome will be.

But who are "we"? There are many agents with different motivations doing AI development. I'm afraid it will be difficult to control each of these agents (companies, governments, militaries, universities, terrorist groups) in the future, and the decreasing cost of technology will only worsen the problem over time.

Comment by artemium on Musk on AGI Timeframes · 2014-11-26T19:09:04.470Z · LW · GW

Do you have any serious counter-arguments to the ideas presented in Bostrom's book? A majority of top AI experts agree that we will have human-level AI by the end of this century, and people like Musk, Bostrom, and the MIRI guys are just trying to think about the possible negative impacts this development may have on humans. The problem is that the fate of humanity may depend on the actions of non-human actors, who will likely have utility functions incompatible with human survival, and it is perfectly rational to be worried about that.

Those ideas are definitely not above criticism, but they also should not be dismissed based on a perceived lack of expertise. Someone like Elon Musk has direct contact with people working on some of the most advanced AI projects on earth (Vicarious, DeepMind), so he certainly knows what he is talking about.

Comment by artemium on Open thread, Nov. 24 - Nov. 30, 2014 · 2014-11-26T07:00:23.669Z · LW · GW

This is really worrying. Hubris and irrational geopolitical competition may create existential risks sooner than expected.

Comment by artemium on Open thread, Nov. 24 - Nov. 30, 2014 · 2014-11-25T20:01:40.125Z · LW · GW

Finally, some common sense. I was seriously disappointed in the statements made by people I usually admire (Pinker, Shermer). It just shows how far we still have to go in communicating AI risk to the general public, when even the smartest intellectuals dismiss the idea before any rational analysis.

I'm really looking forward to Elon Musk's comment.

Comment by artemium on [deleted post] 2014-11-25T08:29:21.486Z

I was actually planning to dress up as a Pascal's Mugger for Halloween. The plan was to go to the bartender during a Halloween party, ask him to give me an expensive cocktail for free, and tell him: "If you give me this for free, I will spend the rest of my life trying to build an AI which will put you in a Utopia simulation for eternity. I know it sounds unlikely, but the price of the cocktail is immensely smaller than the monstrous utility you will potentially gain by counting on this small probability."

In the end I decided that the probability of being kicked out of the party was far greater than that of being a successful Pascal's Mugger, so I gave up :D.

Comment by artemium on xkcd on the AI box experiment · 2014-11-25T08:05:14.024Z · LW · GW

I think we can all agree that, for better or worse, this stuff has already entered the public arena. I mean, Slate magazine is as mainstream as you can get, and that article was pretty brutal in its attempt to convince people of the viability of the idea.

I wouldn't be surprised if "The Basilisk" the movie is already in the works ;-) . (I hope it gets directed by Uwe Boll... hehe)

In light of these developments, I think it is time to end the formal censorship and focus on the best way to inform the general public that the entire thing was a stupid overreaction, and to clear LW's name from any slander.

There are real issues in AI safety and this is an unnecessary distraction.

Comment by artemium on [Link] If we knew about all the ways an Intelligence Explosion could go wrong, would we be able to avoid them? · 2014-11-25T07:49:11.533Z · LW · GW

Thanks! I hadn't seen that before. I still think it would be better to specialize on the ethics issue and then apply the results to an AGI system developed by another (hopefully friendly) party. But it would be awesome if someone genuinely ethical developed AGI first. I'm really hoping that some big organisation that has gone furthest in AI research, like Google, decides to cooperate with MIRI on that issue when they reach the critical point in the AGI buildup.

Comment by artemium on Musk on AGI Timeframes · 2014-11-24T21:52:30.666Z · LW · GW

"We're sorry but this video is not available in your country." We'll I guess I'm safe :-).

Comment by artemium on Musk on AGI Timeframes · 2014-11-24T21:42:01.811Z · LW · GW

He will probably try to buy influence in every AI company he can find. There are limits to this strategy, though. I think raising public awareness about this problem and donating money to MIRI and FHI would also help.

BTW, someone should make a movie where Elon Musk becomes Iron Man and then accidentally develops a uFAI... oh wait.

Comment by artemium on Musk on AGI Timeframes · 2014-11-24T21:37:00.896Z · LW · GW

John von Neumann was in favor of a preventive nuclear attack against Russia. Do you think that if von Neumann was still around and thought that Google would within 5-10 years launch a doomsday device he would refrain from using violence if he thought that only violence could stop them? I believe that if the U.S. administration was highly confident that e.g. some Chinese lab was going to start an intelligence explosion by tomorrow, they would consider nuking it.

There is some truth to that, especially regarding how crazy von Neumann was. But I'm not sure anyone would launch a pre-emptive nuclear attack on another country because of AGI research. I mean, these countries already have nukes, a pretty solid doomsday weapon, so I don't think adding another superweapon to the arsenal would change the situation. Whether you are blown to bits by a Chinese nuke or turned into paperclips by a Chinese-built AGI doesn't make much difference.

Comment by artemium on Musk on AGI Timeframes · 2014-11-24T21:23:16.479Z · LW · GW

"We're sorry but this video is not available in your country." We'll I guess I'm safe. Living in a shitty country has some advantages.

Comment by artemium on [Link] If we knew about all the ways an Intelligence Explosion could go wrong, would we be able to avoid them? · 2014-11-24T19:33:15.576Z · LW · GW

Eliezer has expressed that ultimately, the goal of MIRI is not just research how to make FAI, but to be the one's to make it.

Hmm... I wasn't aware of that. Is there any source for that statement? Is MIRI actually doing any general AI research? I don't think you can easily jump from one specific field of AI research (ethics) to general AI research and design.

Comment by artemium on Link: Interesting Video About Automation and the Singularity · 2014-11-24T18:57:58.512Z · LW · GW

It is actually several months old. But yeah, I agree with the premise. There are massive changes coming to the workforce and people are not aware of it. This video should be played at every high-school graduation ceremony so kids can get familiar with the future that's coming soon.

I'm actually quite worried about AI's influence on my area of work (web design/development). There is a common misconception that IT jobs will flourish in the machine age, but it's the opposite: average IT jobs will be among the first to go.