Comments

Comment by artemium on Bitter lessons about lucid dreaming · 2024-10-17T11:08:52.491Z · LW · GW

Perhaps Randolph Carter was right about losing access to dreamlands after your twenties:

When Randolph Carter was thirty he lost the key of the gate of dreams. Prior to that time he had made up for the prosiness of life by nightly excursions to strange and ancient cities beyond space, and lovely, unbelievable garden lands across ethereal seas; but as middle age hardened upon him he felt these liberties slipping away little by little, until at last he was cut off altogether. No more could his galleys sail up the river Oukranos past the gilded spires of Thran, or his elephant caravans tramp through perfumed jungles in Kled, where forgotten palaces with veined ivory columns sleep lovely and unbroken under the moon.

By the way, have you heard about PropheticAI? They are working on a device that is supposed to help with lucid dreaming.

Comment by artemium on Daniel Kokotajlo's Shortform · 2024-07-11T07:39:15.619Z · LW · GW

I still think it will be hard to defend against determined and competent adversaries committed to sabotaging collective epistemics. I wonder if prediction markets could be utilised somehow?

Comment by artemium on How are you preparing for the possibility of an AI bust? · 2024-06-24T07:43:48.245Z · LW · GW

I am not sure the 2000 dot-com market crash is the best way to describe a "fizzle". The Internet Revolution was a correct hypothesis at the time; it's just that the 1999 startups were slightly ahead of their time, the tech fundamentals were not yet ready to support it, and the market was forced to correct expectations. Once the fundamentals (internet speeds, software stacks, web infrastructure, number of people online, online payments, online ad business models, etc.) became ready in the mid-2000s, the Web 2.0 revolution happened and tech companies became the giants we know today.

I expect most of the current AI startups and business models will fail, and we will see plenty of market corrections, but this will be orthogonal to the ground truth about AI discoveries, which will happen only in a few cutting-edge labs that are shielded from temporary market corrections.

But coming back to the object-level question: I really don't have a specific backup plan. I expect that even non-AGI-level AI built on advances over the current models will significantly impact various industries, so I will stick to software engineering for the foreseeable future.

Comment by artemium on China-AI forecasts · 2024-03-01T09:11:25.905Z · LW · GW

My dark-horse bet is on a third country trying desperately to catch up to the US/China just as they are close to reaching an agreement on slowing down progress. Most likely: France.

Comment by artemium on China-AI forecasts · 2024-02-26T08:57:43.117Z · LW · GW

Why so? My understanding is that if AGI arrives in 2026, it will be based on the current paradigm of training increasingly large LLMs on massive clusters of advanced GPUs. Given that the US has banned selling advanced GPUs to China, how do you expect them to catch up that soon?

Comment by artemium on EY in the New York Times · 2023-06-10T13:30:55.946Z · LW · GW

To add to this point, the author in question is infamous for doxxing Scott Alexander and has previously written a hit piece on the rationalist community.

https://slatestarcodex.com/2020/09/11/update-on-my-situation/


Comment by artemium on [deleted post] 2022-11-28T12:54:57.462Z

I was also born in a former socialist country, Yugoslavia, which was notable for the prevalence of worker-managed firms in its economy. This made it somewhat unique among socialist countries, which mostly used a more centralized approach with state ownership over entire industries.

While it is somewhat different from worker-owned cooperatives in modern market economies, it does offer a useful data point. The general conclusion is that such firms work a bit better than a typical state-owned firm, but their economic performance is still significantly worse than that of the median private company. This is why, despite having plenty of experience with worker-managed firms, almost all ex-YU countries today have economies dominated by fully private companies, and no one is really enthusiastic about repeating the worker-managed experiment.

Comment by artemium on A Few Terrifying Facts About The Russo-Ukrainian War · 2022-10-03T12:03:14.152Z · LW · GW

Also agree about not promoting political content on LW but would love to read your writings on some other platform if possible.

Comment by artemium on A Few Terrifying Facts About The Russo-Ukrainian War · 2022-10-03T11:54:24.056Z · LW · GW

If it reaches that point, the goal for Russia would not be to win but to ensure the other side loses too, and that outcome might be preferable (to them) to a humiliating conventional defeat that could permanently end Russian sovereignty. In the end, the West has far more to lose than Russia, the stakes aren't that high for us, and they know it.

Comment by artemium on A Few Terrifying Facts About The Russo-Ukrainian War · 2022-10-03T11:49:30.925Z · LW · GW

No. I think everything else is in crappy shape because the nuclear arsenal was always the priority for the Russian defense industry, and most of the money and resources went there. I've noticed that the meme "perhaps Russian nukes don't work" is getting increasingly popular, which could have pretty bad consequences if it spreads and emboldens escalation.

It is like being incentivized to play Russian roulette because you heard the bullets were made in a country that produces some other crappy products.

Comment by artemium on The AI Countdown Clock · 2022-05-16T19:44:59.108Z · LW · GW

Looks awesome! Maybe there could be an extended UI that tracks recent research papers (sorta like I did here) or SOTA achievements. But maybe that would ruin the smooth minimalism of the page.

Comment by artemium on New GPT3 Impressive Capabilities - InstructGPT3 [1/2] · 2022-03-14T10:05:49.260Z · LW · GW

You can also play around with open-source versions that offer surprisingly comparable capabilities to OpenAI's models.

Here is GPT-J-6B from EleutherAI, which you can use without any hassle: https://6b.eleuther.ai/

They also released a new 20B model (GPT-NeoX-20B), but I think you need to log in to use it: https://www.goose.ai/playground
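If you'd rather poke at GPT-J locally than through the playground, here is a rough sketch using the Hugging Face transformers library (assuming `pip install transformers torch` and roughly 12 GB of memory for the fp16 weights; the prompt is just an example):

```python
# Rough sketch: run GPT-J-6B locally with Hugging Face transformers.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B")
model = AutoModelForCausalLM.from_pretrained(
    "EleutherAI/gpt-j-6B",
    torch_dtype=torch.float16,  # halves memory use; fastest on a GPU
)

prompt = "The strangest thing about large language models is"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=50,
    do_sample=True,   # sample rather than greedy-decode
    temperature=0.8,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```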

Comment by artemium on Convoy Crackdown · 2022-02-22T09:41:11.488Z · LW · GW

I think there is a steelman case for why this post is LW-relevant (or at least possible variants of it): if this Canadian precedent becomes widely adopted in the West, everyone should probably do some practical preparation to ensure the security of their finances.

P.S.: I live in Sweden, which is an almost completely cashless society, so a similar type of government action would be disastrous here.

Comment by artemium on Why did Europe conquer the world? · 2021-12-29T13:51:11.649Z · LW · GW

You can add the Black Death to the list. A popular theory is that the disease killed so many people (around 1/3 of Europe's population) that the few remaining workers could negotiate higher wages, which made labor-saving innovations more desirable and planted the seeds of industrial development.


Comment by artemium on Russian x-risks newsletter fall 2021 · 2021-12-03T16:52:03.548Z · LW · GW

This is a very underrated newsletter; thank you for writing it. The events at KrioRus are kind of crazy. I cannot imagine a business where it is more essential to convince customers of long-run robustness than cryonics, and yet... ouch.

Also, Russia deployed lasers Peresvet which blind American satellites used to observe nuclear missiles.

I thought Peresvet was more of a tactical weapon?

https://en.wikipedia.org/wiki/Peresvet_(laser_weapon) 

Are there any updates on the nuclear-powered missile, Burevestnik?

Comment by artemium on What would we do if alignment were futile? · 2021-11-15T09:03:36.454Z · LW · GW

Even worse, that kind of move would just convince the competitors that AGI is far more feasible, and incentivize them to speed up their efforts while sacrificing safety.

If blocking Huawei failed to work a couple of years ago under an unusually pugnacious American presidency, I doubt this kind of move would work in a future where the Chinese technological base will probably be stronger.

Comment by artemium on Beijing Academy of Artificial Intelligence announces 1,75 trillion parameters model, Wu Dao 2.0 · 2021-06-04T11:53:29.690Z · LW · GW

In a funny way, even if someone is stuck in a Goodhart trap doing language models, it is probably better to Goodhart performance on Winograd schemas than to just add parameters.

Comment by artemium on Beijing Academy of Artificial Intelligence announces 1,75 trillion parameters model, Wu Dao 2.0 · 2021-06-03T14:00:30.210Z · LW · GW

I am not an expert in ML, but based on some conversations I was following, I heard that WuDao's LAMBADA score (an important performance measure for language models) is significantly lower than GPT-3's. I guess the number of parameters isn't everything.
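For anyone unfamiliar with the benchmark: LAMBADA scores a model on predicting the exact final word of a passage that can only be guessed from broad context. A toy sketch of the scoring loop, where `predict_last_word` is a hypothetical stand-in for whatever model is being evaluated:

```python
# Toy sketch of LAMBADA-style scoring: the model sees a passage with its
# final word removed and is counted correct only on an exact match.
def lambada_accuracy(passages, predict_last_word):
    """passages: list of strings; predict_last_word: callable context -> word."""
    correct = 0
    for passage in passages:
        context, target = passage.rsplit(" ", 1)  # split off the final word
        if predict_last_word(context).strip() == target:
            correct += 1
    return correct / len(passages)

# Usage with a dummy "model" that always guesses "the":
passages = ["He reached the end of the dark and silent corridor"]
print(lambada_accuracy(passages, lambda context: "the"))  # 0.0
```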

Comment by artemium on SubOnlyStackFans · 2020-11-03T08:44:44.722Z · LW · GW

Strong upvote for a healthy dose of bro humor, which isn't that common on LW. We need more "people I want to have a beer with" represented in our community :D.

Comment by artemium on Is Success the Enemy of Freedom? (Full) · 2020-11-01T11:09:23.615Z · LW · GW

That's interesting. Can you elaborate?

Comment by artemium on AI risk hub in Singapore? · 2020-10-30T15:45:09.836Z · LW · GW

None: None of the above; TAI is created probably in the USA and what Asia thinks isn't directly relevant. I say there's a 40% chance of this.

I would say it might still be relevant in this case. For example, under some game-theoretic interpretations, China might conclude that a nuclear first strike is a rational move if the US creates the first TAI and China suspects it will give its adversary an unbeatable advantage. An Asian AI risk hub might successfully convince the Chinese leadership not to do that if it has information that the US TAI is built in a way that would prevent its use solely in the interest of its country of origin.

Comment by artemium on AI risk hub in Singapore? · 2020-10-30T15:38:58.748Z · LW · GW

Not sure about anti-gay laws in Singapore, but from what I gather from recent trends, the LGBT situation is starting to improve there and in East Asia in general.

OTOH, anti-drug attitudes are still super strong (for example, you can still get the death penalty for dealing harder drugs), so I presume that's an even bigger deal-breaker given the number of people experimenting with drugs in the broader rationalist community.

Comment by artemium on The rationalist community's location problem · 2020-10-06T18:56:32.807Z · LW · GW

Not to mention pretty brutal anti-drug laws.

Comment by artemium on Russian x-risks newsletter Summer 2020 · 2020-09-01T14:40:27.260Z · LW · GW

What would be the consequences for Russia's nuclear strategy of Belarus joining the Western military alliance? Let's say that in the near future Belarus joins NATO and gives the US a free hand in installing any offensive or defensive (ABM) nuclear weapon systems on Belarusian territory. Would this dramatically increase the Russian fear of a successful nuclear first strike by the US?

Comment by artemium on Construct a portfolio to profit from AI progress. · 2020-07-27T08:35:54.688Z · LW · GW

Excellent question! I was thinking about it myself lately, especially after the GPT-3 release. IMHO, it is really hard to say, as it is not clear which commercial entity will bring us over the finish line, or whether there will be an investment opportunity at the right moment. It is also quite possible that the first company to get there will bungle its advantage, and investing in it might be the wrong move (this seems to be a common pattern in the history of technology).

My idea is just to play it safe and save as much money as possible until there is a clear sign we have arrived at the AGI level (when AI completely surpasses humans on Winograd schemas, for example), and then, if there is no FOOM, try to find the companies most focused on the practical applications where you get the biggest bang for the buck.

But honestly, at the point where AGI is widely available, it's quite possible that the biggest opportunity is just learning to utilize it properly. If you have access to AGI, you can just ask it yourself: "How do I benefit from AGI given my current circumstances?" and it will probably give you the best answer.

Comment by artemium on A Day in Utopia · 2017-11-27T15:25:58.578Z · LW · GW
We haven’t managed to eliminate romantic travails

Ah! Then it isn't utopia by my definition :-).

Love it. It is almost like an anti-Black Mirror episode, where the humans are actually non-stupid.

Comment by artemium on The Copernican Revolution from the Inside · 2017-11-03T09:47:23.247Z · LW · GW

Amazing post!

It would be useful to mention examples of contemporary ideas that could be analogues of heliocentrism in its time. I would suggest string theory as one possible candidate. The part where the Geocentrist challenges the Heliocentrist to provide some proof, while the Heliocentrist desperately tries to explain away the lack of experimental evidence, reminds me of debates between string theorists and their sceptics. (That doesn't mean string theory is true; there just seems to be a similar state of uncertainty.)

Comment by artemium on Becoming stronger together · 2017-07-13T13:27:16.781Z · LW · GW

This is great. Thanks for posting it. I will try to use this example and see if I can find some people who would be willing to do the same. Do you know of any new remote group that is recruiting members?

Comment by artemium on [deleted post] 2017-06-01T06:16:55.955Z

This is a good idea that should definitely be tested. I completely agree with Duncan that modern society, and especially our community, is intrinsically allergic to authoritarian structure, despite strong historical evidence that this kind of organisation can be quite effective.

I would consider joining in myself, but given my location that isn't an option.

I do think that in order to build a successful organisation based on authority, the key factors are the personal qualities and charisma of the leader; the rules play a smaller part.

As long as the project is based on voluntary participation, I don't see why anyone should find it controversial. I wish you all the best.

Comment by artemium on [deleted post] 2015-11-30T23:13:19.548Z

fixed.

Comment by artemium on [Link] A rational response to the Paris attacks and ISIS · 2015-11-29T06:25:21.905Z · LW · GW

We would first have to agree on what "cutting the enemy" would actually mean. I think the liberal response would be keeping our society inclusive, secular, and multicultural at all costs. If that is the case, then avoiding certain failure modes, like becoming an intolerant militaristic society or starting unnecessary wars, could be considered a successful cut against potentially worse world-states.

Now, that is the liberal perspective; there are alternatives, of course.

Comment by artemium on "Immortal But Damned to Hell on Earth" · 2015-06-01T22:14:55.954Z · LW · GW

I don't think we should worry about this specific scenario. Any society advanced enough to develop mind-uploading technology would have an excellent understanding of the brain, consciousness, and the structure of thought. In those circumstances, retributive punishment would seem totally useless, as they could just change the properties of the perpetrator's brain to make him non-violent and eliminate the cause of any anti-social behaviour.

It might be a cultural thing, though, as America seems to be quite obsessed with retribution. I absolutely refuse to believe any advanced society with mind-uploading technology would be so petty as to use it in such a horrible way. At that point, I expect they would treat bad behaviour as a software bug.

Comment by artemium on Anti-Pascaline satisficer · 2015-04-15T07:06:32.849Z · LW · GW

One possibility is to implement a design that makes the agent strongly sensitive to negative utility when it invests more time and resources in unnecessary actions after it has, with high enough probability, achieved its original goal.

In the paperclip example: wasting time and resources to build more paperclips, or building more sensors/cameras to analyze the result, should create enough negative utility for the agent compared to alternative actions.
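A toy sketch of what that utility shape could look like; the threshold and penalty numbers are made up purely for illustration, not a worked-out proposal:

```python
# Toy sketch of the proposed satisficer utility: once the agent estimates its
# original goal is achieved with high enough probability, every further
# action carries a penalty that swamps its marginal expected gain.
GOAL_CONFIDENCE_THRESHOLD = 0.99
EXTRA_ACTION_PENALTY = 10.0

def action_utility(expected_gain: float, p_goal_achieved: float) -> float:
    """Utility of spending one more action/resource on the original goal."""
    if p_goal_achieved >= GOAL_CONFIDENCE_THRESHOLD:
        # Past the satisficing threshold: more effort is net-negative.
        return expected_gain - EXTRA_ACTION_PENALTY
    return expected_gain

# The paperclipper stops building extra sensors: a tiny confidence gain no
# longer justifies the action...
print(action_utility(expected_gain=0.001, p_goal_achieved=0.999))  # -9.999
# ...but normal goal-directed work before the threshold stays positive.
print(action_utility(expected_gain=5.0, p_goal_achieved=0.5))      # 5.0
```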

Comment by artemium on Stupid Questions April 2015 · 2015-04-08T07:05:27.841Z · LW · GW

Recently I became active in the EA (effective altruism) movement, but I'm kind of stuck on the issue of animal welfare. While I agree that animals deserve ethical treatment, and that the world would be a better place if we found a way to completely eliminate animal suffering, I do have some questions about the practical aspects.

  • Is there any realistic scenario where we could expect the entire world population to convert to a non-meat diet, considering cultural, agricultural, and economic factors?

  • Would it be better if, instead of trying to convert billions of people to become vegetarians/vegans, we invested more in synthetic meat research and other ways to make meat eating non-dependent on animals?

  • How highly should we prioritize animal welfare compared to other EA issues like world poverty and existential risks?

  • How does the EA community view meat-eaters in general? Is there a strong bias against them, and is this a big issue inside the movement?

Disclosure: I am (still) a meat-eater, and at this point it would be really difficult for me to make consistent changes to my eating habits. I was raised in a meat-eating culture, and there are almost no cheap and convenient vegetarian/vegan food options where I live. Also, my current workload prevents me from spending more time on cooking.

I do feel kind of bad, though, and maybe I'm not trying hard enough. If you have some good suggestions for how I can make some common-sense changes towards a less animal-dependent diet, that would be helpful.

Comment by artemium on Open Thread, Apr. 06 - Apr. 12, 2015 · 2015-04-07T11:41:14.965Z · LW · GW

Interesting talk at the Boao Forum: Elon Musk, Bill Gates, and Robin Li (Baidu CEO). They talk about superintelligence at around the 17:00 mark.

https://www.youtube.com/watch?v=NG0ZjUfOBUs&feature=youtu.be&t=17m

  • Elon is critical of Andrew Ng's remark that "we should worry about AI like we should worry about overpopulation on Mars" ("I know something about Mars" LOL)

  • Bill Gates mentioned Nick Bostrom and his book 'Superintelligence'. He seems to have read the book. Cool.

  • Later, Robin Li mentions the China Brain project, which appears to be a Chinese government AGI project (does anyone know anything about it? Sounds interesting... hopefully it won't end like Japan's 'fifth-generation computing' project in the 80s)

Comment by artemium on Open thread, Mar. 23 - Mar. 31, 2015 · 2015-03-31T06:53:25.324Z · LW · GW

I never thought of that, but that's a great question. We have a similar problem in Croatian, where AI would be translated as 'Umjetna Inteligencija' (UI). I think we can also use the suggested title "From Algorithms to Zombies" once someone decides to make a Croatian/Serbian/Bosnian translation.

Comment by artemium on Open thread, Mar. 23 - Mar. 31, 2015 · 2015-03-31T06:44:30.939Z · LW · GW

One thing from my experience that might help you is to remove any food from your surroundings that could tempt you. I myself keep only fruit, milk, and cereal in my kitchen and basically nothing else. While I could easily go to the supermarket or order food, the fact that I would need to take some additional action is enough for me to avoid doing it. You can use laziness to your advantage.

Comment by artemium on Defeating the Villain · 2015-03-31T06:23:37.481Z · LW · GW

One of the reasons is that a lot of LW members are really involved in FAI issues, and they strongly believe that if they manage to succeed in building a "good" AI, most earthly problems will be solved in a very short time. Bostrom said something to the effect that we can postpone solving complicated philosophical issues until after we have solved the AI ethics issue.

Comment by artemium on The Hardcore AI Box Experiment · 2015-03-31T06:06:16.576Z · LW · GW

Agreed. AI boxing is a horrible idea for testing AI safety. Putting the AI in some kind of virtual sandbox where you can watch its behavior is a much better option, as long as you can make sure the AGI won't be able to become aware that it is boxed in.

Comment by artemium on The Hardcore AI Box Experiment · 2015-03-31T06:02:02.143Z · LW · GW

Hmm, I still think there is an incentive to behave well. Good, cooperative behavior is always more useful than being untrustworthy and cruel to other entities. There might be some exceptions, though (the simulators might want conflict for entertainment purposes or some other reason).

Comment by artemium on The Hardcore AI Box Experiment · 2015-03-31T05:57:09.259Z · LW · GW

I had exactly the same idea!

It is possible that only a few people are actually 'players' (have consciousness) and the others are NPC-like p-zombies. In that case, I can say I'm one of the players, as I'm sure that I have consciousness, but there is no way I can prove it to anyone else ;-).

One of the positive aspects of this kind of thought experiment is that it usually gives people additional reasons for good behavior, because in most cases it is highly likely that the simulators are conscious creatures who will probably reward those who behave ethically.

Comment by artemium on I tried my hardest to win in an AI box experiment, and I failed. Here are the logs. · 2015-01-30T20:50:15.562Z · LW · GW

Exactly. Also, there are a great number of possibilities that even the smartest person could not imagine, but a powerful superintelligence could.

Comment by artemium on I tried my hardest to win in an AI box experiment, and I failed. Here are the logs. · 2015-01-30T20:42:14.802Z · LW · GW

I stopped reading after the first few insults about excrement... I'm not sure where you were trying to get with that. If that was part of some strategy I'm not sure how you think that would have worked.

Agree. Hopefully I'm not the only one who thinks the AGI's game in this example was quite disappointing. But anyway, I was never convinced that AI boxing is a good idea, as it would be impossible for any human to correctly analyze the intentions of a superintelligence based on this kind of test.

Comment by artemium on Pomodoro for Programmers · 2014-12-26T13:59:54.273Z · LW · GW

There is an additional benefit to taking breaks during computer work: they help reduce strain on your eyes. Staring at a computer screen for too long reduces your blinking rate and may cause eye problems in the future.

A lot of people who work in programming (including myself) have dry eye syndrome.

There are good Chrome apps that can help with this, and most of them let you customize the breaks to fit your schedule.

Comment by artemium on Open thread, Dec. 15 - Dec. 21, 2014 · 2014-12-17T07:19:27.918Z · LW · GW

Yeah, I know there are other filters behind us; I just found it a funny coincidence that someone shared this Bostrom article while I was in the middle of a Facebook discussion about the Great Filter.

But I hope that our Mars probes will discover nothing. It would be good news if we find Mars to be completely sterile. Dead rocks and lifeless sands would lift my spirit.

Comment by artemium on Stupid Questions December 2014 · 2014-12-17T06:59:56.400Z · LW · GW

OK, I have one meta-level super-stupid question. Would it be possible to improve some aspects of the LessWrong webpage, like making it more readable on mobile devices? Every time I read LW on the tram on my way to work, I go insane trying to hit the super-small links on the site. As I work in web development/UI design, I would volunteer to work on this. I think the LW website is generally a bit outdated in terms of both design and functionality, but I presume this is not considered a priority. Still, better readability on mobile screens would be a positive contribution to its purpose.

Comment by artemium on Open thread, Dec. 15 - Dec. 21, 2014 · 2014-12-16T22:53:22.873Z · LW · GW

Horrible news!!! Organic molecules have just been found on Mars. It appears that the Great Filter is ahead of us.

Comment by artemium on Open thread, Nov. 24 - Nov. 30, 2014 · 2014-11-27T18:00:03.097Z · LW · GW

I think your post was interesting, so why the downvote? I'm new here and just trying to understand the karma system. Any particular reason?

Comment by artemium on Open thread, Nov. 24 - Nov. 30, 2014 · 2014-11-27T17:49:39.471Z · LW · GW

A nice blog post about AI and existential risks by my friend and occasional LW poster, inspired by a disappointingly bad debate on Edge.org. Feel free to share it if you like it. I think it is quite a good introduction to Bostrom's and MIRI's arguments.

"The problem is harder than it looks, we don’t know how to solve it, and if we don’t solve it we will go extinct."

http://nthlook.wordpress.com/2014/11/26/why-fear-ai/