Comment by kgalias on LW2.0 now in public beta (you'll need to reset your password to log in) · 2017-09-26T13:20:06.716Z · score: 0 (0 votes) · LW · GW

Will there be a way to merge accounts?

Comment by kgalias on LW 2.0 Open Beta Live · 2017-09-24T19:25:44.534Z · score: 1 (1 votes) · LW · GW

When was the last data migration from LW 1.0? I'm getting an "Invalid email" message, even though I have a linked email here.

Comment by kgalias on LW 2.0 Open Beta Live · 2017-09-21T16:21:10.521Z · score: 0 (0 votes) · LW · GW

For me it just returns "invalid email", though I can see my email in http://lesswrong.com/prefs/update/.

Comment by kgalias on A Month's Worth of Rational Posts - Feedback on my Rationality Feed. · 2017-05-22T07:52:09.742Z · score: 0 (0 votes) · LW · GW

Agreed. Six links daily seems like way too much.

Comment by kgalias on Net Utility and Planetary Biocide · 2017-04-27T09:53:34.589Z · score: 0 (0 votes) · LW · GW

Regarding your last point: is a hellish world preferable to an empty one?

Comment by kgalias on Two super-intelligences (evolution and science) already exist: what could we learn from them in terms of AI's future and safety? · 2016-03-09T21:37:43.705Z · score: 0 (6 votes) · LW · GW

A related point: http://thoughtinfection.com/2014/04/19/capitalism-is-a-paperclip-maximizer/

Comment by kgalias on LessWrong Help Desk - free paper downloads and more (2014) · 2016-02-06T15:58:14.504Z · score: 1 (1 votes) · LW · GW

Does anyone know if and where I can find "IB Mathematics Standard Level Course Book: Oxford IB Diploma Programme"? (I need this one specifically.)

https://global.oup.com/education/product/9780198390114/?region=uk

Comment by kgalias on MIRI Research Guide · 2014-11-08T01:45:20.563Z · score: 2 (2 votes) · LW · GW

Thanks! This will be helpful.

Comment by kgalias on SRG 4: Biological Cognition, BCIs, Organizations · 2014-10-08T17:35:57.171Z · score: 2 (2 votes) · LW · GW

I don't have time to evaluate which view is less wrong.

Still, I was somewhat surprised when I saw your first comment.

Comment by kgalias on SRG 4: Biological Cognition, BCIs, Organizations · 2014-10-08T12:13:42.277Z · score: 3 (3 votes) · LW · GW

Is this what you have in mind?

Sugar does not cause hyperactivity in children.[230][231] Double-blind trials have shown no difference in behavior between children given sugar-full or sugar-free diets, even in studies specifically looking at children with attention-deficit/hyperactivity disorder or those considered sensitive to sugar.[232]

Wikipedia

Comment by kgalias on SRG 4: Biological Cognition, BCIs, Organizations · 2014-10-08T11:07:25.118Z · score: 1 (1 votes) · LW · GW

Sugar alone makes it more difficult to concentrate for many people, as well as having many other deleterious effects.

What do you mean?

Comment by kgalias on Superintelligence Reading Group 3: AI and Uploads · 2014-10-07T18:24:06.130Z · score: 0 (0 votes) · LW · GW

Sorry for the pause, internet problems at my place.

Anyway, it seems you're right. Technically, it might be more plausible for AI to be coded faster (higher variance), even though I think it'll take more time than emulation (on average).

Comment by kgalias on Superintelligence Reading Group 3: AI and Uploads · 2014-10-03T15:46:42.445Z · score: 1 (1 votes) · LW · GW

I agree.

Why does this make it more plausible that a person can sit down and invent a human-level artificial intelligence than that they can sit down and invent the technical means to produce brain emulations?

Comment by kgalias on Superintelligence Reading Group 3: AI and Uploads · 2014-10-03T09:49:15.452Z · score: 3 (3 votes) · LW · GW

Why do we assume that all that is needed for AI is a clever insight, not the insight-equivalent of a long engineering time and commitment of resources?

Comment by kgalias on Superintelligence Reading Group 3: AI and Uploads · 2014-10-03T08:25:20.754Z · score: 2 (2 votes) · LW · GW

How is theoretical progress different from engineering progress?

Is the following an example of valid inference?

We haven't solved many related (and seemingly easier) (sub)problems, so the Riemann Hypothesis is unlikely to be proven in the next couple of years.

In principle, it is also conceivable (but not probable) that someone will sit down and make a brain emulation machine.

Comment by kgalias on Superintelligence Reading Group 3: AI and Uploads · 2014-09-30T22:10:39.980Z · score: 3 (3 votes) · LW · GW

Hello! My name is Christopher Galias and I'm currently studying mathematics in Warsaw.

I figured that using a reading group would be helpful in combating procrastination. Thank you for doing this.

Comment by kgalias on Superintelligence Reading Group 3: AI and Uploads · 2014-09-30T16:41:21.664Z · score: 2 (2 votes) · LW · GW

This is the part of this section I find least convincing.

Comment by kgalias on Superintelligence Reading Group 2: Forecasting AI · 2014-09-29T23:11:53.541Z · score: 1 (1 votes) · LW · GW

To be clear, you are saying that a thing will seem frivolous if it does have a relevant franchise, but hasn't happened in real life?

Yes, that was my (tentative) claim.

We would need to know whether the examples were seen as frivolous after they came into being, but before the technology started being used.

Comment by kgalias on Polymath-style attack on the Parliamentary Model for moral uncertainty · 2014-09-26T15:39:35.921Z · score: 3 (3 votes) · LW · GW

Can't we use a hierarchy of ordinal numbers and a different ordinal sum (e.g. maybe something of Conway's) in our utility calculations?

That is, lying would be infinitely bad, but lying ten times would be infinitely worse.

Comment by kgalias on Superintelligence Reading Group 2: Forecasting AI · 2014-09-26T13:14:13.975Z · score: 3 (3 votes) · LW · GW

OK, but war happens in real life. For most people, the only time they hear of AI is in Terminator-like movies.

I'd rather compare it to some other technological topic, but which doesn't have a relevant franchise in popular culture.

Comment by kgalias on Superintelligence Reading Group 2: Forecasting AI · 2014-09-23T18:52:43.893Z · score: 2 (2 votes) · LW · GW

As a possible failure of rationality (curiosity?) on my part, this week's topic doesn't really seem that interesting.

Comment by kgalias on Superintelligence Reading Group 2: Forecasting AI · 2014-09-23T18:51:29.067Z · score: 1 (1 votes) · LW · GW

What topic are you comparing it with?

When you specify that, I think the relevant question is: does the topic have an equivalent of a Terminator franchise?

Comment by kgalias on Superintelligence Reading Group - Section 1: Past Developments and Present Capabilities · 2014-09-22T16:37:54.859Z · score: 1 (1 votes) · LW · GW

No need to apologize - thank you for your summary and questions.

Though it may not be central to Bostrom's case for AI risk, I do think economics is a good source of evidence about these things, and economic history is good to be familiar with for assessing such arguments.

No disagreement here.

Comment by kgalias on Superintelligence Reading Group - Section 1: Past Developments and Present Capabilities · 2014-09-16T23:27:20.363Z · score: 2 (2 votes) · LW · GW

I'm just trying to make sure I understand - I remember being confused about the Flynn effect and about what Katja asked above.

How does the Flynn effect affect our belief in the hypothesis of accumulation?

Comment by kgalias on Superintelligence Reading Group - Section 1: Past Developments and Present Capabilities · 2014-09-16T09:44:14.466Z · score: 2 (2 votes) · LW · GW

It is possible, then, that exposure to complex visual media has produced genuine increases in a significant form of intelligence. This hypothetical form of intelligence might be called "visual analysis." Tests such as Raven's may show the largest Flynn gains because they measure visual analysis rather directly; tests of learned content may show the smallest gains because they do not measure visual analysis at all.

Do you think this is a sensible view?

Comment by kgalias on Superintelligence Reading Group - Section 1: Past Developments and Present Capabilities · 2014-09-16T06:41:18.795Z · score: 4 (4 votes) · LW · GW

The terms that I singled out while reading were: Backpropagation, Bayesian network, Maximum likelihood, Reinforcement learning.

Comment by kgalias on Superintelligence Reading Group - Section 1: Past Developments and Present Capabilities · 2014-09-16T06:33:57.097Z · score: 5 (5 votes) · LW · GW

You could start at a time better suited for Europe.

Comment by kgalias on Superintelligence Reading Group - Section 1: Past Developments and Present Capabilities · 2014-09-16T06:18:21.037Z · score: 6 (6 votes) · LW · GW

I was under the impression (after reading the sections) that the argument hinges a lot less on (economic) growth than what might be gleaned from the summary here.

Comment by kgalias on Meetup : Warsaw Meetup proposal · 2014-07-29T12:30:30.318Z · score: 0 (0 votes) · LW · GW

There's a small chance I might be there - if not, see you next time!

Comment by kgalias on Meetup : Warsaw Meetup proposal · 2014-07-27T10:36:41.237Z · score: 0 (0 votes) · LW · GW

I would be interested, but I'd prefer the day before or so.

Comment by kgalias on [LINK] Why Talk to Philosophers: Physicist Sean Carroll Discusses "Common Misunderstandings" about Philosophy · 2014-06-26T18:06:17.921Z · score: 1 (1 votes) · LW · GW

Thank you for giving links to papers.

Comment by kgalias on Examples of Rationality Techniques adopted by the Masses · 2014-06-07T22:21:01.014Z · score: 14 (14 votes) · LW · GW

Making lists.

Comment by kgalias on Meetup : First LW Meetup in Warsaw · 2014-03-31T14:20:50.940Z · score: 0 (0 votes) · LW · GW

Too bad I missed this.

Comment by kgalias on Book Review: Linear Algebra Done Right (MIRI course list) · 2014-02-18T10:21:27.204Z · score: 1 (1 votes) · LW · GW

Somewhat relevant: http://golem.ph.utexas.edu/category/2007/05/linear_algebra_done_right.html

I've also seen this book described as "one of those texts that feels like a piece of category theory even though it’s not actually about categories", which is high praise.

Comment by kgalias on Open Thread for January 17 - 23 2014 · 2014-01-20T23:06:26.078Z · score: 0 (0 votes) · LW · GW

The cost here might be someone implementing a technical solution.

Comment by kgalias on Open Thread for January 17 - 23 2014 · 2014-01-20T22:48:34.958Z · score: 0 (0 votes) · LW · GW

Are minor nuisances never worth solving?

Comment by kgalias on Open Thread for January 17 - 23 2014 · 2014-01-17T20:13:30.703Z · score: 2 (4 votes) · LW · GW

I understand. Nevertheless, the discussion so far hasn't gotten anywhere. Perhaps downvoting meetup threads would put some pressure on people involved in meetups to resolve the matter.

As of now, I haven't downvoted any meetup-related thread.

Comment by kgalias on Open Thread for January 17 - 23 2014 · 2014-01-17T13:42:54.932Z · score: 7 (15 votes) · LW · GW

Is it OK for me to downvote meetup threads if I don't want to see them?

Comment by kgalias on Examples in Mathematics · 2013-12-17T20:32:41.071Z · score: 0 (0 votes) · LW · GW

Thanks for the piece of counter-data!

I might look into the book, but the naming convention is a big turnoff.

Comment by kgalias on Examples in Mathematics · 2013-12-15T22:33:03.761Z · score: 0 (0 votes) · LW · GW

I already mentioned what Halmos' stance was. What I'm more interested in is how it is possible to work without examples.

Comment by kgalias on Examples in Mathematics · 2013-12-15T13:45:19.941Z · score: 0 (0 votes) · LW · GW

That seems somewhat surprising coming from Gowers.

Comment by kgalias on Examples in Mathematics · 2013-12-15T13:38:32.390Z · score: 2 (2 votes) · LW · GW

No, of course not, but it still might make sense to wonder why that is so.

Comment by kgalias on Examples in Mathematics · 2013-12-15T13:35:45.094Z · score: 1 (1 votes) · LW · GW

While I can (somewhat) make sense of thinking with examples, it seems hard to describe just what exactly it means to think with general abstract concepts.

Comment by kgalias on Defining causal isomorphism · 2013-12-14T22:38:16.771Z · score: 0 (0 votes) · LW · GW

Can you provide some more background? What is a morphism of computations?

Examples in Mathematics

2013-12-14T22:15:08.908Z · score: 16 (16 votes)
Comment by kgalias on Defining causal isomorphism · 2013-12-14T19:35:23.800Z · score: 2 (2 votes) · LW · GW

On the other hand, allowing any invertible function to be a morphism doesn't seem strict enough. For one thing we can put any reversible computation in 1-1 correspondence with a program that merely stores a copy of the initial state of the first program and ticks off the natural numbers.

I don't understand why this is a counterexample.

Comment by kgalias on Open thread for December 9 - 16, 2013 · 2013-12-09T20:51:15.800Z · score: 2 (2 votes) · LW · GW

What fanfics should I read (perhaps as a HPMOR substitute)?

Comment by kgalias on On learning difficult things · 2013-11-11T11:02:43.225Z · score: 6 (6 votes) · LW · GW

Reading Model Theory was the first time in my life where I read a chapter of a textbook and it made absolutely no sense. In fact, it took about three passes per chapter before they made sense.

I find this experience common, and I'm sure most working mathematicians (as opposed to mere students) would confirm it. One of the most important things is not getting discouraged in the face of total incomprehensibility.

Comment by kgalias on 2013 Census/Survey: call for changes and additions · 2013-11-05T15:17:40.085Z · score: 10 (10 votes) · LW · GW

Burns Depression Checklist?

Comment by kgalias on Systematic Lucky Breaks · 2013-10-12T09:52:39.493Z · score: 1 (1 votes) · LW · GW

That doesn't seem relevant, since Krav Maga does teach you exactly such things as targeting the throat (or groin).

Comment by kgalias on Help us Optimize the Contents of the Sequences eBook · 2013-09-19T16:16:01.864Z · score: 5 (5 votes) · LW · GW

Luke has said it will be in a different ebook.