Posts

A question of rationality 2009-12-13T02:37:41.722Z · score: 4 (49 votes)

Comments

Comment by mormon2 on A question of rationality · 2009-12-18T01:53:01.395Z · score: 0 (10 votes) · LW · GW

"How would you act if you were Eliezer?"

If I made claims of having a TDT, I would post the math. I would publish papers. I would be sure I had accomplishments to back up the authority with which I speak. I would not spend a single second blogging about rationality. If I used a blog, it would be to discuss the current status of my AI work and to have a select group of intelligent people who could read and comment on it. If I thought FAI was that important, I would spend as much time as possible finding the best people possible to work with, and I would never resort to a blog to try to attract the right sort of people (I cite LW as evidence of the failure of blogging to attract the right people).

Oh, and for the record, I would never start a non-profit to do FAI research. I would also do away with the Singularity Summit and replace it with more AGI conferences. I would also do away with most of SIAI's programs and replace them, and the money they cost, with researchers and scientists, along with some devoted angel funders.

Comment by mormon2 on An account of what I believe to be inconsistent behavior on the part of our editor · 2009-12-18T01:43:29.131Z · score: -2 (10 votes) · LW · GW

"Disregarding the fact that deleting a top level post is as easy as deleting a comment...how do you know this is his reason?"

Because he has done it in the past.

Comment by mormon2 on An account of what I believe to be inconsistent behavior on the part of our editor · 2009-12-17T07:48:55.587Z · score: -6 (18 votes) · LW · GW

"it being a top-level post instead of Open Thread comment. Probably would've been a lot more forgiving if it'd been an Open Thread comment. . ."

Since I am already disliked, let's just say it: the reason EY would prefer my post in the comments section of an open thread is twofold. 1) It can easily be deleted if he doesn't like it. 2) Since I happen to be the exemplar here and most of you don't like me (or don't like being unwitting subjects of social experiments), you would quickly vote my post down to the point where the only way to find it would be to search my profile for it, meaning that the post would go nowhere.

Comment by mormon2 on A question of rationality · 2009-12-16T22:21:30.080Z · score: -6 (18 votes) · LW · GW

Maybe read a bit more carefully:

"I just wanted to see if anyone here could actually look past that (being the issues like spelling, grammar and tone etc.), specifically EY, and post some honest answers to the questions"

Comment by mormon2 on A question of rationality · 2009-12-13T20:36:05.840Z · score: -10 (16 votes) · LW · GW

I apologize I rippled your pond.

"If not, I am not interested in what you think SIAI donors think."

I never claimed to know what SIAI donors think; I asked you to think about that. But I think the fact that SIAI has as little money as it does after all these years speaks volumes about SIAI.

"Given your other behavior, "

Why? Because I ask questions whose honest answers you don't like? Or is it because I don't blindly hang on every word you speak?

"I'm also not interested in any statements on your part that you might donate if only circumstances were X. Experience tells me better."

I never claimed I would donate, nor will I ever as long as I live. As for experience telling you better: you have none, and considering the lack of money SIAI has and your arrogance, you probably never will, so I will keep my own counsel on that part.

"If you are previously a donor to SIAI, I'll be happy to answer you elsewhere."

Why, because you don't want to disrupt the LW image of Eliezer the genius? Or is it because you really are distracted, as I suspect, or have given up because you cannot solve the problem of FAI, another good possibility? These questions are simple and easy to answer, and I see no real reason you can't answer them here and now. If you find the answers embarrassing, then change; if not, then what have you got to lose?

If your next response is as feeble as the last ones have been, don't bother posting it for my sake. You claim you want to be a rationalist; then try applying reason to your own actions and answer the questions asked honestly.

Comment by mormon2 on A question of rationality · 2009-12-13T16:55:04.818Z · score: 5 (23 votes) · LW · GW

I am going to respond to the general overall direction of your responses.

That is feeble, and for those who don't understand why let me explain it.

Eliezer works for SIAI, which is a non-profit where his pay depends on donations. Many people on LW are interested in SIAI, and some even donate to it; others potentially could. When your pay depends on convincing people that your work is worthwhile, it is always worth justifying what you are doing. This becomes even more important when it looks like you're distracted from what you are being paid to do. (If you ever work with a VC and their money, you'll know what I mean.)

When it comes to ensuring that SIAI continues to pay you, especially when you are its FAI researcher, justifying why you are writing a book on rationality, which in no way solves FAI, becomes extremely important.

EY, ask yourself this: what percent of the people who are interested in SIAI and donate are interested in FAI? Then ask what percent are interested in rationality with no clear plan for how that gets to FAI. If the answer to the first is greater than the second, then you have a big problem, because one could interpret the use of your time writing this book on rationality as wasting donated money, unless there is a clear reason how rationality books get you to FAI.

P.S. If you want to educate people to help you out as someone speculated you'd be better off teaching them computer science and mathematics.

Remember, my post drew no conclusions. So, for Yvain: I have cast no stones; I merely ask questions.

Comment by mormon2 on Call for new SIAI Visiting Fellows, on a rolling basis · 2009-12-08T02:23:05.824Z · score: -8 (18 votes) · LW · GW

Responding to both Zack and Tiredoftrolls:

The similarity between DS3618's posts and mine is coincidental. As for mormon1 or psycho, also coincidental. The fact that I have done work with DARPA in no way connects me, unless you suppose only one person has ever worked with DARPA; nor does AI connect me.

For Tiredoftrolls specifically: the fact that you are blithely unaware of the possibility, and the reality, of being smart enough to do a PhD without undergrad work is not my concern. Railing against EY and his lack of math is something more people here should do. I do not now, nor have I ever, agreed with ID or creationism or whatever you want to call that tripe.

To head off the obvious question of why mormon2: because mormon and mormon1 were not available or didn't work. I thought about mormonpreacher but decided against it.

Comment by mormon2 on Call for new SIAI Visiting Fellows, on a rolling basis · 2009-12-05T21:11:27.642Z · score: -3 (15 votes) · LW · GW

"Not to put too fine a point on it, but I find that no matter how much I do, the people who previously told me that I hadn't yet achieved it, find something else that I haven't yet achieved to focus on."

Such is the price of being an innovator or claiming innovation...

"First it's "show you can invent something new", and then when you invent it, "show you can get it published in a journal", and if my priority schedule ever gets to the point I can do that, I have no doubt that the same sort of people will turn around and say "Anyone can publish a paper, where are the prominent scholars who support you?""

Sure, but in the case of TDT you have not invented a decision theory until you have math to back it up. Decision theory is a mathematical theory, not just some philosophical ideas. What's more, thanks to programs like Mathematica there are easy ways to post equations online. For example, put "\[Nu] Derivative[2][w][\[Nu]] + 2 Derivative[1][w][\[Nu]] + ArcCos[z]^2 \[Nu] w[\[Nu]] == 0 /; w[\[Nu]] == Subscript[c, 1] GegenbauerC[\[Nu], z] + Subscript[c, 2] (1/\[Nu]) ChebyshevU[\[Nu], z]" into Mathematica and presto. Further, publication of the theory is a necessary part of getting it accepted, be that good or bad. Not only that, but it helps in formalizing one's ideas, which is positive, especially when working with other people and trying to explain what you are doing.
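To illustrate what "math to back it up" looks like even at toy scale, here is a worked evidential expected-value calculation for Newcomb's problem. The predictor accuracy and payoffs are my own hypothetical numbers, and this is only the standard conditional-expectation exercise, not anything EY has published as TDT:

```python
# Toy Newcomb's problem: expected values conditional on your choice.
# All numbers are hypothetical, chosen only for illustration.
ACCURACY = 0.99                 # assumed predictor accuracy
BIG, SMALL = 1_000_000, 1_000   # opaque-box and transparent-box payoffs

# If you one-box, the predictor most likely foresaw it and filled the big box.
ev_one_box = ACCURACY * BIG

# If you two-box, the predictor most likely foresaw that and left it empty,
# so you probably get only the small box.
ev_two_box = (1 - ACCURACY) * BIG + SMALL

print(f"one-box EV: {ev_one_box:,.0f}  two-box EV: {ev_two_box:,.0f}")
```

On these numbers one-boxing dominates by a wide margin; the point is only that such worked examples are short enough to post, whatever decision theory one actually defends.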

"and after that they will say "Does the whole field agree with you?" I have no personal taste for any part of this endless sequence except the part where I actually figure something out. TDT is rare in that I can talk about it openly and it looks like other people are actually making progress on it."

There are huge areas of non-FAI-specific work, and people whose help would be of value: for example, knowledge representation, embodiment (virtual or real), and sensory stimulus recognition. Each of these will need work to make FAI practical, and there are people who can help you and probably know more about those specific areas than you.

Comment by mormon2 on Call for new SIAI Visiting Fellows, on a rolling basis · 2009-12-05T20:57:14.057Z · score: 0 (14 votes) · LW · GW

How am I a troll? Did I not make a valid point? Have I not made other valid points? You may disagree with how I say something, but that in no way labels me a troll.

The intention of my comment was to find out what the hope for EY's FAI goals is based on here. I was trying to make the point, with the "zero, zilch" idea, that the faith in EY making FAI is essentially blind faith.

Comment by mormon2 on Call for new SIAI Visiting Fellows, on a rolling basis · 2009-12-05T05:22:30.714Z · score: -1 (25 votes) · LW · GW

"As a curiosity, having one defector in a group who is visibly socially penalized is actually a positive influence on those who witness it (as distinct from having a significant minority, which is a negative influence.) I expect this to be particularly the case when the troll is unable to invoke a similarly childish response."

Wow, I say one negative thing and all of a sudden I am a troll.

Let's consider the argument behind my comment:

Premises: Has EY ever constructed AI of any form: FAI, AGI, or narrow AI? Does EY have any degrees in any relevant field regarding FAI? Is EY backed by a large, well-funded research organization? Could EY get a technical job at such an organization? Does EY have a team of respected experts helping him make FAI? Does EY have a long list of technical, math- and algorithm-rich publications in any area regarding FAI? Has EY ever published a single math paper in, for example, a real math journal like an AMS journal? Has he published findings on FAI in something like an IEEE venue?

The answer to each of these questions is no.

The final question to consider is: if EY's primary goal is to create FAI first, then why is he spending most of his time blogging and working on a book on rationality (which would never be taken seriously outside of LW)?

Answer: this is counter to his stated goal.

So, with all the answers being in the negative, what hope should anyone here hold for EY making FAI? Answer: zero, zilch, none, zip...

If you have evidence to the contrary, for example proof that not all the answers to the above questions are no, then please share it; otherwise I rest my case. If you come back with this lame troll response, I will consider my case proven, closed, and done. Oh, and to be clear, I have no doubt I will fail to sway anyone from the LW/EY worship cult, but the exercise is useful for other reasons.

Comment by mormon2 on Call for new SIAI Visiting Fellows, on a rolling basis · 2009-12-03T16:45:04.972Z · score: -3 (37 votes) · LW · GW

Thank you, that's all I wanted to know. You don't have any math for TDT. TDT is just an idea, and that's it, just like the rest of your AI work. It's nothing more than namby-pamby philosophical mumbo-jumbo... Well, I will spend my time reading people who have a chance of creating AGI or FAI, and it's not you...

To sum up: you have nothing but some ideas for FAI, no theory, no math, and the best defense you have is that you don't care about the academic community. The other key one is that you are the only person smart enough to make and understand FAI. This delusion is fueled by your LW followers.

The latest in lame excuses is this "classified" statement, which is total (being honest here) BS. Maybe if you had it protected under NDA, or a patent pending; but neither is the case. Therefore, since most LW people understanding the math is unlikely, the most probable conclusion is that you're making excuses for your lack of due diligence in study and for not actually producing a single iota of a real theory.

Happy pretense of solving FAI... (hey we should have a holiday)

Further comments refer to the complaint department at 1-800-i dont care....

Comment by mormon2 on Call for new SIAI Visiting Fellows, on a rolling basis · 2009-12-03T02:25:14.383Z · score: 2 (12 votes) · LW · GW

Ok, opinions on the relative merits of the AGI projects mentioned aside, you did not answer my first question, the question I am actually most interested in the answer to: where is the technical work? I was looking for some detail as to what part of step one you are working on. So, if TDT is important to your FAI, how is the math coming? Are you updating LOGI, or are you discarding it and doing it all over?

"The arc of Less Wrong read start to finish should be sufficient for an intelligent person to discard existing AGI projects - once your "mysterious answer to mysterious question" detector is initialized and switched on, and so on - so I consider my work of explanation in that area to be pretty much done. Anything left is public relations, taking an existing explanation and making it more persuasive."

Ok, this being said, where is your design? This reminds me of a movement in physics that wants to discard GR because it fails to explain some phenomena and is part of the rift in physics. Of course these people have nothing to replace GR with, so the fact that you can argue that GR is not completely right is a bit pointless until you have something to replace it with, GR not being totally wrong. That being said, how is your dismissal of the rest of AGI any better than that?

It's easy enough to sit back, with no formal theories or in-progress AGI code out for public review, and say all these other AGI projects won't work. Even if that is the case, it raises the question: where are your contributions, your code, your published papers, etc.? Without your formal work being out for public review, is it really fair to say that essentially all the current AGI projects are wrong-headed?

"So tell me, have you worked with anyone from DARPA (I have worked with DARPA) or Intel? Have you ever worked at a research organization with millions or billions of dollars to throw at R&D? If not, how can you be so sure?"

So I take it from the fact that you didn't answer the question that you have in fact not worked for Intel or DARPA etc. That being said, I think a measure of humility is in order before you categorically dismiss them as minor players in FAI. Sorry if that sounds harsh, but there it is (I prefer to be blunt because it leaves no room for interpretation).

Comment by mormon2 on Call for new SIAI Visiting Fellows, on a rolling basis · 2009-12-02T07:07:52.816Z · score: 5 (17 votes) · LW · GW

"That's my end of the problem."

Ok, so where are you in the process? Where is the math for TDT? Where is the updated version of LOGI?

"Not nearly high-end enough. International Math Olympiad, programming olympiads, young superstars of other types, older superstars with experience, and as much diversity of genius as I can manage to pack into a very small group. The professional skills I need don't exist, and so I look for proof of relevant talent and learning rate."

So tell me, have you worked with anyone from DARPA (I have worked with DARPA) or Intel? Have you ever worked at a research organization with millions or billions of dollars to throw at R&D? If not, how can you be so sure?

"Most people who consider this problem do not realize the degree to which it is sheerly impossible to put up a job ad for the skills you need. LW itself is probably as close as it gets."

If that's the case, why does Ben Goertzel have a company working on AGI, the very problem you're trying to solve? Why does he actually have a design, with some portions implemented, while you have none implemented? What about all the other AGI work being done, like LIDA, SOAR, and whatever Peter Voss calls his AGI project? Are all of those just misguided, since I would imagine they hire the people who work on the projects?

Just an aside for some posters above this post who have been talking about Java as the superior choice to C++: what planet do you come from? Java is slower than C++ because of all the overhead of running the code. You are much better off with C++ or Ct or some other language without all the overhead, especially since one can use OpenCL or CUDA to take advantage of the GPU for more computing power.

Comment by mormon2 on Call for new SIAI Visiting Fellows, on a rolling basis · 2009-12-01T18:06:38.962Z · score: 8 (14 votes) · LW · GW

Is it just me, or does this seem a bit backwards? SIAI is trying to make FAI, yet so much of its time is spent on the risks and benefits of an FAI that doesn't exist. For a task that is estimated to be so dangerous and so world-changing, would it not behoove SIAI to be the first to make FAI? If that is the case, then I am a bit confused as to the strategy SIAI is employing to accomplish the goal of FAI.

Also, if FAI is the primary goal here, then it seems to me that one should be looking not at LessWrong but at gathering people from places like Google, Intel, IBM, and DARPA. Why would you choose to pull from a predominantly amateur talent pool like LW (sorry to say it, but there it is)?

Comment by mormon2 on Less Wrong Q&A with Eliezer Yudkowsky: Ask Your Questions · 2009-11-25T03:04:13.062Z · score: 3 (5 votes) · LW · GW

I am going to take a shortcut and respond to both posts:

komponisto: Interesting because I would define success in terms of the goals you set for yourself or others have set for you and how well you have met those goals.

In terms of respect, I would question the claim, not within SIAI or within this community necessarily, but within the larger community of experts in the AI field. How many people really know who he is? How many people who need to know him (because, even if he won't admit it, EY will need help from academia and industry to make FAI) know him and, more importantly, respect his opinion?

ABranco: I would not say success is a personal measure; I would say in many ways it's defined by the culture. For example, in America I think it's fair to say that many would associate wealth and possessions with success. This may or may not be right, but it cannot be ignored.

I think your last point is on the right track, with EY starting SIAI and LessWrong despite his lack of formal education. Though one could argue about the relative significance, or the level of success, those two things represent.

Comment by mormon2 on Less Wrong Q&A with Eliezer Yudkowsky: Ask Your Questions · 2009-11-23T19:37:34.762Z · score: 4 (8 votes) · LW · GW

"You've achieved a high level of success as a self-learner, without the aid of formal education."

How do you define high level of success?

Comment by mormon2 on Request For Article: Many-Worlds Quantum Computing · 2009-11-20T08:36:18.256Z · score: 5 (9 votes) · LW · GW

I recommend some reading: http://en.wikipedia.org/wiki/Quantum_computer Start with this, and then if you want more detail look at http://arxiv.org/pdf/quant-ph/9812037v1 The math isn't too difficult if you are familiar with the math involved in QM: things like vectors and matrices. I also skimmed http://www.fxpal.com/publications/FXPAL-PR-07-396.pdf and it seems worth a read.
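For anyone weighing whether to follow those links: the linear algebra involved really is just vectors and matrices. As a minimal sketch (plain Python, no quantum library; the helper names are my own), here is a Hadamard gate applied to a qubit, with measurement probabilities read off via the Born rule:

```python
import math

def apply(gate, state):
    """Matrix-vector product: amplitudes after applying a 2x2 gate to a qubit."""
    return [sum(gate[i][j] * state[j] for j in range(2)) for i in range(2)]

def probabilities(state):
    """Born rule: measurement probabilities are squared amplitude magnitudes."""
    return [abs(a) ** 2 for a in state]

s = 1 / math.sqrt(2)
H = [[s, s],
     [s, -s]]                  # Hadamard gate

ket0 = [1 + 0j, 0 + 0j]        # the |0> state
plus = apply(H, ket0)          # (|0> + |1>)/sqrt(2): equal superposition
back = apply(H, plus)          # H is its own inverse, so this returns to |0>
```

Running `probabilities(plus)` gives two values near 0.5, and `probabilities(back)` recovers (1, 0) up to rounding; that is essentially all the machinery a single-qubit calculation needs.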

As to the author of the post to whom you're responding: what is your level of knowledge of quantum computing and quantum mechanics? By this I mean, is your reading on the topic confined to Scientific American and what Eliezer has written, or have you read, for example, Bohm on quantum theory?

Comment by mormon2 on A Less Wrong singularity article? · 2009-11-18T16:46:59.722Z · score: 3 (7 votes) · LW · GW

"In what contexts is the action you mention worth performing?"

If the paper were endorsed by the top minds who support the singularity; ideally, if it were written by them. So, for example, Ray Kurzweil: whether you agree with him or not, he is a big voice for the singularity.

"Why are "critics" a relevant concern?"

Because technical science moves forward through peer review and the proving and disproving of hypotheses. Critics help prevent the circle-jerk phenomenon in science, assuming their critiques are well thought out, and outside review can sometimes catch fatal flaws in ideas that are not necessarily caught by those who work in the field.

"In my perception, normal technical science doesn't progress by criticism, it works by improving on some of existing work and forgetting the rest. New developments allow to see some old publications as uninteresting or wrong."

Have you ever published in a peer-reviewed journal? If not, I will ignore the last portion of your post; if so, perhaps you could expound on it a bit more.

Comment by mormon2 on Less Wrong Q&A with Eliezer Yudkowsky: Ask Your Questions · 2009-11-15T16:59:01.656Z · score: 3 (11 votes) · LW · GW

"Since I don't expect senior traditional-AI-folk to pay me any such attention short of spending a HUGE amount of effort to get it and probably not even then, I haven't, well, expended a huge amount of effort to get it.

Why? If you expect to make FAI you will undoubtedly need people in the academic communities' help; unless you plan to do this whole project by yourself or with purely amateur help. ..."

"That 'probably not even then' part is significant."

My implication was that the idea that he can create FAI completely outside the academic or professional world is ridiculous when speaking from an organization like SIAI, which does not have the people or money to get the job done. In fact, SIAI doesn't have enough money to pay for the computing hardware to make human-level AI.

"Now that is an interesting question. To what extent would Eliezer say that conclusion followed? Certainly less than the implied '1' and probably more than '0' too."

If he doesn't agree with it now, I am sure he will when he runs into the problem of not having the money to build his AI, or not having enough time in the day to solve the problems associated with constructing it. Not to mention that when you close yourself off to outside influence that much, you often end up with ideas riddled with problems that someone on the outside, had they looked at the idea, would have pointed out.

If you have never taken something from idea to product, this can be hard to understand.

Comment by mormon2 on Less Wrong Q&A with Eliezer Yudkowsky: Ask Your Questions · 2009-11-15T03:50:15.337Z · score: 8 (16 votes) · LW · GW

"and then take out whatever time was needed to collect the OB/LW posts in our discussion into a sequence with summaries."

Why? No one in the academic community would spend that much time reading all that blog material for answers that would best be given concisely in a published academic paper. So why not spend the time? Unless you think you are such an expert in the field as to not need the academic community. If that is the case, where are your publications, where are your credentials, where is the proof of this expertise (expert being a term applied based on actual knowledge and accomplishments)?

"Since I don't expect senior traditional-AI-folk to pay me any such attention short of spending a HUGE amount of effort to get it and probably not even then, I haven't, well, expended a huge amount of effort to get it."

Why? If you expect to make FAI, you will undoubtedly need help from people in the academic community, unless you plan to do this whole project by yourself or with purely amateur help. I think you would admit that in its current form SIAI has a zero probability of creating FAI first. That being said, your best hope is to convince others that the cause is worthwhile, and in that case you are looking at the professional and academic AI community.

I am sorry, but I prefer to be blunt... that way there is no mistaking meanings.

Comment by mormon2 on Less Wrong Q&A with Eliezer Yudkowsky: Ask Your Questions · 2009-11-14T18:06:42.130Z · score: 1 (9 votes) · LW · GW

"Do you think that accomplishments, when present, are fairly accurate proof of intelligence (and that you are skeptical of claims thereto without said proof)"

Couldn't have said it better myself. The only addition would be that IQ is an insufficient measure although it can be useful when combined with accomplishment.

Comment by mormon2 on Less Wrong Q&A with Eliezer Yudkowsky: Ask Your Questions · 2009-11-14T02:24:39.734Z · score: 2 (6 votes) · LW · GW

No, because I don't believe in using IQ as a measure of intelligence (having taken an IQ test) and I think accomplishments are a better measure (quality over quantity obviously). If you have a better measure then fine.

Comment by mormon2 on Less Wrong Q&A with Eliezer Yudkowsky: Ask Your Questions · 2009-11-14T01:56:06.665Z · score: 0 (10 votes) · LW · GW

Ok, here are some people:

Nick Bostrom (http://www.nickbostrom.com/cv.pdf); Stephen Wolfram (published his first particle physics paper at 16, I think, and invented one of, if not the, most successful math programs ever, and in my opinion the best ever); and a couple of people whose names I won't mention, since I doubt you'd know them, from Johns Hopkins Applied Physics Lab, where I did some work. Etc.

I say this because these people have numerous significant contributions to their fields of study. I mean real technical contributions that move the field forward, not just terms and vague to-be-solved problems.

My analysis of EY is based on having worked in AI and knowing people in AI, none of whom talk about their importance in the field as much as EY while having as few papers and breakthroughs as he does. If you want to claim you're smart, you have to have accomplishments to back it up, right? Where are EY's publications? Where is the math for his TDT? The world's hardest math problem is unlikely to be solved by someone who needs to hire someone with more depth in the field of math. (Both statements can be referenced to EY.)

Sorry this is harsh but there it is.

Comment by mormon2 on Less Wrong Q&A with Eliezer Yudkowsky: Ask Your Questions · 2009-11-13T17:53:45.894Z · score: 0 (14 votes) · LW · GW

"I would heavily dispute this. Startups with 1-5 people routinely out-compete the rest of the world in narrow domains. Eg., Reddit was built and run by only four people, and they weren't crushed by Google, which has 20,000 employees. Eliezer is also much smarter than most startup founders, and he cares a lot more too, since it's the fate of the entire planet instead of a few million dollars for personal use."

I don't think you really understand this. Having recently been edged out by a large corporation in a narrow field of innovation as a small startup, and having been in business for many years, I can tell you that the sort of thing you're describing happens often.

As for your last statement, I am sorry, but you have not met that many intelligent people if you believe this. If you ever get out into the world, you will find plenty of people who will make you feel like you're dumb and who make EY's intellect look infantile.

I might be more inclined to agree if EY would post some worked out TDT problems with the associated math. hint...hint...

Comment by mormon2 on Less Wrong Q&A with Eliezer Yudkowsky: Ask Your Questions · 2009-11-11T17:22:28.691Z · score: 3 (5 votes) · LW · GW

I think we can take a good guess at what he will say for the last part of this question: Bayes' theorem, statistics, basic probability theory, mathematical logic, and decision theory.
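Since Bayes' theorem heads that list, it is worth noting how little machinery it takes. A toy diagnostic-test update, with hypothetical rates of my own choosing, is only a few lines:

```python
# Bayes' theorem: P(H|E) = P(E|H) * P(H) / P(E), with hypothetical test rates.
prior = 0.01           # P(disease)
sensitivity = 0.90     # P(positive | disease)
false_positive = 0.05  # P(positive | no disease)

# Law of total probability for the evidence.
p_positive = sensitivity * prior + false_positive * (1 - prior)

# Posterior probability of disease given a positive test.
posterior = sensitivity * prior / p_positive
```

Even with a 90%-sensitive test, the posterior here comes out to only about 15%, because the disease is rare; exactly the kind of short calculation these prerequisites enable.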

But why ask the question with this statement made by EY: "Since you don't require all those other fields, I would like SIAI's second Research Fellow to have more mathematical breadth and depth than myself." (http://singinst.org/aboutus/opportunities/research-fellow)

My point is he has answered this question before...

I add to this my own question; actually, it is more of a request: to see EY demonstrate TDT, with some worked-out math, on a whiteboard or some such in the video.

Comment by mormon2 on Open Thread: November 2009 · 2009-11-03T17:26:26.169Z · score: 5 (13 votes) · LW · GW

Ok, I am going to reply to both soreff and Thomas:

Particle physics isn't about making technology, at least at the moment; it is concerned with understanding the fundamental elements of our world. As for the details of the relevance of particle physics, I won't waste the time to explain. Obviously neither of you has any real experience in the field. So this concludes what comments I am going to make on this topic until someone with real physics knowledge decides to comment.

Comment by mormon2 on Open Thread: November 2009 · 2009-11-03T00:16:24.262Z · score: 16 (20 votes) · LW · GW

I was wondering if Eliezer could post some details on his current progress towards the problem of FAI? Specifically details as to where he is in the process of designing and building FAI. Also maybe some detailed technical work on TDT would be cool.

Comment by mormon2 on Open Thread: November 2009 · 2009-11-02T16:12:45.181Z · score: 5 (15 votes) · LW · GW

What? Who voted this up?

"It is also quite possible that the Higgs boson will come out and it will be utterly useless, as most of those particles are."

So understanding the sub-atomic level for things like nano-scale technology is, in your book, a complete waste of time? Understanding the universe, I can only assume, is also a waste of time, since in your book the discovery of the Higgs boson is essentially meaningless in all probability.

"You can't do a thing with them and they don't tell you very much. Of course, the euphoria will be massive."

Huh? Speaking as someone who studies particle physics to one (you) who obviously doesn't (and I am going to be hard on you): you should refrain from making such comments in nearly total ignorance. The fact that you don't understand the significance of the Higgs boson or particle physics should have been a cue that you have nothing to contribute to this thread.

Sorry but there it is...

Comment by mormon2 on Open Thread: November 2009 · 2009-11-02T15:56:29.450Z · score: 4 (6 votes) · LW · GW

This is going to sound horrible but here goes:

In my experience, a school's value depends on how smart you are. For example, if you can teach yourself math, you can often test out of classes. If you're really smart, you may be able to get out of everything but grad school. Depending on what you want to do, you may or may not need grad school.

Do you have a preferred career path? If so, have you tried getting into it without further schooling? The other question is, what have you done outside of school? Have you started any businesses or published papers?

With a little more detail I think the question can be better answered.

Comment by mormon2 on Open Thread: October 2009 · 2009-10-28T16:00:05.273Z · score: 9 (9 votes) · LW · GW

I apologize if this is blunt or already addressed, but it seems to me that the voting system here has a large user-based problem: the karma system has become nothing more than a popularity indicator.

It seems to me that many here vote up or down based on some gut-level agreement or disagreement with the comment or post. For example, it is very troubling that some single-line comments of agreement, which in my opinion should have 0 karma, end up with massive amounts, while comments that oppose the popular beliefs here are voted down despite being important to the pursuit of rationality.

It was my understanding that karma should be an indicator of importance and a way of filtering out useless information, not just a way of indicating that a post is popular. The popularity of a post is nearly meaningless when you have such a range of experience and inexperience on a blog such as this.

Just a thought feel free to disagree...

Comment by mormon2 on Open Thread: October 2009 · 2009-10-17T08:30:22.433Z · score: 4 (4 votes) · LW · GW

True, but the Blue Brain project is still very interesting, and it has provided, and hopefully will continue to provide, interesting results. Whether you agree with his theory or not, the technical side of what they are doing is very interesting.

Comment by mormon2 on How to think like a quantum monadologist · 2009-10-16T01:40:05.398Z · score: 4 (6 votes) · LW · GW

"Articles should be legible to the audience. You can't just throw in a position written in terms that require special knowledge not possessed by the readers. It may be interesting, but then the goal should be exposition, showing importance and encouraging study."

I both agree and disagree with this statement. I agree that a post should be written for its audience. I disagree in that people here spend a lot of time talking about QM, and if they do not have the knowledge to understand this post then they should not be talking about QM. The other issue is that this post may be too muddled to tell what special knowledge it really requires until the author clarifies it.

General Post Question: The one big thing that confuses me is the title. Do you actually mean Quantum Monadology? If so, are you claiming some use of the formal term monad, or a definition of your own? I don't see this post as following from any real definition of monads as seen in the scientific literature.

General Post Comment: To be blunt, I think this post is a bit muddled, with ideas from all over the place put into one big pot, and the result is not very enlightening. If you haven't already, I suggest you look up the precise definition of monad. I can't find it now, but there was a paper a while back on this topic of formalizing QM within the formal framework of monads.

Comment by mormon2 on The power of information? · 2009-10-13T16:09:00.366Z · score: 5 (5 votes) · LW · GW

Am I the only one who is reminded of game theory reading this post? In fact it basically sounds like: given a set of agents engaged in competitive behavior, how does "information" (however you define it; I think others are right to ask for clarification) affect the likely outcome? That said, I am confused by the overly simple military examples, and I wonder if one could find a simpler system to use. I am also confused about which general principles you want to derive from this system of inequalities.

Comment by mormon2 on What Program Are You? · 2009-10-13T01:12:52.166Z · score: 3 (11 votes) · LW · GW

"TDT is very much a partial solution, a solution-fragment rather than anything complete. After all, if you had the complete decision process, you could run it as an AI, and I'd be coding it up right now."

I must nitpick here:

First, you say TDT is an unfinished solution, but from all the material you have posted there is no evidence that TDT is anything more than a vague idea; is this the case? If not, could you post some math and example problems for TDT?

Second, I hope it was said in haste, not in complete seriousness, that if TDT were complete you could run it as an AI and would be coding it now. Does this mean you believe that TDT is all that is required on the theory end of AI? Or are you stating that the other hard problems, such as learning, sensory input and recognition, and knowledge representation, are all solved for your AI? If that is the case, I would love to see a post on it.

Thanks