That's actually an incredibly good, rational method for learning to write. It allows the writer to study an existing model, practice, then compare their work against the model afterwards. One could do this with pretty much any author, journal, or paper that you enjoy, and it should be effective no matter the type of writing. Now that I think about it, I'm going to suggest it to some friends of mine who teach English classes in the area. Thanks!
I think you're on the right track, Caesium. I've arrived at 41 years with a dynamic 25-year plan ahead of me, and I would suggest that you spend some time spreading yourself among very different activities and causes for at least four years, then consolidate your time into what you enjoy most. You will find that not all charitable organizations are equal, and there will be some causes (whether charitable or not) that really grab you by the short hairs and demand your attention.

Think of it rather like a second run at your school life - you start with as wide a net as possible, gradually close in on what you're good at or enjoy, then focus on what works best for you. The benefit of your schooling will allow you the luxury of choosing your path in life, but make certain that you've at least taken a peek down the others before you go too far.

Lastly, I believe the desire to make money for the purpose of donating it is fairly recent. The Bill and Melinda Gates Foundation was created at Melinda's behest well after Bill had achieved more wealth than any nerd imagined. I'm definitely not knocking it, as I myself donate to PBS and am on the board of several charitable organizations. The goal of making money to donate money is a trend that I believe speaks very well of the future of humanity as a whole.
"He was going to suck my blood!" -Richard
"Which is what we do to anyone when we tell them we'll be hurt if they don't live our way." -Don
-from Illusions, Richard Bach
My point in posting this as a rationality quote: it is a reminder that we have to stay the course regardless of how strongly others may disagree with the logic by which we rule our lives.
The world is not an inherently kind nor fair place. It is up to us to make it so.
- E.A. Manuel, Jr.
This post brings to mind a fault I see with trying to create a trust system of Friendly AIs. Humans are inherently untrustworthy and random. A robotic AI that is built to be friendly with humans (or at least to interact with them) should, under most conceptions, have a set of rules that minimize its desire to kill us all and turn us into computer fuel. Even in an imperfect world, AIs would be trained to deal with humans in a forthright and honest fashion, to give the truth when asked, and to build assumptions on facts and real information. Humans, however, are irrational creatures that lie, cheat, and steal when it is in our own best interest to do so, and we do it on a regular basis. For those of you who disagree with that premise, consider the litany of laws we are asked to follow on a daily basis, starting with traffic laws.

Imagine a world of AI drivers and place a human in their midst. Then take away all 'rules' that force every driver to move in x direction at y speed on a given roadway. The AI drivers would move with purpose, programmed to set their speed and direction according to the purpose of their travel. Those heading to work at a leisurely pace would drive slower and congregate in one or two areas of the road. The AIs running a speedy errand, or responding to an emergency, would move faster and be programmed to account for the slower vehicles. But a human in their midst would care less about the others than about their own personal issues. They would want to move faster because they like driving fast, or keep to the right because the left lanes make them nervous. Or perhaps they would drive wherever gave them the best view of the sunset, and slow down to enjoy it - forcing the AIs behind them to slow down as well.
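To make that thought experiment concrete, here is a minimal toy simulation - every class name, speed, and parameter below is my own invention for illustration, not anything from the original comment - of a single-file road where each AI driver matches its speed to its purpose and to the car ahead, while one human driver changes speed on a whim:

```python
import random

class AIDriver:
    """Drives at a speed chosen to fit the trip's purpose, and
    slows to match whatever is directly ahead of it."""
    def __init__(self, purpose_speed):
        self.target = purpose_speed  # mph, set by the purpose of travel
        self.speed = purpose_speed

    def step(self, ahead_speed):
        self.speed = min(self.target, ahead_speed)

class HumanDriver:
    """Ignores the traffic around it and drifts with whim: sometimes
    speeding for fun, sometimes slowing to enjoy the sunset."""
    def __init__(self):
        self.speed = 60

    def step(self, ahead_speed):
        # No rule binds the human to the flow (collisions are
        # ignored here for simplicity).
        self.speed = max(20, min(90, self.speed + random.choice([-15, 0, 15])))

def simulate(convoy, steps=10):
    """Convoy is ordered front to back; each car sees only the car ahead."""
    for _ in range(steps):
        ahead = float("inf")  # the lead car has open road
        for car in convoy:
            car.step(ahead)
            ahead = car.speed
    return [round(car.speed) for car in convoy]

random.seed(0)
print("AI-only:", simulate([AIDriver(s) for s in (70, 65, 65, 60)]))
print("Mixed:  ", simulate([AIDriver(70), HumanDriver(), AIDriver(65), AIDriver(60)]))
```

In the AI-only convoy, every car settles at a speed consistent with its purpose; in the mixed convoy, the AIs trailing the human are dragged to whatever speed his whims leave them.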
And when we take the example of an AI that is supposed to work with humans as a receptionist, what does it do when it is faced with a human who lies to get past it? If the human lies convincingly and the AI lets him go, how will the AI react when it finds out the human lied? Are all humans bad? If the same human returns and is now part of the company, will the AI no longer 'trust' that human's information? If a human uses the AI to mess with another human (don't tell me people never use computers to play pranks on each other), how will the AI 'feel' about being used in such a manner? As humans, we have a set of emotions and memories that allow us to deal with people who do such things. Perhaps we would have a stern chat with the guy who tried to get past us, or play a prank back on the gal who messed with us last time. But should computers be equipped with such a mechanism? I really do not believe so. It is a slippery slope for a robot to play tricks on a human. Unless they are very advanced (equipped, say, with body scanners that serve as lie detectors), there is little room for them to do anything but trust us.
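For what it's worth, here is a rough sketch of the sort of mechanism I am doubtful about - a naive per-person trust ledger. Every name and number here is hypothetical, invented purely to illustrate the idea:

```python
class TrustLedger:
    """A naive per-person trust score: everyone starts neutral,
    a verified truth nudges the score up, a discovered lie halves it."""
    def __init__(self, initial=0.5):
        self.initial = initial
        self.scores = {}

    def record(self, person, was_truthful):
        score = self.scores.get(person, self.initial)
        if was_truthful:
            score += 0.1 * (1.0 - score)  # truths earn credit slowly
        else:
            score *= 0.5                  # one lie costs half the score
        self.scores[person] = score

    def admits(self, person, threshold=0.3):
        return self.scores.get(person, self.initial) >= threshold

ledger = TrustLedger()
ledger.record("visitor", was_truthful=False)  # lied past the desk once
print(ledger.admits("visitor"))  # False - still barred even after being hired
```

Which is exactly the slippery slope: the ledger has no context, no forgiveness, and no way to tell a prank from a betrayal, so the human who lied once stays distrusted forever.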
Human nature dictates that we both assume 'someone else' is going to start a group and assume that the task of creating said group is very difficult. So even those people who lurk and say to themselves, "Gosh, I'd love to have a group like that in my city" won't take the first step unless they are given "permission".
You really hit the nail on the head with that sentence, Bentarm.