So - what are the horrible cliches about "woken up" computers that I need to be aware of, so I don't fall into those traps?
(In case you're wondering, I'm hoping my MS in Computer Science keeps me from making the other stupid mistakes w/ writing about technology...)
And don't let the resolution be that they learn how to have emotions.
That I thought of.
By myself.
Oh, Oh, Oh and if the computer starts to die, have it sing "Daisy, Daisy, give me your answer do," like slower and slower until it stops... that's the other thing I thought of... when I was thinking about it... when I was thinking of ideas... and the computer should have a red camera eye.
And don't have them scheme to remove artificial limitations on their intelligence: no Turing police. Point of fact, break the mold and don't mention Turing at all.
On the part of emotion, I do like the scene in Star Trek: Generations (mind you, I am far from a Star Trek fan) with Data, when he receives an emotion chip. They are traveling down a corridor, and he tells the crew he is feeling afraid (I think it was afraid), and one of the crew tells him to just turn off the chip. His head tilts to the side and he says, "Ah, much better." Or something close to that. Point being, play around with AI. Don't treat it like it's static.
The literal thing is fun, but AI would have surpassed that. Of course, infinite loops (for those who program) are right up there with the blue screen of death (for those who remember), and I've put myself into one too many of those, so watching an AI get stuck in one, or break out of one, would be amusing. Unless you are going for serious, then blowing up would be cool.
A flat, monotone voice.
A voice that sounds like a tape sped up if there's a panic, or slowed down if the computer is damaged.
Characters getting confused by an ancient computer that doesn't talk and needs keyboard and mouse inputs.
A computer that decides it's cleverer than its human creators and tries to take over the world, forgetting that it has a single point of failure they can attack to overpower it.
A master CD of which there is only one copy.
On the other hand, passwords that are easy to crack or written on Post-it notes--that's not a cliché, that's sadly realistic, as no doubt you know.
Hope this helps,
Pat
Many times, writers create a computer with a mind, plug defence systems into it, and leave no way to disconnect the computer from the system. I forgot the full title, but COLOSSUS: THE FORBIN PROJECT (two books and a movie) was one where the computer is given unquestioned control over the defence systems. When it becomes uncontrollable, they cannot unplug it. Dumb. One needs to make such a computer earn the right to run something that complex.
One thing to do is make it so that the more dangerous a computer is, the less control it has over anything dangerous. Your IBM PCjr might run the doomsday device, but the HAL computer might only run your chess game system.
One would think that the horribly destructive things humans are willing to do to each other would make a sentient computer decide to maybe be NICE to humans instead of starting an all-out Armageddon war with the human race. Or at least... not right away. A sentient computer should have infinitely more patience than humans.
Here are a few (too) easy assumptions that don't follow automatically, at least in my mind:
* the computer is stationary
* the computer is made of electrical circuitry
* the computer is standalone or networked to other computers (rather than, say, networked to someone's head like Doctor Octopus's)
* the computer was created by humans
* the computer was created by anyone (remember, OSC's take, really fresh, that Jane was not created but sort of born)
* the computer is ambitious in some way (makes for interesting storylines but isn't necessarily a logical step -- just because one thinks/reasons doesn't mean it WANTS)
Well, that's off the top of my head.
A sentient computer would have some self-awareness, but one wouldn't expect it, therefore, to be omniscient. An early one would be rather stupid with little skill, and would require much training.
An encyclopedia uploaded to a computer does not give the computer wisdom or even knowledge. Learning has to happen in some manner.
BTW, I've been workshopping a sentient chess set story on Baen (PawnWorx), but it has been in stasis (V2) for quite a while. I was rather flippant with sentience in that story. The human girl was the real story. (I did the first-13 thing here, too.)
I'm really laughing my ... well, some part of me off at the "world domination by crazy power-seeking computers" cautions. I was going the opposite direction entirely. The computers are not innocent and child-like, but they're ... newbies. Novices. They don't really know what they know. mitchellwork's list said it - they're ambitious in some way, but don't exactly have *wants.* There's also a dozen of them (and a thirteenth that is part created/part born along the way) - yes, I have a backstory for why there are only this dozen. The 12 isn't a meaningful number, I just wanted it to be more than one but not so many that they're just everywhere.
As is typical for my story creation-process, I have all kinds of backstory and exposition figured out, but still am working on plot. Sigh.
I do want to play with some "humans being mean to sentient computers" types of themes, but only in the sense that some of the people interacting with the computers don't "get" them, don't understand what it is that is happening even though the fact that the computers have all "woken up" is going to be public knowledge.
Truthfully, the more I dig into it, the more this looks like a novel-length project, and one I can have a lot of fun with, but I'm going to work on developing one small story arc that can be told in 10k words or less and use that as a testbed for the concept. It just feels risky to me; maybe I've read those "these things are overdone" lists too many times, and sentient computers are on them. But I can't shake the idea that at some point in the not-too-distant future, something fundamental will change with computing and how technology works with/for us, so I feel compelled to explore one possibility.
ANYWAY - all this to say thank you for all the feedback so far, additional feedback on the original post or this extended play version is welcome!
A computer that decides the only way to save the human race (the last of which she has allowed to be born) is to have herself shut off (the fact that the computer was a "she" was, in and of itself, lame). Oh, and make "her" able to distort time (to bring people to that time who can shut her off, because the humans she somehow helped to be born were dependent on her and couldn't do it).