I concur with ChrisOwens. Nanotechnology is a way for my characters to work magic, but not in magical terms. As Arthur C. Clarke put it, "Any sufficiently advanced technology is indistinguishable from magic."
(By the way, does anyone recall when the term "nanotechnology" was coined, who did it, and under what circumstances? The earliest I can remember is a "Star Trek: The Next Generation" episode from about 1990, but that can't be it---the term was already familiar to me, so I must have encountered it somewhere in SF before that point.)
[This message has been edited by Spaceman (edited December 19, 2006).]
This limit has already, or almost already, been reached with computers. This is the reason that all of a sudden, instead of announcing faster processors, companies are announcing "dual core" technology.
I agree about the dubiousness of some of the claims for nanotechnology. They'll do a great deal---possibly in ways their eventual creators fail to anticipate---but probably won't do everything their creators claim.
Sure, we can build carbon fiber cords, but actual machines at nanotech sizes? We can imagine what we could do, but we can't do it yet. And those carbon fiber cords aren't up to the point where we can build a space elevator from them - yet.
Very small microtech, like the instrument you can thread in through the arteries in your leg to do brain surgery? Sure.
Sensors the size of pills that gather information as they pass through the body and radio the data out? Yeah, we have those.
But nanotech in the sense of microscopic AI robots? I think we are still a few centuries away - if it's ever going to be possible.
Pipelining, another "innovation" in chip design, was an early crack in the CISC approach that AMD and Cyrix introduced in response to the instruction set expansions of MMX and SSE. It basically uses a simple preprocessor to translate unsupported instructions into a series of supported instructions.
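For anyone curious what that translation step looks like, here's a toy Python sketch of the idea described above: a "complex" instruction gets expanded into a series of simpler supported ones. The instruction names and the decomposition table are made up for illustration; real chips do this in hardware with micro-ops, not in software.

```python
# Hypothetical decomposition table: each "complex" instruction maps to
# a sequence of simpler instructions the core actually supports.
TRANSLATIONS = {
    "MAC": ["MUL", "ADD"],      # multiply-accumulate -> multiply, then add
    "MOVM": ["LOAD", "STORE"],  # memory-to-memory move -> load, then store
}

def translate(program):
    """Expand unsupported instructions into supported sequences."""
    out = []
    for instr in program:
        # Instructions not in the table pass through unchanged.
        out.extend(TRANSLATIONS.get(instr, [instr]))
    return out

print(translate(["MAC", "NOP", "MOVM"]))
# → ['MUL', 'ADD', 'NOP', 'LOAD', 'STORE']
```

The point is just that the visible instruction set and what the silicon actually executes can be two different things.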
Ultimately, "Dual Core" is an illustration of the continued shrinking of computer components in terms of their physical dimensions. It is also an illustration of the failure of designers to find something useful to do with all the free space they have available, which is only because they haven't been using RISC from the beginning. Actually, the terms CISC and RISC are becoming irrelevant: it is impossible to benefit from true CISC design anymore, and technologies that allow a processor composed of multiple RISC cells overseen by a preprocessor have developed to the point where RISC is only implemented at the lower levels of the hierarchy.
But anyway, quantum effects do become a serious problem once you reach the femto-tech level, where you are working with individual atoms. They are not a significant problem for nano-scale devices, though. The worse problem is finding a way to provide input/output paths from a nanoscale processor to a conventional system, which is why nano-tech is so often depicted as operating the way a living organism does. In other words, it's like how your cells do all their complicated little cell things, but you don't control that and can't monitor the output of any given cell. Your higher-order architecture isn't nano-scale, and is largely independent of the nano-scale implementation.
quote:
Actually, computers are getting faster. The problem is that for the last several decades Intel has been pushing CISC based chip design rather than RISC design, despite the obvious (and ever increasing) superiority of RISC as an architecture. As gates inscribed on a single chip shrink due to technology advances (and the reasonable size for a chip increases due to manufacturing improvements) it simply becomes impossible for human designers to actually utilize all the space they have available unless they start to use a common strategy implied by RISC design, putting multiple copies of the ALU on a single chip and having them work in tandem.
Now, I'll admit most processors aren't that big. If a chip were 1/4" on a side, then the maximum speed would be about 16 GHz, ignoring the time to open and close gates.
The point is that this is not significantly faster than today's processors. Therefore, architectural changes are needed, such as multi-core or dataflow designs.
As Spaceman points out, you must use all the available space on the chip, or there is no reason to make the chip as large as is "feasible" in the abstract. That is the reasoning behind resorting to things like dual-core and cell designs. A conventional CISC processor could be "staged" so that fundamentally different elements work in tandem or in series on each processor cycle (and this has been done in the past). The problem is that doing this at the current size of a chip relative to its components is really beyond human ability, and probably has been for some time.
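To make the "staged" idea concrete, here's a toy model of why staging pays off. It's deliberately idealized (no hazards, stalls, or branch penalties), so treat the numbers as an upper bound on the benefit, not a claim about any real chip.

```python
# Idealized staging: with S stages working in tandem, one instruction
# completes per cycle once the stages fill, so N instructions take
# roughly S + (N - 1) cycles instead of S * N.

def cycles_unstaged(n_instructions, stages):
    # Each instruction passes through every stage before the next starts.
    return n_instructions * stages

def cycles_staged(n_instructions, stages):
    # Fill the stages once, then retire one instruction per cycle.
    return stages + (n_instructions - 1)

print(cycles_unstaged(100, 4))  # → 400
print(cycles_staged(100, 4))    # → 103
```

Nearly a 4x speedup from the same hardware, which is exactly why chip designers keep reaching for this kind of trick when raw clock speed stalls.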
People have created crystals of iron -- tiny slivers -- that can bend in half and snap back into place. They don't stay bent because there are no dislocations or impurities (empty space, I believe, is considered an impurity), as there would be in a regular bar of iron. They're tiny crystals, but they're real, and I can imagine someone putting them to some super-tiny cool uses.
Some people, instead of worrying about quantum effects, are trying to capitalize on them for quantum computing. They think that quantum computers might be able to, say, factor extremely large numbers (which would have major implications for cryptography).
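To see why factoring large numbers has such implications for cryptography, here's a toy RSA-style example in Python. The primes are tiny on purpose: with real key sizes, the factoring step at the bottom is classically infeasible, which is the entire point; with these numbers it's instant.

```python
# Toy RSA: security rests on the attacker not being able to factor n.
p, q = 61, 53                # private primes (tiny, for illustration only)
n = p * q                    # public modulus: 3233
e = 17                       # public exponent
phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)          # private exponent (modular inverse of e)

msg = 42
cipher = pow(msg, e, n)      # anyone can encrypt with the public key
assert pow(cipher, d, n) == msg

# An attacker who can factor n recovers the private key outright:
for cand in range(2, n):
    if n % cand == 0:
        p2, q2 = cand, n // cand
        break
d2 = pow(e, -1, (p2 - 1) * (q2 - 1))
assert pow(cipher, d2, n) == msg   # message recovered without the key
```

A quantum computer running Shor's algorithm would do that factoring step efficiently even for real key sizes, which is why cryptographers pay attention to the field.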
Legend has it that the scientists at Bell Labs who designed the transistor brought it to the engineers to build. The engineers took the design and started to work on it, while the scientists went back to their desks -- and figured out that it wouldn't work after all. They went to tell the engineers to stop working on this impossible device, but didn't get there until after the engineers had already built one.
Many theoretically impossible-to-cross barriers have fallen, so I tend not to care _that_ much about the barriers unless they're somehow relevant to the story.
I only really need the science to work if the science is (part of) the point of the story. If the science isn't the point -- if the story I'm trying to tell just needs some cool science to advance its plot -- then I don't really care how it actually works.
In other words, if any sufficiently advanced technology is indistinguishable from magic, then for any magic I need to create I might be able to pretend I've discovered some technology that's sufficiently advanced to implement it.
Your mileage may vary.
[This message has been edited by oliverhouse (edited December 21, 2006).]
And quantum cryptography seems to be faring much better than quantum computers, so the industry doesn't have much to worry about.
The first nanotech use that comes to my mind is movement. When you make something out of billions of conjoined nanobots, it can be quite flexible. You could basically have a shapeshifting robot, or why not try a shapeshifting planet? That could be interesting.
Things get more fun with quantumbots. They can be in multiple places at once, so long as you don't know where. And when they finally do figure out where they are they'll have no idea where they're going. And once that gets straightened out they could be popping up anywhere and firing electron beams randomly, some of which may be going backwards in time. Sounds like the empire's going to have a tough time tracking MY fleet... wherever it went.