Poor Ruk - the tall, haunting android from the classic episode "What Are Little Girls Made Of?" This unforgettably tragic figure tended the machines of his creators for centuries ("How many? Even Ruk doesn't know.") He survived the purge of his kind and their creators, serving their memory timelessly like dust yet was finally brought down, not by time or elements, but phasered out of existence by his "programmer" Dr. Korby.

The old boy deserved better. From Trek's beginning though other androids were similarly foiled by humanity's foibles, like Norman (plus the rest of the Mudd's series in "I, Mudd"), Reena ("Requiem For Methuselah"). Many of these were incapacitated if not destroyed by human stresses and should have been better designed. Andrea could not love (says Korby) though she dies in their embrace, yet Flint's creation Reena could love - and could be similarly destroyed obviously.

I mean really, "... but if everything Harry says is a lie, then he is telling the truth, but..." doesn't seem immune to any sophisticated, human-level intelligence. Any being incorporating one should be capable of detecting human weaknesses (exemplified in proximate abundance by Harcourt Fenton Mudd) and joots itself out of such a simple loop. Even as a child watching "I Mudd" I guessed Norman should just conclude Kirk was lying about Harry and pop! No conflict.

So in their way these androids symbolize innocence. Strong, youthful and beautiful (OK Ruk was no looker!) they are usually also simple in speech and not necessarily difficult to confuse or fool. Ruk altruistically helped Korby build Brown, then Andrea, finally the android-Kirk (and of course re-build Korby's frozen body). Were these later units capable of deception? Korby was, Brown was, Andrea's hard to tell - but android-Kirk intentionally sets on a path of deception. Arguably so did Norman (on his mission for Mudd to hijack Enterprise).

Complex artificial life, like friendly artificial intelligence, appears right around the technological corner. The famous Turing test (in which a human - or conceivably, any other intelligent operator - attempts to evaluate prospective "intelligence" via comparison with a reference "intelligence") is itself founded on deception; i.e., why should an AI be judged intelligent only after it can fake being human?

In fact, if I read it right, Turing's original paper describing the procedure has the target "intelligence" actually answering an arithmetic problem incorrectly - and after a suitably human-like delay! Such a riddle, to find in humans these complementary capacities for deception and trust. Are they, in fact, an integral part of intelligence?


Back to Dr.TOS
Back to top