>I think it's more the case that it's inherently more difficult to make an
>alien character realistic. Turning humans into vulcans and klingons and even
>ALIENs is much, much simpler and cheaper. A vulcan probably costs ~$1000
>per day of shooting. Animating a CGI character (in say Dinotopia) probably
>costs at least that a *second*. I've no idea what animatronics like those in
>Farscape and the Ninja Turtle movies cost, but I doubt it's better.
>And you are beginning to see more alien-looking aliens. The bugs from
>Starship Troopers and the weird ghost things from Final Fantasy didn't look
>at all like the more traditional B-movie villains. ;)
Indeed you are correct as far as TV shows and low-budget movies are
concerned. And certainly even with large movies it is an issue, though Men
in Black has quite the cornucopia. However, even in books and games the
aliens still tend to favor either humanoid or Terran-derived creature
designs. And the portrayal of very alien minds is something that's quite
lacking. There are exceptions to be sure, but for every alien that's truly
alien there must be a dozen cat warriors. There is another reason as well:
when you use anthropomorphic characters, the animal acts as shorthand for
the characters' personalities. The previously mentioned cat warriors, for
example, are easy to typecast. You pretty much know exactly what to expect.
Though I'd be tempted to have a cat race and make them vegetarian scholar
types that abhor violence of all kinds, but that's just me.
>Traditionally, a mind has been something humans have that animals don't.
>Hence people tried to define it in those terms, and a lot of what a mind
>actually is got put down as 'something only stupid mindless animals do'.
>And if X is something mindless animals do then X can't possibly be anything
>to do with the mind, can it? Other new approaches, such as MRI scanners,
>offer newer insights. The mind may not be such a mystery in a few hundred
>years.
>An organisation's ability to write programmes has always been dependent on
>the number of programmers it can employ, which in turn has been dependent
>on money. Until now. No one organisation paid for the development of Linux.
>No one organisation paid for the processing power that SETI@home can
>utilise. It *might* be possible to programme a mind this way. If a million
>monkeys can produce Shakespeare, why can't a million (or more) internet
>programmers produce a sentient computer programme?
True enough, though the problem isn't just the sentience. It's that humans
are the result of both a complex underlay of programming and capability and
an enormous amount of information processed over the span of their lives.
While the first part may be possible, generating a truly human-like
intellect may require that the AI also develop over time much as a real
human does.
>Actually computers have begun to pass the Turing test. All the programmes
>needed to do was sit back for a second and remember how stupid humans
>could be. ;) Then it became a simple matter to make the human think down
>to the level of the computer programme and they believed the computer was
>as smart as they were. Such tests have since been modified to prevent this
>from happening.
Only in the crudest sense. The real Turing test is the pen-pal test. If
fooling a human for a few moments, or in a very limited sense, were enough,
then ELIZA would have done it. ;) An AI that truly passed the Turing test
could establish a rapport with a person and maintain it for months or years
without the person ever suspecting that the machine wasn't a person.
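For what it's worth, the reason ELIZA-style programs can only fool people briefly is visible in how simple the technique is: keyword matching plus pronoun "reflection", with no memory and no model of the conversation. A minimal sketch of that technique (the patterns and responses here are my own invented examples, not ELIZA's actual script):

```python
import re

# Pronoun reflections: re-echo the user's words from the program's viewpoint.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you", "your": "my"}

# Keyword-triggered response templates; {0} is filled with the reflected clause.
RULES = [
    (re.compile(r"\bi feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"\bi am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"\bmy (.*)", re.I), "Tell me more about your {0}."),
]
DEFAULT = "Please go on."

def reflect(clause: str) -> str:
    """Swap first/second-person words so the echo reads as a reply."""
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in clause.split())

def respond(line: str) -> str:
    """Apply the first matching rule; fall back to a stock phrase."""
    for pattern, template in RULES:
        match = pattern.search(line)
        if match:
            return template.format(reflect(match.group(1).rstrip(".!?")))
    return DEFAULT
```

Each reply is generated from the current line alone, so nothing accumulates from turn to turn. That's exactly why it can't pass the pen-pal test: over months of correspondence the lack of memory and genuine understanding would become obvious.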
>They're talking about replacing tank tracks with composite bands. Such a
>tough, flexible material would probably make an ideal replacement.
Possible, though self-repair would be critical to any kind of android.
Otherwise wear and tear would inevitably cause serious issues, especially
given the limits inherent to a human-sized mechanism.
>This would be easy to get wrong. What people think will make them happy is
>not always what will make them happy. The robot would effectively have to
>be smarter than its owner to figure out when no means *really* no, and when
>it means not now, not yet, maybe, take me now, etc - and which one of these
>options would make their owner the most happy. Of course making the owner
>too happy could have an adverse effect on their work, get them fired, and
>mean they have to give up their love-robot. Maybe the robot would have to
>manage or ration their owner's happiness to prevent this.
>Could end up with some very manipulative situations...
Well, perhaps not smarter, but certainly possessed of a human-like ability
to read body language and intuit the feelings and moods of its owner.
Presumably it would learn from any mistakes and over time develop the level
of rapport needed to have such insight.
>Not sure I understand this. Is there a word missing or something?
I may have worded it badly. Basically humans will tend to project their own
feelings and viewpoint onto others. One could argue that there is no real
empathy, but simply the projection of our own feelings onto others. This
can be extended beyond other humans, though. Take, for example, the movie
Bambi. Despite being both obviously animated, and a deer, the death of
Bambi's mother still wins polls as the saddest moment in movie history.
That's because people overlay their own feelings onto the character,
despite the character being fake. If a computer could seem sufficiently
human-like, it wouldn't matter whether the AI itself had real feelings for
its master. The human, being human, would project those feelings onto the
AI. This is actually a big part of the thematic material in the movie A.I.
>Oh, give the human imagination some credit... ;)
Indeed, and yet things often turn out in a manner other than what we
imagine. I suppose we could imagine anything, and yet as the saying goes,
truth is often stranger than fiction.
ANTIcarrot.
Received on Thu Aug 07 2003 - 18:01:09 CDT