> From: Andrew Priest [mailto:apriest_at_netidea.com]
> ➢ 'Cos it's so easy to dress someone up in a bad costume and
> churn out another B grade Sci-Fi... :-) Tends to follow
> our thought trains (as discussed before) that humans are
> the superior race (like we all believe we are 'above
> average' drivers...), hence creatures that are 'similar' to
> humans (anthropomorphic even, meaning having human-like
> characteristics!) are more likely to be accepted by our
> psyche as being possibly able to be smarter, faster, more
> able to run the universe or whatever than we are. How many
> aliens have you come across recently that don't follow the
> 'human' body model, even if they are a different size or
> have a few different appendages? Don't see many 1" flying
> salmon trying to take over the world? Equally as likely as
> some 6" tall fox from another galaxy, but our mindset says
> otherwise...
>
> I think you’re being overly harsh. Or I’m being too kind. Still,
> it’s inherently difficult to wrap one’s mind around a truly alien
> perspective.
I think it's more the case that it's inherently more difficult to make an alien character realistic. Turning humans into Vulcans and Klingons and even ALIENs is much, much simpler and cheaper. A Vulcan probably costs ~$1000 per day of shooting. Animating a CGI character (in, say, Dinotopia) probably costs at least that a *second*. I've no idea what animatronics like those in Farscape and the Ninja Turtles movies cost, but I doubt it's any cheaper.
And you are beginning to see more alien-looking aliens. The bugs from Starship Troopers and the weird ghost things from Final Fantasy didn't look at all like the more traditional B-movie villains. ;)
> ➢ A simple example (maybe too simple?). Many of you have a
> dog (or know someone who does). Does it have
> annoying habits, cleanliness, etc, that it would probably
> bring with it in anthro form that would really grate?
> I’d argue the problems run deeper.
Probably not many. At this point you need to separate dog instincts from solutions to specific problems caused by the design of their bodies. Rolling in the dirt is a good way to give yourself a dust bath. Rolling in sh*t is a good way to hide your scent from prey. In fact these could both be learned behaviours. This is different from *real* instinct, such as the urge to follow a fast-moving object with the eyes and similar. Humans of course have a similar instinct, and it hasn't stopped us from socialising with each other.
And doggie-breath and similar can depend on the individual dog. Some aren't that bad. And many humans are a lot worse.
> It’s also tough to imagine what a machine AI would be like. It’s
> quite possible that to achieve human-like quality an AI would
> have to develop mentally much like a human does. Programming
> something as intricate as a mind may be beyond us in sheer scope.
At present I'd agree, but only because of money and our current lack of understanding of what a mind is.
Traditionally, a mind has been something humans have that animals don't. Hence people tried to define it in those terms, and a lot of what a mind actually is got put down as 'something only stupid mindless animals do'. And if X is something mindless animals do, then X can't possibly be anything to do with the mind, can it? Newer approaches, such as MRI scanners, offer fresh insights. The mind may not be such a mystery in a few hundred years.
An organisation's ability to write programmes has always been dependent on the number of programmers it can employ, which in turn has been dependent on money. Until now. No single organisation paid for the development of Linux. No single organisation paid for the processing power that SETI@home can utilise. It *might* be possible to programme a mind this way. If a million monkeys can produce Shakespeare, why can't a million (or more) internet programmers produce a sentient computer programme?
> Passing the Turing test
> will require an AI that’s very flexible and adaptive.
&
> An old trick, though it will require more than that to pass the
> Turing test
Actually, computers have begun to pass the Turing test. All the programmes needed to do was sit back for a second and remember how stupid humans could be. ;) Then it became a simple matter to make the human think down to the level of the computer programme, so they believed the computer was as smart as they were. Such tests have since been modified to prevent this from happening.
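The trick described above is basically the old ELIZA approach: the program never understands anything, it just reflects the human's own words back and lets them do the thinking. A minimal sketch (the patterns and reflection table here are purely illustrative, not from any real chatterbot):

```python
# ELIZA-style deflection: reflect the speaker's words back as a question,
# so the human carries the conversation and fills in the intelligence.
import re

# Swap first person for second person so echoed text reads naturally.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

def reflect(text: str) -> str:
    """Flip pronouns: 'sad about my job' -> 'sad about your job'."""
    return " ".join(REFLECTIONS.get(w, w) for w in text.lower().split())

def respond(line: str) -> str:
    line = line.strip().rstrip(".!?")
    m = re.match(r"i feel (.*)", line, re.IGNORECASE)
    if m:
        return f"Why do you feel {reflect(m.group(1))}?"
    m = re.match(r"i am (.*)", line, re.IGNORECASE)
    if m:
        return f"How long have you been {reflect(m.group(1))}?"
    # No pattern matched: deflect and invite the human to keep talking.
    return "Tell me more about that."
```

A handful of patterns like these was enough to convince some people they were chatting with a person, which is why later Turing-style tests added trained interrogators.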
> Yeah, longevity will be a huge issue for androids. Even now, for
> example, the high tension steel tendons they use in robotic hands
> don’t last so very long.
They're talking about replacing tank tracks with composite bands. Such a tough, flexible material would probably make an ideal replacement.
> ➢ Imagine having one of the sisters as a life sized robot
> (for lack of better description at this stage), that looked
> exactly like you'd expect her to (as per JMH's drawings)
> and reacted just like you'd want her to when you spoke to
> her etc. And reacted the way you'd like in other
> situations too!
Hmm. Not sure I'd want a robot with *quite* that many teeth, but I take your point.
> Imagine it went beyond that. Imagine it learned your likes and
> dislikes, moods and behavior, and evolved to suit you better and
> better.
This would be easy to get wrong. What people think will make them happy is not always what actually will. The robot would effectively have to be smarter than its owner to figure out when no means *really* no, and when it means not now, not yet, maybe, take me now, etc - and which one of those would make the owner happiest. Of course, making the owner too happy could have an adverse effect on their work, get them fired, and mean they have to give up their love-robot. Maybe the robot would have to manage or ration its owner's happiness to prevent this.
Could end up with some very manipulative situations...
Many stories on this theme can be found at the following address. They are almost all adult, so the usual warnings apply.
http://home.att.net/~DB_Story/Contents1.html
> Given the ability of humans to project
> onto other things their own feelings and desires… the possibility
> of real love on the human’s part is very strong. As Spielberg
> said in an interview on the AI DVD, it wouldn’t be how machines
> considered humans, but how humans considered machines.
Not sure I understand this. Is there a word missing or something?
> I believe
> that a sufficiently realistic robotic creature could, even with
> its artificiality, induce very real feelings within humans.
Granted. Though I think function might be more important than appearance.
> ➢ Refuelling could probably be handled by dropping what ever
> was required into their mouth and swallowing it: "I'd like
> 1/2 litre of synthetic hydraulic oil and a serve of AA
> batteries please..." would probably be an unintrusive
> method. A give away they weren't human, other than they
> were a 5' tall talking skunk...
They'd probably use induction charging. Otherwise it would be simple to hide a charging port inside an ear or similar.
> I don’t think the effect of such high-level machine
> beings on humanity is even possible to know absolutely. It could
> alter society in ways we can’t even imagine.
Oh, give the human imagination some credit... ;)
ANTIcarrot.
Received on Thu Aug 07 2003 - 10:40:42 CDT