name_here wrote:
And I also don't see any ethical issue in making an AI like to do what you want it to do that isn't encompassed by making an AI in the first place; if you don't write a utility function of some kind you do not have an AI, you have a bad random number generator. So your options are to make it like to do what you want it to do or make it like to do something else. Well, or you could fuck up trying to do option one.

This strikes me as kind of meaningless. Yes, any ethical issues concerning an AI's utility function are ethical issues concerning the creation of an AI, because (as you observe) you cannot create an AI without a utility function. The two cannot be decoupled. That doesn't make any hypothetical ethical concerns regarding utility functions go away; it just means you can't create any AI at all, ever, without bumping into them, because all AIs have utility functions or they aren't AIs, just fancy calculators.
There are a lot of dystopian sci-fi novels that boil down to "the government is making people happy and that's evil because it's a chip in their brain/drug in the water/subliminal on the telly and that's not real happiness." The idea of having the power to control what makes another intelligent being happy makes a lot of people pretty uncomfortable, even if being happy about things is basically awesome by definition.

Hypothetical: you are a mad scientist and you have a fetus in a test tube. You want to mad science at this fetus's development in such a way that it will develop into a person who is happiest digging precious metals out of the ground while having little to no regard for its own life or safety. Are you okay with that? Is this any different from an AI made out of flesh and blood instead of 1's and 0's?

Note: fuck tzor, fetuses aren't people and stem cell research is awesome. We need to be performing more mad science on fetuses, not less.
There is something kind of weird about deciding what makes another intelligent being happy, particularly when you are doing so for your own benefit. I'm certainly not against the creation and use of human-like AI, but I do have to admit that finding hairs to split between things I'm okay with and things that, at bare minimum, make me uncomfortable can get pretty fucking difficult.
Starmaker wrote:
Depending on how AIs will be made, you might not have full control over the utility function.

I'm betting that the first human-grade AI (i.e. an AI that makes people sit up and say "the future is now!") will just be a piecemeal emulation of the human brain jury-rigged together. I'm still super psyched that we made a functioning electronic hippocampus. Somewhere on this planet there is a microchip that contains digitized mouse memories. How fucking cool is that? I mean, sure, we can't read it, because nobody has figured out what file format mouse brains use to store information. But that information, unreadable though it may be, exists in 1's and 0's.
Omegonthesane wrote:
They probably thought the captured Greeks were somehow less of a person than they themselves were by having ticked a set of boxes which meant it was OK to make slaves of them.

To be less dickish about it than Kaelik: he is right, and you are absolutely projecting your very American understanding of slavery onto a civilization that is thousands of years older than that and whose institution of slavery is completely disconnected from the one you are talking about. To the Ancient Greeks, whether or not it was okay to own people was not an open question. The answer was "yes, duh, why are you even asking? Fucking weirdo, what's next, want to know if water is wet?" with a side of "but try not to be a dick about it if the person you own is rich or educated, which is redundant 99% of the time." Slavery was just another part of kicking people's asses and taking their stuff.