There is a quite interesting article on the perceived dangers of AI by James Barrat over at io9. I think it is worth a read. Not because I agree with everything, but because it is well written. Also because this discussion, which has been brewing for a long time, really took off last year, and with the current boom in robotics and machine learning it is positioned to be one of the most interesting ones of 2015.
It also points out two things I always thought should be mentioned in a discussion on AI. First:
AI won't have to be identical to our brains to achieve intelligence any more than an airplane has to be identical to a bird to fly, or a submarine identical to a fish to swim.
I'd go even further myself, but it is a very nice formulation!
Second, Barrat also points out that companies might not have the highest motivation for discussing the ethics of AI. Not out of malice (I am very tired of the irresponsible-inventor or mad-scientist archetype), but simply because such a discussion is not built into the company process (though the article doesn't quite put it that way).
For such a discussion we'll need a quite new breed of programming and experimental philosophers, I believe, and my, will their job be interesting.