Jack Morton's article about how accountable Facebook can be held for using pre-existing facial-recognition technologies led me to a web barrage of strong opinions about Facebook's new auto-tag feature. Most of the articles online take the position that the new facial-recognition option on Facebook is straight-up terrifying.
But it's not as simple as that.
Isn't it inevitable? In the natural evolution of tech innovation, wasn't this always going to happen eventually? And is it really that dangerous, or is it just a faster, easier way of doing things that were already possible? How do ethics play a part, and if this is an ethical question, what responsibility does Facebook have to "protect" its consumers when there's no real data supporting the idea that the feature is unsafe? Besides, Facebook isn't a machine -- it's run by hundreds of humans who will also be subject to this service.

In my opinion, the only creepy part is that Facebook will essentially control a database of billions of people around the world, searchable by almost any piece of information. But they don't really control that information if any user can search it with the same tools.

Motive is the key ethical standard here. If the technology exists and can be put to use with the sole goal of simplifying a product, then shouldn't it be? It would seem strange to me to let a major innovation sit on the shelf gathering dust just because putting it to use feels a little I, Robot.