One publication that makes its way almost daily into my inbox frequently uses the phrase “AI Ethics.” My objection to this phrase may seem overly nitpicky, but the fact is that Artificial Intelligence (AI), in and of itself, has no ethics, just as it has no common sense. The real issue is what I would label the ethical use of AI applications and systems, a title that describes a completely different, real, useful, and even quantifiable concern. Reliance on AI to assist — or even drive — decision-making is becoming increasingly prevalent wherever all but the smallest amounts of data must be analyzed.
Another pet peeve of mine is the claim that a system — particularly one built on AI underpinnings — can be without bias. (Not the case. Ever.) We all have biases; some are more easily recognizable than others, but that’s another issue. No person or entity should ever claim that bias has been eliminated. Our biases come from what we’ve learned, both consciously and unconsciously, including how we’ve learned it and the experiences we’ve had or observed in others. Again, eliminating bias isn’t possible. What is possible is recognizing and managing it. Most biases can — and must — be identified so that those using these potentially powerful (read: life-changing) tools can use them in the most beneficial way and minimize the risk that bad information — or even good information used improperly — leads to bad decisions and outcomes.