Recently, we had the chance to follow a fascinating conversation between the famous writer Yuval Noah Harari and Fei-Fei Li, a computer science professor and co-director of the Human-Centered AI Institute at Stanford. The topic of their exchange was the future of AI: how it is going to evolve, and how it can best serve human interests instead of undermining them.
How Much Is Too Much?
One of the points Li addressed during the conversation was the need to build AI systems that can explain their processes and decisions. Harari countered that such technologies are already too complex to explain, and added that this level of complexity threatens both our authority and our autonomy.
Harari also touched upon limits: how much is too much when it comes to the information AI collects about you? He called this a real risk and a growing concern, both among people working in the field and beyond it. The famous writer underlined that these systems can collect important information about us and share it, even without our knowledge, with advertisers or even governments.
Can It Be Too Little?
At the same time, we should not forget that AI still has a long way to go before it reaches the milestones we fear. For example, one known issue with facial recognition systems is that they tend to identify white faces far more reliably than others; people with darker skin, or from underrepresented regions of the world, often have a hard time using the feature on their devices. That is not such a problem if you are only using it for leisure. But what if the same technology were integrated into security and identification systems? We wouldn't want it to misidentify people there, would we?