Real or artificial? Tech titans declare AI ethics concerns
I think we really need to sit down for a proper conversation on what we want to develop AI for. What is it that we want to replace with machines, and what is it that human beings will continue to do?
History has shown that if something is possible, human endeavor will eventually make it happen. Human beings can now fly, dive underwater, and go faster on land than any other land animal. We are now able to communicate across vast distances in real time. We can interact with people without having to be physically at the same location.
Still, humans have been the ones at the center of these actions. Humans decide where to fly, where to go diving, and where to travel on land. We decide whom we want to interact with, and what we want to talk about. But AI is slowly changing that. And if we do not come to a decision soon, AI will make more and more decisions for us.
And the danger of that? If we make fewer and fewer decisions, our minds will atrophy from lack of use.
Of course, this is not a danger if we can reuse that freed up brain power for something else. So the question is twofold.
1. What aspects of our lives are we willing to hand over to machines, so that we are freed from things we do not want (or are unable) to do, and can focus on the things we really should be focusing on?
2. What are the things that we should be focusing on?
Looking at these two questions, I think we can say that machine vision is a good use case for AI. It can help those who cannot see. It enhances our ability to spot minute details so that we do not miss them. But there is a downside: current machine vision is pre-trained before deployment, which means it will recognize only a finite set of objects. Any reliance on such technology therefore "narrows" our perspective to that finite set: our attention gets drawn to things the computer knows, making us more likely to miss other details we might have caught had we not relied on machine vision.
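The closed-set problem described above can be illustrated with a minimal sketch (the labels and scores here are hypothetical, not taken from any real model): a pre-trained classifier can only ever answer from its fixed label set, so anything outside that set gets forced into the nearest known category.

```python
# Hypothetical example: a classifier trained on a fixed label set.
# It has no "something else" option, so an unfamiliar object is
# always mapped to whichever known label scores highest.
KNOWN_LABELS = ["cat", "dog", "car"]  # fixed at training time

def classify(scores):
    """scores: one confidence value per known label, in order.
    Returns the best-scoring known label -- never anything new."""
    best = max(range(len(KNOWN_LABELS)), key=lambda i: scores[i])
    return KNOWN_LABELS[best]

# An image of, say, a fox still yields a label from the closed set:
print(classify([0.40, 0.35, 0.25]))  # → cat
```

This is exactly the "narrowing" at issue: the system is confident and useful within its training set, and silently wrong outside it.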
Let's look at machine transcription (speech-to-text). This definitely helps those who cannot hear. And for everyone else, it frees people from a mundane task so those resources can be applied elsewhere. But is there a downside? So far, I can't think of one, but who knows?
Autonomous driving is another use case that has been undergoing heavy development, and attracting a lot of attention in terms of ethics. Its central controversy: who is responsible should a self-driving car kill or injure someone? There is debate on this, and I hope people will take it up more actively, because at the current rate of development, self-driving cars are on the horizon. Let's sort out the ethical questions before we are forced to face the reality.
And finally, my biggest fear: left uncurbed, we will one day develop machines so intelligent that they no longer need human beings for their further development. They will be able to develop themselves into ever better versions capable of ever more things. Will we then find ourselves second-class citizens of our own societies?