As humans, we often entertain the fantasy of confronting intelligences that act against our interests. When it comes to using artificial intelligence (AI) ethically, we find ourselves at a tipping point: this technological advance promises unprecedented developments, but it also brings challenges and difficulties that warrant reflection and demand action.
These accelerated changes have generated a series of social problems, and society is struggling to keep pace with this new form of intelligence. What action should we be taking?
To start with, we need to establish moral, and probably also legal, limits on the use of artificial intelligence. If we don't, we risk this development working against civilization rather than advancing it.
Using artificial intelligence ethically
The debate around AI and its use is broad and complex. That said, there’s some agreement on certain moral norms in this regard.
1. Artificial intelligence at the service of human interests
An article published by the Real Academia de Ciencias Morales y Políticas argues that, from a humanist point of view, AI's task is to minimize people's avoidable suffering. This is where the concept of the malicious use of artificial intelligence comes into play: it refers to the potential dangers that the misuse of these programs poses to society.
Therefore, people's safety must be guaranteed, as must their privacy and identity. Otherwise, the precept of this section would be breached.
2. Avoiding the dictatorship of data
The collection of massive data sets (big data) drives progress in fields as disparate as medical technology and economic development. However, when data is used to bias and segregate the population, we speak of a 'dictatorship of data', according to a CSIC bulletin.
Continuing with the medical example, AI could collect huge samples of the results of a new treatment or of the incidence of health problems. In this scenario, we'd have to ask ourselves to what extent it's ethical for an insurer to access this data when quoting or providing coverage.
3. Respecting neurorights
Big data could also be used to predict human behavior, something heavily exploitable in the field of marketing. Therefore, to use artificial intelligence ethically, such data shouldn't become a tool that influences users' identity or cognitive freedom. These are our neurorights.
In the field of artificial intelligence, it's essential to ensure that neurorights are respected in the collection, storage, analysis, and use of brain data. This means obtaining informed and explicit consent from individuals before collecting their brain data. In addition, the privacy and confidentiality of the data must be protected, and it must be used ethically and responsibly.
Furthermore, respecting our neurorights means ensuring that AI isn't used to manipulate or unduly influence our identities, cognitive freedom, or autonomy.
This encompasses avoiding any discrimination, stigmatization, or manipulation based on brain data. It also implies ensuring that AI-based decisions and actions are transparent, explainable, and fair. However, this is quite a challenge, since most of the models artificial intelligence works with are opaque: they produce good results, but we don't know why.
4. Preserving human dignity
Certain jobs, especially caregiving roles, are considered unsuitable for AI and robots because they require the capacity for empathy and respect. For example, it wouldn't be ethical to subject an individual to therapy directed by artificial intelligence, nor to have it act as a police officer or a judge.
The concept of empathy in robots poses extremely interesting challenges, given the nature of human emotion and consciousness. Although robots can be programmed to recognize and respond to human facial expressions, tone of voice, and other emotional cues, they lack the ability to experience emotions and understand them the way we do.
A paper published in Economía y Sociedad explains that intelligent technologies are being assigned functions inherent to managing emotions. Consequently, like humans, they face a contradiction between moral duty and the scenario in which it must be applied.
5. Keeping sources open
One principle prevails in discussions of artificial intelligence: its code should be open and public. Moreover, its development shouldn't be in the hands of a few, since it's a technology that directly affects people's lives, social configuration, and even their culture. Transparency must be guaranteed and the malicious use of AI prevented.
Why should we use artificial intelligence ethically?
From national security to the use of an app, from politics to medicine, the use of artificial intelligence must be reviewed ethically and without bias. Any malicious use of it would not only create or amplify threats to our society but also deepen their negative consequences.
Since AI is changing our world and culture, those on the receiving end of it must develop, in parallel, a culture of responsibility and good usage. Contrary to what many people think, this doesn't only involve learning and applying cybersecurity measures.
In fact, promoting such a responsible culture is only the first step. It’s essential that governments and companies take measures for the ethical management of AI. Indeed, moral reflection has already begun and there have been some advances in this regard, as stated in Derecho Global.
The effectiveness of this reflection's conclusions will depend on whether they meet their humanist objective. For the moment, those who really dominate us are the humans behind the robots. Indeed, for now, behind all artificial intelligence lies human intelligence.