Artificial intelligence: the challenge of ethics and reliability

The potential of artificial intelligence (AI) in the field of health is undeniable. Its application makes it possible to increase the speed and precision of diagnosis and disease detection, optimize clinical care, facilitate the development of studies and medicines, strengthen research, support public health interventions, and contribute to health self-care, among many other applications.

But it also brings challenges and risks related to ethics and sustainability, such as the collection and use of medical data for unethical purposes; the security of that data; the risk of bias, both in how data are obtained (which may discriminate by gender, origin, etc.) and in how they are classified or made accessible; the use of these systems for purely economic ends; and their environmental impact.

In this regard, the report “Artificial Intelligence, Ethics and Society”, from the Artificial Intelligence Ethics Observatory of Catalonia (OEIAC), focuses on the two basic concerns associated with the use of AI. The first relates to the moral behavior of the people who design, build, and use it, who, after all, are trying to “imitate” human intelligence, which can lead to problems such as deception, data bias, or cognitive errors. The second relates to the behavior of AI systems themselves, which must be governed by ethical principles that allow them to make “moral decisions”.

The report adds that the ethics of AI goes beyond the moral design and implementation of systems: it must also take into account the social and cultural values that are expected to be integrated into AI designs, and that those designs in turn affect. These values relate to the digital divide, which prevents people in many parts of the world from participating in the design and development of these technologies and which may, in the long run, undermine the success of their real-world application.

The social character of each territory also conditions the development of ethical AI. The OEIAC report identifies three models, corresponding to the main powers in technological development. The first is the “AI for control” model, applied mainly in China, which uses the technology as a tool for social control and security.

The second model, “AI for profit”, predominates in the United States, where ethics takes a back seat to economic benefit, as it is “oriented towards the development and implementation of AI systems in which a few companies dominate most of the technology sector”. Finally, the “AI for society” model, adopted by the European Union (EU), opposes and distances itself from the previous two, putting user privacy and ethical principles before the technological development of AI.

The adoption of ethical AI in the EU has led to initiatives and measures aimed at regulating this technology and creating legislative frameworks that support its development. As a result, Europe currently has the most advanced legislation on personal data and promotes a policy centered on the right of individuals to decide how their data are used.

In 2019, the European Union published its “Ethics Guidelines for Trustworthy AI”, a document aimed at promoting trustworthy artificial intelligence, defined as lawful (complying with all applicable laws and regulations), ethical (ensuring respect for ethical principles and values), and robust (from a technical perspective, while also taking into account its social environment).
