Protection of personal data and security risks

The problem of protecting personal data is made more acute by users' lack of basic awareness, especially as AI spreads.

The Net, understood as a wireless connection of physical devices, has a non-territorial dimension that is increasingly difficult to monitor. The IT giants, Google in particular, keep introducing new features with ever greater impact on the personal sphere, as the circulation of online data, and of personal data above all, makes clear.

Hence the need, felt both within the European institutions and in civil society, to protect the personal data circulating on the Net. It is first of all necessary to distinguish personal from non-personal data, and then to build protection that is ever more effective in terms of user awareness. Users, in fact, lose sight of the value of their data when they weigh it against a service that is "free" in the sense of costing no money; the "Facebook – WhatsApp" cases, which arose both at the European Community level and before the national antitrust authority, are illustrative in this respect.


Today we speak of the "Internet of Things" to describe new objects connected to the Net that process enormous masses of online data (so-called big data). To capture the value of that data, the expression "digital oil" has been coined. This value is reinforced by the growing use of Artificial Intelligence across fields of knowledge: technology, chemistry, medicine, statistics, marketing, and so on.

Artificial intelligence and European law

The lack of consumer awareness contrasts sharply with the spread of AI. Machine Learning, one of the most developed and best-known branches of AI, creates its own rules for examining data through a process of self-learning based on statistics computed over the data the software holds. This makes all the more convincing the view that end users need greater awareness when giving consent to the processing of their personal data.
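To make the point concrete, the following is a minimal, purely illustrative sketch (not drawn from the article) of how a machine-learning model infers its own decision rules from the statistics of the data it is given, rather than from rules written by a programmer; the dataset, feature names and labels are entirely invented.

```python
# Minimal sketch: a model that derives its own rules from data (illustrative only).
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical "personal data" features: [age, hours spent online per day]
X = [[25, 6], [40, 2], [18, 9], [55, 1], [30, 5], [62, 0.5]]
# Hypothetical label to predict: 1 = likely to accept profiling-based ads, 0 = not.
y = [1, 0, 1, 0, 1, 0]

model = DecisionTreeClassifier(max_depth=2)
model.fit(X, y)  # the "self-learning" step: rules are inferred from the data

# The resulting decision rules were never written by hand; they emerge from the data.
print(export_text(model, feature_names=["age", "hours_online"]))
```

The point of the sketch is simply that the user who supplies the data rarely sees, or is able to anticipate, the rules the software derives from it, which is why informed consent matters.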

One sector that AI has certainly entered is the judiciary. In December 2018 the European Commission for the Efficiency of Justice (CEPEJ) published an ethical charter on the use of artificial intelligence in judicial systems. At a conference held in Lisbon on 28 March, Prof. Filippo Donati, a lay member of the Superior Council of the Magistracy (CSM), pointed out that even in a seemingly "uninnovative" sector AI raises problems, especially regarding the attribution of liability for conduct, and that European legislation does not yet provide the answers that the pace of innovation requires.

The European Commission's approach

Against this background, last year the European Commission issued a Communication to the European Parliament and the Council, "Artificial Intelligence for Europe", which aims to:

– to promote the EU's technological and industrial capacity and the adoption of AI by all economic operators;

– to prepare for the socio-economic changes induced by Artificial Intelligence;

– to ensure an appropriate ethical and legal framework, based on the values of the Union and in line with the Charter of Fundamental Rights of the European Union.
