More and more companies are deciding to use artificial intelligence systems to manage their resources, a trend that raises a number of legal questions and a need for regulation, given the practical problems that may arise in the use of this technology. For instance, would the dismissal of a worker be valid if it were based on the results produced by a computer program?
Although algorithms are already commonly used as a staff-selection method (software that extracts precise information from resumes in order to determine the most suitable profile for a position), such computer tools may also help companies decide matters such as professional promotion, the salary to be received and even the application of a disciplinary regime.
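To make this concrete, the following minimal sketch shows, in deliberately simplified Python, the kind of keyword-and-weight scoring such a screening tool might apply to resume data. Every field name, skill list and weight here is a hypothetical assumption chosen for illustration, not a description of any actual product.

```python
# Minimal, hypothetical sketch of keyword-based resume screening.
# All field names, skills and weights are illustrative assumptions,
# not any vendor's actual method.
from dataclasses import dataclass

@dataclass
class Resume:
    name: str
    years_experience: int
    skills: set

REQUIRED_SKILLS = {"python", "sql"}  # assumed job requirements
EXPERIENCE_WEIGHT = 2                # assumed weight per year of experience

def score(resume: Resume) -> int:
    """Score a resume by skill overlap and capped years of experience."""
    skill_points = len(resume.skills & REQUIRED_SKILLS) * 10
    experience_points = min(resume.years_experience, 10) * EXPERIENCE_WEIGHT
    return skill_points + experience_points

candidates = [
    Resume("Candidate A", 4, {"python", "sql", "excel"}),
    Resume("Candidate B", 8, {"java"}),
]

# Rank candidates from highest to lowest score, as a screening tool might.
for r in sorted(candidates, key=score, reverse=True):
    print(r.name, score(r))
```

Even this toy example shows why the legal questions arise: the ranking depends entirely on which keywords and weights the programmer chose, choices the affected candidate never sees.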
In this regard, although it is true that article 20 of the Workers’ Statute states that an employer may adopt the means of control it deems appropriate in order to verify that a worker has complied with his or her labor duties, it is also true that the courts tend to analyze each specific case and apply a proportionality test to the control measures used. Nor should we ignore the fact that article 22 of the General Data Protection Regulation (GDPR) provides protection against decisions based solely on the automated processing of data, which adds even more complexity to this type of issue.
In this context, the UGT Trade Union Studies Service recently published a document entitled “Algorithmic decisions in labor relations”, in which it proposes a law on “algorithmic justice in labor relations”.
Specifically, the trade union considers that artificial intelligence and algorithms applied to labor relations should be treated as “high-risk” activities, as they affect workers’ rights. It therefore proposes the drafting of regulations governing certain issues, which may be summarized as follows:
- Establishment of a registry of labor algorithms for all companies, recording the independent computer solutions that affect the organization of work.
- Extension of the guarantees already established in the GDPR, so that decisions based on algorithms are understandable and accessible to the workers affected, as well as to their legal representatives.
- Mandatory auditing of all algorithmic decisions affecting workers (a minimal sketch of one such audit check appears after this list).
- Promotion of gender equality and diversity amongst the persons in charge of programming and auditing algorithms.
- Definition of a liability regime for legal purposes.
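As regards the auditing obligation above, the following minimal Python sketch applies one well-known check, the “four-fifths” disparate-impact ratio used in employment-selection contexts, to hypothetical screening outcomes. The groups, figures and threshold are assumptions for illustration; a real audit would of course be far broader.

```python
# Minimal sketch of one check an algorithmic audit might run: the
# "four-fifths" disparate-impact ratio from employment-selection practice.
# The group outcomes below are hypothetical, illustrative data.

def selection_rate(outcomes: list) -> float:
    """Fraction of candidates in a group who received a positive decision."""
    return sum(outcomes) / len(outcomes)

def disparate_impact_ratio(group_a: list, group_b: list) -> float:
    """Ratio of the lower selection rate to the higher one (1.0 = parity)."""
    rate_a, rate_b = selection_rate(group_a), selection_rate(group_b)
    return min(rate_a, rate_b) / max(rate_a, rate_b)

# Hypothetical outcomes of an automated screening tool, by group.
group_a = [True, True, False, True]    # 75% selected
group_b = [True, False, False, False]  # 25% selected

ratio = disparate_impact_ratio(group_a, group_b)
# Under the common four-fifths rule of thumb, a ratio below 0.8 flags
# a potential adverse impact that merits human review.
print(f"ratio = {ratio:.2f}", "-> review" if ratio < 0.8 else "-> ok")
```

A check like this answers only one narrow question and says nothing about why the rates diverge, which is precisely why the proposal pairs auditing with transparency obligations and a liability regime.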
This proposal, together with other studies now being published, shows that this is a matter of significant practical interest in labor relations that raises doubts yet to be resolved. Is it essential for a disciplinary decision concerning a worker to be reached by a human being? Are the results of a computer program actually objective, or can they be biased? Who is liable for the incorrect functioning of software?
What does appear clear is that, in order to avoid a lack of due process for both the worker and the employer, an agreed response from legislators is needed as soon as possible. In this regard, we can point to the agreement between the Government, trade unions and employers to approve the so-called Rider Law. First, this law will include a presumption of an employment relationship for delivery workers who provide services for companies that exercise algorithmic management of the service and of working conditions through digital platforms. Second, the law will require workers’ legal representatives to be informed of the rules underlying the algorithms and artificial intelligence systems that may affect the working conditions governed by the platforms, including access to and maintenance of employment and the preparation of profiles.
In any case, we will have to wait for the final wording of the Rider Law to be published to see whether the algorithms through which digital platforms are managed will have to be disclosed, or whether such valuable information for the companies that use them will in some way be safeguarded.