The proposal for a European Directive on improving working conditions in platform work requires that, in intensely digitalized environments, there is always human control. This means that a decision affecting a job cannot be left exclusively in the hands of artificial intelligence.
OpenAI, the company founded in 2015 by Elon Musk and other partners, has created a language model called ChatGPT. Visually, it operates like a chatbot similar to the ones with which we already interact in the customer service channels of the most prominent companies. However, there is a fundamental difference between it and those chatbots: it is capable of providing responses that, at certain levels, can hardly be distinguished from those a human worker would give. Its secret is that it can draw upon databases to locate the origin of the information requested and is capable of combining resources to generate responses. In other words, it genuinely performs research work, similar to that which any human could carry out.
The fact that it is a free resource, readily accessible to any user, together with its huge potential, has triggered a surge of uses in recent weeks, which seems to call into question the future viability of jobs based on tasks requiring documentation and data analysis. An offer of one million dollars, launched in the United States to anyone willing to let this artificial intelligence dictate the legal arguments used at a trial, has even gone viral.
We cannot deny that, in the coming years, this type of technology will pose a huge challenge for employment relations, and, given its huge creative capacity, will probably do so more quickly than was envisaged.
Perhaps for this reason, the proposal for a Directive of the European Parliament and of the Council on improving working conditions in platform work has been published very recently. One of the objectives of this legislation is to ensure that, in intensely digitalized environments, there will always be human control and a minimum of human contact. Employees would thus be protected from the adverse effects that automated decisions may have on their employment contracts. In fact, the proposal seeks to establish a kind of human "quota", ensuring that the employer will always have a sufficient number of highly trained persons whose function is to monitor automated decisions regarding the workforce that have an impact on employment. Human control is thus proposed as the ultimate guarantee mechanism: it must have the capacity to annul and overrule automated decisions, and it must be able to promptly provide an explanation of any decision adopted that significantly affects employment.
In conclusion, the aim is that a job cannot be affected solely by a decision of artificial intelligence; rather, there must be at least one person in the company's structure who has supervised the decision adopted. And if, in that supervisor's opinion, the decision is erroneous, he or she can overrule it, thereby preventing damage to employment.
This legislation seems to embark on the path of a new list of minimum guarantees for employment conditions, seeking to provide "human" security mechanisms that defend employees from artificial intelligence. The employer is thus discreetly placed in the background: authorized to delegate certain decisions to artificial intelligence, although not with full freedom, since he must retain ultimate control.
In short, artificial intelligence will be able to take significant decisions regarding employment, provided that, before any such decision is implemented, it can be proven that it has been supervised by a human.
Garrigues Employment & Labor Law Department