1 Introduction

The regulation of Artificial Intelligence (AI) has drawn the attention of stakeholders worldwide. In Brazil, the subject gained visibility between 2019 and 2021, when three bills concerning the regulation of AI began to be discussed in the National Congress. In 2022, the discussion intensified with the creation, by the Federal Senate, of a commission of jurists responsible for gathering information to draft a regulatory framework for AI. Amid these debates, many researchers have overlooked the fact that the legal regulation of AI depends, among other elements, on decisions by the Judiciary in disputes involving the different applications of the technology, rather than solely on general laws that deal directly with regulation.

In addition to this gap in the discussions on the legal regulation of AI, there is another gap in the discussions on the use of AI by the Brazilian Public Power: it is still not completely clear to researchers how AI applications have been used by national public offices. In the case of the Brazilian Judiciary, Salomão et al. [22] attempted to map AI use cases. Investigations such as this are valuable but do not provide us with a clear understanding of how this Power actually deals with AI in concrete situations where the technology in question may have caused objective harm to specific individuals.

To partially address these issues, our research question is: When analyzing cases involving facial recognition systems, does the Judiciary of the State of São Paulo evaluate the transparency of these systems and their cybersecurity level against fraud? Guided by this question, we applied web scraping techniques to collect data from the Court of Justice web portal and analyzed the arguments presented by the Judiciary Power to justify its decisions involving the use of facial recognition (FR) systems for credit loan contracts. We also assessed how clearly the use of such technologies is understood by both the Judiciary Power and the contracting party of the credit.

We chose to study facial recognition systems because, as Daly [6] points out, they are an example of an AI application that tangibly affects individuals’ lives, allowing people to grasp the technology concretely, even though it may otherwise seem abstract. The choice of the State of São Paulo is justified by its status as the most prosperous state in the country, potentially making it the state with the most frequent use of AI applications. Finally, we discuss the challenges of transparency in AI, as we believe that ethical AI is impossible without it.

In addition to this introductory section, the article comprises three additional sections. Section 2 presents the methodology used to scrape data on the decisions of the Judiciary in the State of São Paulo and includes the methodology employed to analyze the collected data. Section 3 presents and discusses the obtained results, highlighting the dependence of facial recognition systems on sensitive personal data and noting that the Judiciary of the State of São Paulo has given little importance to the requirement of free and informed consent for the treatment of such data. Section 4 concludes the article.

2 Materials and Methods

An exploratory study was conducted to investigate the factors that influence the decision-making of the State of São Paulo’s Court of Appeal on subjects related to Artificial Intelligence [10]. The study was based on data collected from the decisions stored in the Court of Appeal’s services portal (e-SAJ)Footnote 1 using a web scraping technique, which consists of extracting online data to obtain content of interest in a systematic and structured manner [11].

In this research, most of the data collection and treatment procedures were automated using computer programs. The technologies used fall into two categories: (1) web scraping tools and (2) data treatment code.

2.1 Web Scraping Tools and Frameworks

To scrape the data from the Court of Appeal proceedings, an Application Programming Interface (API) named TJSP was used. It is a community open-source application published on GitHub [8]. The main goals of the TJSP tool are to collect, organize, and structure data from first- and second-instance judgments of São Paulo’s Court of Appeal, offering different methods for various data extraction purposes. The tool is written in the R programming language, and the RStudio integrated development environment (IDE) was used to manipulate the code and configure the API for our research, defining the search terms and authentication properties.

The API was used to extract data from the Court of Appeal web portal proceedings by searching for all decision documents containing the term “Artificial Intelligence,” and it was extended to deliver the data in a structured and organized file. After defining the search terms, the following steps were taken: (1) download the corresponding HTML pages with all the processes from São Paulo’s State Court of Appeal web portal; (2) from the obtained HTML files, identify the information about the processes, returning fields such as class, subject, rapporteur, court district, judging body, date of judgment, date of publication, case, summary, and code of the decision; (3) store the retrieved information in a data frame, where each column represents a specific field and each row represents a process; (4) convert the data frame into a spreadsheet for qualitative and quantitative analysis.
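The four steps above can be sketched in Python. Note that the actual research used the R-based TJSP API; the parser below is a hypothetical stand-in whose field names mirror those listed in step (2), while the real e-SAJ markup and selectors are not reproduced.

```python
import pandas as pd

# Hypothetical sketch of steps (2)-(4); the actual research used the R-based
# TJSP API. parse_decision is a stub: a real parser would walk the e-SAJ HTML.
FIELDS = ["class", "subject", "rapporteur", "court_district", "judging_body",
          "judgment_date", "publication_date", "case", "summary", "decision_code"]

def parse_decision(html: str) -> dict:
    """Step (2): extract process metadata from one downloaded HTML page."""
    return {field: "" for field in FIELDS}  # placeholder extraction

def build_spreadsheet(pages: list[str], out_path: str) -> pd.DataFrame:
    records = [parse_decision(page) for page in pages]  # step (2)
    df = pd.DataFrame(records)                          # step (3): one row per process
    df.to_csv(out_path, index=False)                    # step (4): file for analysis
    return df
```

A CSV file stands in for the spreadsheet here; the structure (one row per process, one column per field) is what matters for the subsequent analysis.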

The collected decisions refer to 190 (one hundred and ninety) matching results, comprising data from 2012 to 2023, related to the most general terms in the AI context: “inteligência artificial” and “artificial intelligence”; “aprendizado de máquina” and “machine learning”; “aprendizado profundo” and “deep learning”. Each term was searched in Portuguese and English, in both singular and plural. These results span from the first decision document registered on the portal, on June 24th, 2010, to the latest scraping round, performed on April 20th, 2023.

2.2 Data Treatment

Among the information retrieved from the decisions there are different types of data, including integers, date-times, and character strings. Given the goal of our study, and to facilitate the qualitative analysis, we applied different data treatment techniquesFootnote 2, divided into pre- and post-processing approaches. These approaches were applied to the “Jurisprudence” column, which contains the decision text. The cleaning routines were developed in the Python programming language and deployed in the Google Colab IDE, to take advantage of the performance benefits offered by the platform and to make them available to the research team.

Data Pre-processing.

The pre-processing and cleaning of data are the most critical and time-consuming parts of a web scraping project. Obtaining a good view of the data collection usually requires multiple iterations of cleaning, transforming, and visualizing the data before it reaches a suitable form [27]. Techniques for preprocessing a data collection, that is, for preparing, cleaning, and transforming data from its initial form into a structured and context-specific one, range from the simplest to the most advanced. In our research, we applied data mining techniques to clean the “Jurisprudence” column and search its content for repetitive or redundant parts, in order to improve the qualitative analysis.

The decision content follows a standard structure containing repetitive or redundant sections. These sections include: (1) titles, headers, references, and terms that are identical for all documents, such as “Judiciary Power,” “Court of Appeal of the State of São Paulo,” “Electronic Signature,” “Decision,” among others; (2) appeal and vote numbers, which differ for each decision but will not be used in the analysis. These numbers are identified by being preceded by terms such as “appeal n,” “civil appeal n,” “appeal.,” “record:,” “vote n,” and others; and (3) document page identifiers, such as current page numbers, referenced section page numbers, or total page numbers of the decisions. These identifiers are preceded by sections such as “(fls.,” “(fl.” Using the previously mentioned cleaning techniques, these expressions were removed from the results spreadsheets.
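A minimal sketch of this cleaning step using Python regular expressions follows; the patterns are illustrative reconstructions of the three categories above, not the exact expressions used in the research.

```python
import re

# Illustrative patterns for the three categories of redundant content;
# the real set was tuned to the TJSP decision layout.
BOILERPLATE = [
    r"Poder Judiciário",                                  # (1) fixed titles/headers
    r"Tribunal de Justiça do Estado de São Paulo",
    r"Assinatura Eletrônica",
    r"(?:apela[çc][ãa]o(?: c[íi]vel)? n|voto n|registro:)\s*[\d./-]+",  # (2) appeal/vote numbers
    r"\(fls?\.\s*[\d/]+\)?",                              # (3) page identifiers like "(fls. 12)"
]

def clean_decision(text: str) -> str:
    """Remove repetitive sections from one decision's 'Jurisprudence' text."""
    for pattern in BOILERPLATE:
        text = re.sub(pattern, " ", text, flags=re.IGNORECASE)
    return re.sub(r"\s{2,}", " ", text).strip()  # collapse leftover whitespace
```

Running each pattern as a substitution and then collapsing whitespace keeps the substantive text of the decision intact while dropping the boilerplate.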

Data Post-processing.

Once the data cleaning stage is complete, we move on to post-processing the information. In order to derive insights from the data, it is imperative that the data is presented in a format that is readily analyzable. This requires the data to be not only clean and consistent but also well-structured, aiding in addressing inconsistencies in scraped data [14]. This stage involves structuring the collected data according to the research context, or in other words, highlighting and grouping the most important parts of each decision to streamline and aid in the accuracy of manual work done by analysts.

In the context of data post-processing, automated procedures were developed to improve the qualitative analysis by identifying relevant terms and content that make it easier to discover the topic addressed in each process and to determine whether or not it should be part of the ongoing research analysis scope. (1) From each decision, all paragraphs in which the searched term appears were selected and added to a new column; (2) it was identified that some decisions refer to others by the number of the respective process, so, for each decision, references to its process number in other documents were searched, and this information was also stored in a new column of the results spreadsheet. Another procedure developed was the grouping of decisions by rapporteur, also using Python modules for data manipulation; this improvement made it more practical and efficient to perform a qualitative analysis of the results in a critical and comprehensive way.
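The first two procedures can be sketched with pandas as below; the column names (`jurisprudence`, `process_number`, `rapporteur`) are assumptions for illustration, not the actual spreadsheet headers.

```python
import re
import pandas as pd

def matching_paragraphs(text: str, term: str) -> str:
    """(1) Keep only the paragraphs in which the searched term appears."""
    return "\n".join(p for p in text.split("\n") if term.lower() in p.lower())

def cited_by(df: pd.DataFrame, process_number: str) -> list[str]:
    """(2) List the other decisions whose text references this process number."""
    mask = (df["jurisprudence"].str.contains(re.escape(process_number))
            & (df["process_number"] != process_number))
    return df.loc[mask, "process_number"].tolist()

# Grouping by rapporteur for the qualitative review, e.g.:
#   df.groupby("rapporteur").size()  ->  number of decisions per rapporteur
```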

Also, aiming to continue collecting data, to improve the usefulness of the information, and to keep the research dynamic with up-to-date results, a periodicity for repeating the scraping procedures was established. For this, further methods were developed as Python programs: (1) a results-counting mechanism, which tallies the number of terms with matches and the number of matches per term, comparing these figures with those of the previous scraping round; these data are inserted into a new spreadsheet and updated every month; (2) in each results spreadsheet, processes that are new relative to the previous scraping are marked with the character ’*’, so that analysts can easily identify new results and avoid redundancies.
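Both mechanisms reduce to simple comparisons between rounds, sketched below; the function and variable names are illustrative.

```python
def count_deltas(current: dict[str, int], previous: dict[str, int]) -> dict[str, int]:
    """(1) Compare the number of matches per term with the previous round."""
    return {term: count - previous.get(term, 0) for term, count in current.items()}

def mark_new(current: list[str], previous: list[str]) -> list[str]:
    """(2) Mark process numbers absent from the previous round with '*'."""
    seen = set(previous)
    return [num if num in seen else num + " *" for num in current]
```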

2.3 Qualitative Analysis Method

The main procedural methods adopted were analytical, comparative, and, especially, monographic. This took the form of a longitudinal study of the specificities of each case, investigating the factors that influenced decision-making in each individual event, based on sectoral analysis, and drawing generalizations about similar events. This article examined all decisions of the State of São Paulo Court of Appeal that contain the following six terms: “inteligência artificial” and “artificial intelligence,” “aprendizado de máquina” and “machine learning,” and “aprendizado profundo” and “deep learning.” The search retrieved 190 items, of which 41 (21.58% of the results) were analyzed in depth for the purposes of this ongoing research. These analyses focused specifically on banking matters, with particular attention to disputes involving the validity of digital payroll loan contracts executed through facial biometrics. The first decision analyzed was issued and published on August 30, 2021, and the last (according to the data scraping conducted in April 2023) on April 20, 2023.

Using the monographic method, this research will present the standard arguments issued by each of the rapporteurs in the decisions referred to, justifying the validity of the loan contracts. Firstly, the existence of a legal relationship between the bank and the consumer will be analyzed, as well as the alleged lack of voluntary consent. This will provide a generalized framework of the views adopted by judges regarding facial recognition technology (FRT) and its use in digital banking contracts in Brazil.

Finally, the analysis of the Appeal Court’s decisions was divided into 13 subcategories: (i) case number; (ii) class (type of appeal analyzed in the decision); (iii) rapporteur; (iv) judicial district; (v) judging body; (vi) subject; (vii) trial date; (viii) publication date; (ix) disputing parties; (x) reasoning; (xi) final decision; (xii) context of the use of the term “Inteligência Artificial” (A.I.); (xiii) jurisprudence.

3 Results and Discussion

From the web portal of the State of São Paulo’s Court of Justice, 190 decisions covering a span of 12 years (from 2012 to 2023) were analyzed. The results showed that 177 decisions involved the term “Inteligência Artificial,” 4 decisions referred to the term “Artificial Intelligence” (from October 2022 to April 2023), 3 decisions related to “Aprendizado de Máquina” (from August 2022 to February 2023), and 6 decisions mentioned “Machine Learning” (from September 2018 to November 2022). However, no decisions were found for the terms “Deep Learning” or “Aprendizado Profundo” through web scraping on the Appellate Court’s website.

Figure 1 illustrates the exponential growth trend in the number of decisions containing these six terms. The graph covers the period from 2012 to 2023, showing relative stability until 2019 and a significant increase from 2020 onwards. Several factors contribute to this trend, including: (1) The enactment of the “Lei Geral de Proteção de Dados Pessoais” (LGPD), Law No. 13,709, on August 14, 2018, which came into force in Brazil in September 2020. This has led to an increase in disputes related to the implementation of AI systems in various sectors of society, such as banking, service provision through applications, and compliance with the new standards. (2) As mentioned in the “qualitative analysis method” section, 37.3% of the “Inteligência Artificial” results are related to banking matters, including disputes involving credit card operations, ATM fraud, nighttime bank robberies with threats for consecutive withdrawals, phone or SMS scams where scammers pretend to be AI virtual assistants to obtain passwords/personal data, and payroll loan cases involving facial recognition (the focus of this article). The occurrence of facial recognition-related cases is also expected to grow rapidly, as depicted in Fig. 1.

Fig. 1. Number of occurrences of the six searched terms (“Inteligência Artificial”, “Artificial Intelligence”, “Aprendizado de Máquina”, “Machine Learning”, “Aprendizado Profundo”, and “Deep Learning”) between 2012 and 2023.

The apparent drop in 2023 does not indicate an actual decrease in the number of decisions: this research only compiled data up until April of that year. The growth trend remains strong, as the number of decisions through April 2023 alone is already more than double the total for the entire year of 2019.

Our results will focus solely on decisions related to the term “Artificial Intelligence,” totaling 41 in number. The other terms do not involve disputes regarding the validity of digital payroll loan contracts carried out through facial biometrics. In fact, they do not pertain to banking matters at all. These decisions cover topics such as “health plans,” “higher education,” “contractual readjustment,” “business management,” “indemnity for moral damages,” and “provision of transport application services,” among others. None of the appellants in these cases are banking parties.

Out of the 41 decisions analyzed, in 40 it was determined that a legal relationship existed between the contracting bank and the customer. The validity of the electronic payroll loan agreement was therefore confirmed by the judges, who recognized the legitimacy of the debt and the lawfulness of the bank’s conduct. The Court of Appeal also held that the loan refinancing agreement is not unenforceable. In cases where the bank files an appeal, the appeal is granted with respect to the contract’s validity and the absence of liability for material and moral damages (when claimed by the plaintiffs). When the judgment is unfavorable to the plaintiff, the plaintiff’s appeal is dismissed. The existence of the loan and the allegation of bad-faith litigation (when raised) are recognized or partially dismissed (removing the bad-faith litigation finding while maintaining the validity of the loan).

Figure 2 shows the growth trend in the occurrence of payroll loan cases involving facial biometrics over time. The graph covers the period from 2020 to 2023; however, cases in the facial biometrics context only emerge from 2021. Based on these data, it can be inferred that this happens because the regulation of facial recognition technology (FRT) is still quite incipient in the country, both in the legislative framework and in its implementation. The LGPD, for example, does not even mention the terms “facial recognition” or “facial biometrics”, restricting itself to classifying biometric data as “sensitive” in its art. 5, item II (alongside data relating to health, sex life, genetics, and religious/political/philosophical affiliation). In Brazil, there are only three bills dealing with FRT: PL 2392/22, which prohibits the use of FRT for identification purposes in the public and private sectors without a prior report on the impact on people’s privacy; PL 3069/22, which regulates the use of automated facial recognition by public security forces in criminal investigations or administrative procedures; and PL 2338/23, which regulates artificial intelligence and allows the use of facial recognition systems for public security purposes in public spaces only with legal and judicial authorization. Deprived of a normative framework that minimally regulates not only FRT implementation procedures in the country but also the rights and duties of consumers and banks regarding such technology, judges end up issuing divergent decisions that could be better grounded and more complete.

Fig. 2. Number of occurrences related to facial recognition between 2020 and 2023.

Meanwhile, the four main arguments presented by judges to justify the existence of the digital payroll loan agreement involving facial recognition technology (FRT) are listed below:

(i) Non-challenge of the documents presented by the bank. The plaintiff neither challenged his photo displayed by the bank (defendant) nor the receipts for withdrawal of the money used by the plaintiff, nor did he request the production of an expert report at the appropriate procedural moment to prove possible tampering with the evidence presented by the opposing party.

(ii) Article 3 of Normative Instruction No. 28/2008 of the INSS/PRES allows a loan to be contracted electronically. The signature is therefore not a requirement for the existence or even the validity of the contract, serving only to confirm the expression of will. That expression is indeed essential and indispensable, and it was given by the plaintiff when he agreed to the collection of his facial biometrics for the specific purpose of formalizing the contract. A contract executed electronically thus dispenses with the formal rigors of a standard contract, precisely because the electronic form can be carried out anywhere, as long as the consumer consents.

(iii) In consumer relations, according to the Brazilian Consumer Protection Code (CDC), there is a reversal of the burden of proof. The bank presented: a bank credit note signed by the plaintiff through the capture of facial biometrics via “selfie”, similar to his ID photo; proof of transfer of the contracted amount to the plaintiff’s account; confirmation of address at the same domicile declared by the plaintiff; a digital signature; original documents; geolocation; a digital contract indicating that it was signed by mobile application; the cell phone model used for contracting; and the time and IP address.

(iv) All decisions are based on a judgment that considered: “As for electronic contracting and facial biometrics, I understand that such a form of contracting is perfectly appropriate, given the technological resources available and widely used [...] The electronic medium is valid. Falsifying a signature is, unfortunately, quite common, but falsifying a person’s face to ‘fool’ the artificial intelligence, even if possible, requires more sophisticated fraud, and the burden of proving the occurrence of this fraud fell on the plaintiff, even though it is a consumer relationship, since it is a constitutive fact of the right alleged by the plaintiff himself.”

3.1 Consent

Payroll loan contracts have a history of problems in Brazil. In 2020, complaints about this type of credit doubled [3], totaling 20,564 records (half of the entire 2019 total of 39,012 records), according to records available in the government database. Likewise, the Consumer Protection and Defense Foundation (PROCON) of São Paulo recorded a 50% increase in complaints against banking institutions [21]. Idec (Brazilian Institute of Consumer Protection) evaluated more than 300 complaint reports submitted through the consumer portal, which show, in general, banks’ easy access to consumers’ confidential banking data held by the INSS (National Institute of Social Security), such as confirmation of bank operations without the direct involvement of the account holder or the release of a payroll loan without the consumer’s consent [4].

Such data can be explained by a combination of phenomena, among them the “esteira invertida” (inverted conveyor belt), in which fraudsters (usually bank correspondentsFootnote 3 or financial market operators) deposit an uncontracted loan in the account of a retiree or pensioner, without authorization, to receive up to 6% of the transaction amount as commission. The fraudster deliberately chooses victims who have previously taken out payroll loans, reusing these consumers’ personal data. In a situation of vulnerability, such retirees often end up using the money without knowing its origin. Therefore, the reporting judges’ argument that the plaintiff did not challenge the withdrawal receipts presented by the bank does not hold up, as it is entirely plausible that the retiree, a hyposufficient consumer in relation to the financial institution, genuinely believed that the money in his account was his own, not knowing that he had been the victim of fraudulent activity.

The Payroll Law determines that deductions from the benefit may only occur “when expressly authorized by the beneficiary” (Law No. 8,213, art. 115, inc. VI). Without express authorization, the discount is undue. Likewise, article 39 of the Consumer Defense Code expressly prohibits the supplier of products or services from sending the consumer any product without prior request (item III) and from taking advantage of the consumer’s ignorance, social condition, age, or health to sell its products or services (item IV). Such provisions relate to the “consent” provided for in the LGPD (Brazilian General Law for the Protection of Personal Data, Law No. 13,709/2018), which is inspired by the European Union’s General Data Protection Regulation (GDPR): consent must be given by the natural person who holds the data, or by their legal guardian, and must be expressed clearly and unequivocally, whether in writing or not.

Consent must be informed: before obtaining it, it is essential to provide data subjects with the information they need to understand what they are agreeing to, for example, the identity of the person responsible for processing the data, the purpose of each processing operation for which consent is sought, the types of data that will be analyzed, etc. [26]. Article 4(11) of the GDPR clearly states that consent requires an “unequivocal positive statement or act” from the data subject, meaning it must be given through positive action or explicit statement, especially in situations that pose a serious risk to data protection and therefore warrant a high level of security, such as the processing of biometric data and/or financial information (both used in payroll loan contracts). Such an explicit manifestation may take the form of contracts or electronic forms, but one cannot rely on adhesion contracts with pre-accepted conditions, in which the data subject must intervene to prevent acceptance.

Thus, the reporting judge’s argument that “contracts executed in electronic form dispense with the formal rigors of a standard contract, precisely to the extent that the electronic form can be carried out anywhere” does not hold, since the generalized acceptance of general conditions does not match the aforementioned definition of unequivocal, explicit, and informed consent. Likewise, even though Article 107 of the Brazilian Civil Code (CCB) provides for freedom of form in contracting, allowing electronic contracts that dispense with a traditional signature, any new technology used must make it possible to prove the explicit demonstration of consent, guaranteeing the reliable protection of consumer data and ensuring unequivocal fraud prevention mechanisms, characteristics that are deficient in the use of facial biometrics, as presented below.

3.2 Facial Biometrics

The General Data Protection Regulation (GDPR) defines biometric data in its Article 4 [7]:

Article 4 - Definitions. For the purposes of this Regulation, the following definitions apply: [...] “biometric data” means personal data resulting from specific technical processing relating to the physical, physiological or behavioural characteristics of a natural person, which allow or confirm the unique identification of that natural person, such as facial images or dactyloscopic data;

In the Brazilian legal framework, Decree No. 10,046/2019, which “provides for governance in data sharing within the scope of the federal public administration and establishes the Base Citizen Register and the Central Data Governance Committee” defines biometric attributes [1] as:

Art. 2 For the purposes of this Decree, the following definitions apply: [...] II - biometric attributes: measurable biological and behavioral characteristics of a natural person that can be collected for automated recognition, such as the palm of the hand, fingerprints, the retina or iris of the eyes, the shape of the face, voice, and gait.

Biometric processing corresponds to the processing of biometric data. It may “be referred to interchangeably as recognition, identification, authentication, detection or other related terms, as well as (often opaque) ways of collecting and storing biometric data, even if the data is not processed immediately” [19]. Generally speaking, a facial recognition system operates by using biometrics to capture a person’s facial features and comparing that information to a database. Such systems usually work through similar steps: an image of the person’s face is captured from a photo or video; then the facial recognition software, using a machine learning algorithm, analyzes a series of facial characteristics, such as the distance between the eyes or the curvature of the mouth, to extract a “facial signature” based on the identified patterns. This signature is then compared against a database of pre-registered faces. Finally, a determination is made as to whether the analyzed face matches a registered one [16].

From a technical point of view, facial recognition technology (FRT) is a subcategory within Artificial Intelligence (AI). FRT is less accurate than, for example, fingerprinting, mainly when used in real time or over large databases [9]. Several factors influence the probability and accuracy of a match, such as: the quality of the image (physical characteristics, external accessories, capture equipment and format, and environmental conditions); the quality of the dataset (its size and the proportions of the data); and the quality of the model (the choice of threshold, parameters, and fine-tuning) [20, 23].

Determining the accuracy level of facial recognition can be challenging, and errors take the form of both false positives (a face is matched to another in a database where it should not be) and false negatives (a face fails to match in a database where it is registered). Owing to the nature of machine learning, the algorithms never yield a definitive result, only probabilities. This means that identical twins can be misidentified. In addition, particular ethnic, gender, or age groups can be disproportionately subject to false results.

Deborah Lupton [15], based on the studies of Donna Haraway [13] and Annemarie Mol [18], explains that digital data is never a “tabula rasa” (a blank slate), but must be understood and experienced as something generated by structures, through processes of different characters, above all social and cultural. Artificial Intelligence technology is not created “by itself”, nor is it devoid of the structural constraints that shape it. It has been found that, on average, only 10 to 20% of the teams responsible for developing artificial intelligence technologies are made up of women [25] (the data refer to the largest technology companies in the USA, a global hub for the development of this field). Thus, darker-skinned people, ethnic minorities, women, transgender people, and people with physical disabilities are more likely to incur false negatives or positives, due to FRT’s lower accuracy compared to other technologies (such as fingerprinting) and the low diversity of facial databases.

Furthermore, the quality of facial recognition (FR) models can be measured by their error rate, that is, the number of times such models fail to match images of the same person’s face. Known technically as the False Non-Match Rate, the error rate varies according to the types of images presented to FR algorithms. It tends to be smaller when the comparison is based on images taken in controlled environments, where variables such as face position and light incidence can be manipulated; international visa photos are an example of this type of image. For images obtained in real-world situations, the error rate of FR systems tends to be higher. In this context, it is worth mentioning that, “As of 2022, the top-performing models on all of the [...] datasets [that integrate the National Institute of Standards and Technology (NIST)’s Face Recognition Vendor Test], with the exception of WILD Photos, each posted an error rate below 1%, and as low as a 0.06% error rate on the VISA Photos dataset” [17].
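The False Non-Match Rate has a simple operational definition, sketched below with made-up similarity scores: the fraction of genuine (same-person) comparisons whose score falls below the decision threshold.

```python
def false_non_match_rate(genuine_scores: list[float], threshold: float) -> float:
    """Fraction of same-person comparisons wrongly rejected at this threshold.
    Scores here are illustrative; real systems report this over large benchmarks."""
    misses = sum(1 for score in genuine_scores if score < threshold)
    return misses / len(genuine_scores)

# Controlled captures (visa-style photos) cluster at high similarity scores,
# so FNMR is low; "in the wild" captures spread out, raising FNMR at the
# same threshold.
controlled = [0.97, 0.99, 0.95, 0.98]
wild = [0.91, 0.62, 0.97, 0.55]
```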

In this light, the justification given by the reporting judge for recognizing the validity of the FRT is highly generic and lacks foundation. Fooling an FR system does not necessarily require sophisticated fraud, and erroneous automatic matches may occur depending on the accuracy level of the specific recognition system. Beyond occasional errors, as with any other computational technology, biometric data is also at risk of breach and misuse, whether by outsiders (hackers) or by insiders, employees who may use pre-collected data for their own purposes [5]. Examples of insiders are the banking correspondents of payroll loans, who can improperly renew loans or activate them without the consumer’s authorization, or even bank attendants themselves, who can use the wide range of consumers’ personal data stored by the institution to defraud loans.

In Brazil, studies such as NIST’s are lacking; we therefore do not know, in general, the error rates of the systems used in the country. For this reason, Brazilian banks must be able not only to inform the public about the error rates of the systems they develop and use, but also whether those rates are as low as the ones identified by the US agency. Brazilian banks must also be able to state whether the error rates of their systems are evenly distributed among different social groups or concentrated in specific ones. This requirement is necessary because recent studies have shown that different FR systems make more errors when confronted with the faces of non-white people [2, 12, 24]. Finally, and most importantly, any institution that decides to use FR systems must have clear protocols for handling the outputs generated by the technology. In the case of taking out credit, for example, banks should verify with users whether they are aware that automated facial recognition is a crucial step in the process.
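The disaggregated reporting suggested above could take the form of a per-group error-rate breakdown. The sketch below uses fabricated group labels and verification outcomes, purely for illustration:

```python
# Sketch of disaggregated error reporting: computing the error rate per
# demographic group, so an institution could show whether errors are
# evenly distributed or concentrated. All data here is fabricated.

from collections import defaultdict

# (group, was_erroneously_rejected) for a batch of genuine verification
# attempts by legitimate account holders -- illustrative values only.
attempts = [
    ("group_a", False), ("group_a", False), ("group_a", True),  ("group_a", False),
    ("group_b", True),  ("group_b", True),  ("group_b", False), ("group_b", True),
]

totals = defaultdict(int)
errors = defaultdict(int)
for group, rejected in attempts:
    totals[group] += 1
    if rejected:
        errors[group] += 1

for group in sorted(totals):
    print(f"{group}: error rate = {errors[group] / totals[group]:.2f}")
# group_a: error rate = 0.25
# group_b: error rate = 0.75
```

A report of this shape would make visible exactly the concentration of errors in specific groups that the studies cited above have documented.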

4 Conclusions and Future Work

In the context of payroll loans, it is clear that the motivation and grounds of the decisions are very generic, incomplete or even erroneous. Treating facial recognition as an infallible means of proof of contracting is mistaken: besides the several factors that influence its accuracy, the algorithms do not produce a definitive result, relying on approximations for situations that are absent or infrequent in their databases. Likewise, presenting the digital contract as evidence, indicating that it was signed via mobile application and recording the cell phone model used for contracting, as well as the time and IP address, is not sufficient to demonstrate the existence of free consent. The geolocation mechanism, for example, is susceptible to fraud both when based on IP (VPNs, proxies, Tor, tunneling) and when based on GPS (through fake-GPS applications). As for the electronic signature, although art. 107 of the Brazilian Civil Code provides for freedom of contractual form, an electronic contract presupposes the same free, explicit and informed consent provided for in the LGPD and the GDPR.

Therefore, when analyzing disputes involving contracting through facial recognition, it is recommended that judges base their decisions on the greatest possible number of consolidated references, both in science and technology and in sociology and public policy, given that this is a field still undergoing implementation and adaptation across the most diverse instances of society.

It follows that the standards governing facial recognition technology still need to be improved. Within the scope of financial institutions, it is therefore recommended that the Brazilian Central Bank regulate this matter and define the measures that banks must implement to guarantee the reliability, security, transparency, accountability, diversity and effectiveness of FR systems, in addition to defining the minimum essential conditions for the use of such technology in service contracts, guaranteeing customers/consumers unequivocal, explicit and informed consent regarding the new facial biometrics technologies.

Finally, the CNJ (National Council of Justice), according to Article 102 of Resolution 67 of 2009, may issue normative acts, resolutions, ordinances, instructions, administrative statements and recommendations. Additionally, resolutions and statements have binding force on the Judiciary once approved by a majority of the CNJ Plenary. Thus, it is recommended that the CNJ pass a resolution governing the criteria to be met by magistrates for the production of evidence in cases involving AI systems. Transparency must be guaranteed in all cases, and it must not be assumed that the AI is infallible. In any case related to AI, the absence of evidence concerning the existence of fraud should be accepted, and the review of decisions made by the AI should be enforced. If the current situation in the Judiciary persists, we will witness the weakening of due process of law and of human dignity.

This research carried out web scraping of data using umbrella terms from the field of Artificial Intelligence, in both English and Portuguese, such as “artificial intelligence”, “inteligência artificial”, “machine learning”, “aprendizado de máquina”, “deep learning” and “aprendizado profundo”, which may have limited the number of decisions returned. As future work, we will analyze a list of terms related to applied artificial intelligence and its subareas, and their applications.
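The umbrella-term search step can be sketched as follows; the base URL and the `q` query parameter are placeholders for illustration, not the actual endpoint of the Court of Justice portal:

```python
# Sketch of the umbrella-term search used in the web scraping step.
# The base URL and parameter name are hypothetical placeholders; the
# real court-portal endpoint, pagination and parsing are omitted.

from urllib.parse import urlencode

SEARCH_TERMS = [
    "artificial intelligence", "inteligência artificial",
    "machine learning", "aprendizado de máquina",
    "deep learning", "aprendizado profundo",
]

def build_queries(base_url="https://example-court-portal/search"):
    """Return one search URL per umbrella term."""
    return [f"{base_url}?{urlencode({'q': term})}" for term in SEARCH_TERMS]

for url in build_queries():
    print(url)
```

Because each query is restricted to one broad term, decisions that mention only a specific application (e.g. “reconhecimento facial”) without naming the field would be missed, which is the limitation noted above.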