CONTRATOS E BLACK BOXING

uma questão sobre patentes e inteligência artificial no direito

 

Giovanna Martins Sampaio[1]

Universidade Federal de Sergipe

giovanna.martins@ufba.br

José Antonio Belmino dos Santos[2]

Universidade Federal de Sergipe

santosjabpb@gmail.com

______________________________

Resumo

Este artigo visa abordar brevemente apontamentos complementares sobre as questões contratuais contemporâneas no cenário entre inteligência artificial e direito de patentes. Assim, o trabalho discute as prerrogativas de seguro em uma perspectiva de direito comparado, investigando o trabalho da Comissão Europeia sobre o assunto. Além disso, o presente trabalho apresenta problemas técnicos inerentes à IA e como isso afeta a conformação de estruturas legais como a responsabilidade civil, demonstrando a relevância de estudar a interação entre direito de propriedade intelectual e inovação, bem como tecnologia e contratos em mercados emergentes, concluindo com o destaque das limitações deste artigo.

Palavras-chave: Contrato de trabalho. Seguros. Nexo de causalidade. Caixa preta.

CONTRACTS AND BLACK BOXING

a major issue regarding patents and artificial intelligence in law

Abstract

This article briefly aims to address complementary notes on contemporary contractual issues at the intersection of artificial intelligence and patent law. The paper discusses insurance prerogatives from a comparative law perspective, investigating the work of the European Commission on this matter. Further, it presents technical problems inherent to AI and how these affect the conformation of legal structures such as civil liability, demonstrating the relevance of studying the interplay between intellectual property law and innovation, as well as technology and contracts in emerging markets, and concludes by highlighting the limitations of this article.

 

Keywords: Labor contract. Insurance. Legal causation. Black box.


1 INTRODUCTION

This work seeks to present and examine the following questions and objectives: can Artificial Intelligence technology be considered an inventor? Second, could an AI therefore hold a patent? Why? What are the main considerations and arguments for rejecting this idea at present?

All these problems lead to the ethical and civil liability issues that support the view of AI as an improvement tool in the sphere of patent law, given the speed of data processing and the “accuracy” that the employment of these “methods” provides to inventors.

As can be seen, the plurality of areas encompassed by the theme of Artificial Intelligence, together with its continuous updates and upgrades, which will be briefly discussed, made it very difficult to shape this master’s final work; it also gives readers an idea of the scientific, conceptual, and methodological limitations that may be found in its development.

The “contour” of this work was prompted by the following news:

A University of Surrey-based team has filed the first patent applications for inventions created by a machine. Applications were made to the US, EU, and UK patent offices; they are for a machine using artificial intelligence as the inventor of two ideas for a beverage container and a flashing light. (COHEN, 2019)

It gives us a notion of how technologies are exponentially evolving, entering, and influencing, in a unique way, the public sphere, and requiring the attention of the Law, public policies, and different kinds of regulatory guidelines (BARFIELD, 2015) to mitigate all the potential negative effects of an “unrestrained”, irresponsible, and “unmonitored” use of AI and Machine Learning by individuals in contemporary society.

In this field, it is relevant to recall what the Council of Bars and Law Societies of Europe stressed about the importance of a correct legal assessment of technology use, in order to provide legal certainty and, ultimately, safeguard the parameters and ethical standards of justice within the framework of the Law:

As lawyers play an important role to ensure access to justice, defense of the rule of law and protection of democratic values, they seem to have a particular role to play when it comes to the further development and deployment of AI tools, especially in those areas where access to justice and due process are at stake. (CCBE, 2020).

We should interpret the area of patent law as being “at stake” for a couple of different reasons: the strategic role of the patent system for innovation (HIGGINS, 2019); and, as a fundamental goal of this work, to make law students and professionals think about how the legal system as a whole will deal with the liability and compensation problems arising from defects or errors in products that employ this theoretical AI-developed patent – if we assume and accept this ideology – bearing in mind the post-modern need to protect consumers in B2C (Business-to-Consumer) transactions, especially within the concept of the acquis communautaire[3] of the EU – European Union (MILLER, 2011).

Therefore, the “digitalized” ecosystem surely poses additional problems to be dealt with by the Law in its role of organizing society, taking into consideration specific, historically constructed ethical guidelines and standards. Given the diversity currently found in technological methodologies and tools, the impact of AI and machine learning on intellectual property seems almost obvious, as was very recently endorsed by WIPO (World Intellectual Property Organization).

Taking into account the need to correctly address the employment of AI in patenting, within the scope of the law, we finally considered the relevance of providing a dual study of the “conditions” of liability and ethics in Artificial Intelligence, and this is the major justification for this work.

 

2 CONTRACTUAL ISSUES AND INFORMATION: INSURANCE AND LABOR ASSESSMENTS

Primarily, concerning the contractual assessment of Artificial Intelligence, we have, indirectly, the “impossibility” of applying the good-faith postulate (thinking of a situation in which interpretation is required, which covers the majority of cases, besides the pre-contractual duty of good faith itself,[4] which has increasingly been recognized) due to the inherent feature of an “intelligence” that is artificial (and, further, not sentient). Directly, we should ask how an AI could “contract” with a consumer who will hypothetically buy its “invention” in the form of a product – without disregarding the necessary consumer protection related to the right to be informed and to receive “extensive” information, for example. Also, since AI does not have legal personality, how could it “sign” any contract, whether with a consumer or under a potential licensing agreement with an enterprise that will directly market the invention, further considering the national and “regional” specificities of licensing?

In this line, Barfield (2015, p. 47) had already asked,

For example, will a contract negotiated by an artificially intelligent machine be considered valid, who will be considered the contracting parties, and who will be responsible for a breach of contract?

            He also continues:

To take this point one step further, every enforceable contract has an offer and acceptance, consideration, and an intention to create legal obligations. At present, an artificially intelligent machine is not viewed as having the ability to form an intention on its own volition and thus for this and other reasons cannot contract on its own behalf. (BARFIELD, 2015, p. 208)

Therefore, as we can see, the very fact of establishing obligatory insurance (and the proposal embraces only certain types of Artificial Intelligence and robots, which means that the “excepted” categories of AI will not benefit from compulsory insurance or a compensatory fund) will not remedy the very basic issues regarding personality, capacity, liability, and black boxing (lack of transparency) that are inherent to AI systems.

The European Insurance Agency (EIA, 2017) has already framed the aspects to be considered in deciding on a compulsory scheme as an alternative to the liability “constraints” – further grounding a potential justification for granting a patent to an AI. In the EIA’s report, we can see that “claims data” and “similarity of risks” are not sufficient or adequate (our words) to justify compulsory insurance; as for “insurance and reinsurance capacity” and “competition”, the agency has no final opinion and is assessing them in a market study. Ultimately, the Commission further complements:

A “system of registration of advanced robots should be introduced”, […], and this should be linked in with the supplementary robot fund to “allow anyone interacting with the robot to be informed about the nature of the fund, the limits of its liability in case of damage to property, the names, and the functions of the contributors and all other relevant details”. (EC; DELVAUX, 2016)

As we can see, the “assumption” and adequacy of an obligatory insurance scheme[5] are still very incipient[6], and therefore it cannot constitute a credible[7] argument for grounding AI inventorship. Also, as the Commission noted, the length of insurance claims and their “unpredictability” (EC, 2019) corroborate our central argument that the major liability concerns regarding AI remain, and we cannot frame Artificial Intelligence as the inventor of a creation. Finally, we follow the opinion of the European Parliament as it endorses the establishment of a specific European agency for AI and robotics:

The Parliament’s resolution also advocates the establishment of a European agency for robotics and artificial intelligence, with the aim of providing the technical, ethical and regulatory expertise required to meet the challenges and opportunities arising from the development of robotics in a timely and informed manner. (CMS, 2017)

Therefore, we should comprehend the need for legal and ethical transparency in the current use of Artificial Intelligence; further, we follow the scholars and professionals who believe AI still raises major liability and ethical concerns, which are not resolved by the “simple” determination of compulsory insurance.

Here we also find the interface between labor law and IP:

Several Member States such as, inter alia, Austria, Bulgaria, Denmark, France, Germany, Greece, Hungary, Italy, the Netherlands, Norway, Portugal, Spain, Sweden, or the United Kingdom have mandatory provisions regarding the right of employees to perceive a reasonable remuneration for the rights in inventions transferred to the employer. For other IP creations, national solutions vary. (EUROPEAN IP HELPDESK, 2018)

            And in continuation, “[…] To avoid later disputes, the ownership of the IP created by an employee and the right to receive an additional remuneration shall be laid down in the employment contract, in accordance with the applicable laws”. (IBID)

However, since AI does not have legal personality and cannot “assign” rights through licensing contracts, or even sign contracts, how would an Artificial Intelligence enter into an employment contract with the human inventor of the creation, for example? This does not seem reasonable to us, and therefore we must regard AI as an auxiliary tool to be potentially and advantageously used by the real inventor. Regarding the consequences of AI for the European labor market, we would like to draw out some major points. As we have already noted in this work, AI’s results will have to be “controlled” by the individual inventor, which sustains the inventorship of the latter (the human agent); this will involve the need for “specific expertise” in assessing the algorithms (LARSSON, 2019) – which finally justifies the ethical argument that prevents Artificial Intelligence from being considered an inventor (first, human contribution remains necessary to create a patent; second, the speed and “considered specificities” of AI technologies raise the issue of informational asymmetry in relation to consumers, who have gained a very consolidated protection in Europe; third, it is advantageous to take AI as an auxiliary tool to achieve the patentability requirements at a faster pace and to process prior-art data at higher speed).

Regarding human supervision of AI outcomes, since there is a high probability of bias[8] and errors in the framework of Artificial Intelligence, we need to consider the following: “However, this is also an area where removing human operation completely involves substantial risks, because the cost of failure can be so high. At present, a measure of human supervision is still required due to the probability of edge cases […].” (AGRAWAL; GANS; GOLDFARB, 2019)

            In this sense,

In particular, technological changes will modify the skills required of workers, meaning that potentially very large numbers of workers will need to upskill. Thus, more focus needs to be put on life-long learning. […] Almost all Member States are facing shortages of Information and communications technology professionals, and there are currently more than 600 000 vacancies for digital experts. (EC, 2018)

Therefore, we soon need to take into consideration the general, and even disruptive, changes that Artificial Intelligence is bringing to the whole labor market, consequences we consider especially relevant to the intellectual property system and market, in order, finally, to adapt ourselves to a truly human-centric AI (EC, 2019).

In this regard, within the specific contour of the European market, the EU needs to further build up its human assets: “Finally, the EU needs to train more specialists in AI, building on its long tradition of academic excellence[9], create the right environment for them to work in the EU and attract more talent from abroad”. (EC, 2018)

            In complement, “Without such efforts, the EU risks losing out on the opportunities offered by AI, facing a brain-drain and being a consumer of solutions developed elsewhere. The EU should therefore strengthen its status as a research powerhouse while bringing more innovation to the market.” (EESC, 2019)

A very “human” and essential quality for promoting innovation is well known: diversity. In this sense, the EU shall “nurture talent and multidisciplinarity”, attracting investment in IP and technology through technical education and proper digital training.

This “European notion” of building up AI while strengthening human assets, following a human-centric and ethical approach, is further expressed as follows: “Initiatives to encourage more young people to choose AI subjects and related fields as a career should be promoted. […] Ensuring that workers are given the chance to adapt and to have access to new opportunities will be crucial for people to accept AI. Like any other technology, AI is not just imposed on society.” (EC, 2018)

In this regard, “[…] As more lawyers, law students and legal researchers embrace AI, they need also be aware of the potential dangers of placing blind faith in the impartiality, reliability and infallibility of legal AI” (KEATING, 2019) – since we can further assess their crucial roles in developing ethical and “human-centered” AI (CARRIÇO, 2018). This further involves acknowledging the inherent technical problem of Artificial Intelligence, its black boxing, in order to promote a more transparent and explainable AI.

Furthermore, the supervision and training[10] related to Artificial Intelligence go beyond this, into a kind of interface between human knowledge and education about AI, the “adequate provision” of data to be processed by the Artificial Intelligence, and, third, the final assessment of “testing and experimenting” with AI products (within the scope of a patent, in our “example”).

Lastly, we introduce here the correlation between the contractual concerns and the ethical assessment of AI that will be carried out subsequently, further bringing in the digital terminology of ethics. In this final sense, we reiterate our position on the advantageous use of Artificial Intelligence as a tool in patenting that can be deployed by the true inventor, in a major competitiveness assessment:

Finally, cyberethics must be accompanied by large-scale training of stakeholders, from algorithm designers and legal tech companies to their users. New transdisciplinary humanities should be made available to all so that AI becomes a vector of positive development for humankind. (LARSON, 2019)

In this regard, we further confirm that:

Artificial intelligence (AI) and automation processes have enormous potential to improve European society in terms of innovation and positive transformation, […] A human-in-command approach to AI should be guaranteed, where the development of AI is responsible, safe and useful, and machines remain machines and people retain control over these machines at all times. (EESC, 2019)

We all (policymakers, academia, professionals, engineers, programmers, etc.) need to be very prudent in the use of AI for innovation, considering all the “technical” problems and barriers that come with it, so as to ensure a safe and “regular” employment of Artificial Intelligence as a tool.

 

3 LIABILITY, CAUSATION, AND BLACK BOXING

 

A lot has already been said and assessed about the liability implications of Artificial Intelligence technologies. Therefore, below, we will try to review and summarize some of the different opinions and proposals that can be found in the AI framework, to finally settle the “rational” idea of taking Artificial Intelligence systems as potential and enhanced tools to be satisfactorily applied in new inventions, since they can optimize data processing and provide “more direct decisions” (because of their goal-oriented character, for example) (KREUTZER; SIRRENBERG, 2020), further contributing to a faster, and more accurate, search of prior art, addressing the state of the art in a more efficient way.

Here, we need to contextualize: an alleged AI inventor would be pursuing the goal of creating an invention to be patented, and we should suppose that “it” would try to come up with the best invention possible, which would require an extensive search of available data and prior art. As already noted – and as will be briefly deepened in the next chapter – the AI will not be able to technically record the whole process it has carried out, due to the so-called black-boxing issue. Suppose, then, that “it” has been granted a patent that is industrially applicable, embodied in a specific product that is marketed to consumers and the open public; someone gets injured while regularly using this product and therefore tries to obtain compensation.[11] So, we should consider that the final objective (the patent) has “provoked” some type of damage to an individual: besides the AI having no legal personality, how will this person bring a claim or sue this Artificial Intelligence, since it has no legal capacity to take part in proceedings or to appear before a judge in court? Also, we should bear in mind that the primary set of data is still, nowadays, input by humans; therefore we believe that an eventually created patent should be granted in the name of a real individual[12], as inventor, providing this human inventor with the respective exclusionary rights over the patent.

As previously presented, the black-box issue remains a problematic “technicality” which further supports the idea that Artificial Intelligence cannot, firstly, be held liable and, consequently, cannot be an inventor in itself,

given the interconnectedness of emerging digital technologies and their increased dependency on external input and data, making it increasingly doubtful whether the damage at stake was triggered by a single original cause or by the interplay of multiple (actual or potential) causes. (EC, 2019)

Summarily, as already presented, what is said in the above excerpt would not even be considered “legally fair”, in the sense that it would undermine and disrespect the causation link that should be precisely established in a liability-compensation case, in several ways that will be complemented in the next pages[13].

Therefore, considering the final production and offer of data as a “message”, in the line of reasoning presented above, we can assert that liability for the final invention (created and further produced and marketed) will rest on the human agent. If we cannot attribute this final responsibility to the AI (LARSSON, 2019), we also cannot legally and legitimately consider it the creator of an invention, since this would express a fundamental breach of “legal construction”: it would undermine civil liability theory as one of the general postulates of the Law (regardless of the option for the common or civil law tradition) that justify the existence of legal constraints on the private sphere of individuals within society. It could also be considered an exemption of liability, as we will discuss later (these distinct arguments corroborate our “thesis” of AI as an “advantageous” tool to be implemented in the patenting system). Considering the possibility of exempting due human responsibility through the consideration of Artificial Intelligence as an inventor, Barfield further states important points that should make us reflect on the potential risks of AI’s employment (here, our framework is the patenting system).

Other scholars go further, presenting the liability issue under the perspective of what can be considered vicarious liability[14], providing a kind of comparative approach between existing situations in order to offer a solution for the assessment of AI’s liability: “The following relationships are the best examples of vicarious liability: (a) liability of the principal for the act of his agent or liability of the parents for their child; […] This means that for AI's behavior, vicarious liability appears to a person on whose behalf it acts or at whose disposal and supervision AI is. It can be listed as users of AI or their owners.” (EC, 2019)

Here, we will simply note that the problem of Artificial Intelligence responsibility also requires assessing the maximized informational asymmetry of consumers in relation to AI, especially in the field of industrial property law and knowledge management; therefore, holding the “proprietor” of the AI liable undermines the idea of taking the AI technique as an inventor.

We can understand the strong legal “gymnastics” performed in order to fit AI “actions” into the different pre-existing categories of legal liability; however, we believe that a complete AI liability framework needs to be created, as we consider that vicarious or tortious liability assessments are not able to address the issue of the legal and ethical use of Artificial Intelligence, especially considering the specificities and depth of the interplay between these AI technologies and patent law. Taking into account the exponential development of machine-learning methodologies, for example, we consider that other theories of legal liability need to be created in order to better address the unique character of AI systems, in parallel with the adequate safeguard of consumers[15], and that this new liability must adapt and upgrade[16] at the same pace as the technologies, without neglecting the postulates of legal certainty and “ethical legality”. Further, in this sense, which will be expanded in a following topic, the use of AI must observe safety standards and transparency guidelines for the proper, trustworthy, and explainable employment of these technical tools, all of which requires human control and monitoring of the “artificial” instruments. Ultimately, we advocate that this disruptive regulatory framework for AI (HATTO, 2016) should be standardized to the maximum[17], within the regional European contour and even in a cross-border assessment, since dealing with these enhanced technologies involves the downfall of territorial and geographical barriers: providing an ethical and legal AI should be a major concern for the different stakeholders playing in the international arena.

Moreover, this position has the potential to meet consumers’ concerns about liability, damages, and safety, and it can prevent fragmentation within the Internal Market (the “European economic area”) (DELVAUX, 2016).

Iria Giuffrida (2019) recently complemented: “The focus of tort law is to determine who is liable for the loss suffered by the plaintiff as caused by the tortfeasor’s wrongful act […] Tort suits involving harm caused by devices usually allege either negligence from the tortfeasor or are based on the theory of products liability.” (GIUFFRIDA, 2019)

And the same author continues with a very relevant criticism: “If we use regulation rather than tort suits, regulators will have to decide optimal risk tolerances: how much harm are we willing to allow to obtain the social benefits of AI?” (GIUFFRIDA, 2019)

Regarding the liability issue, the same author extends the list of potentially liable parties: “There are AI developers; algorithm trainers; data collectors, controllers, and processors; manufacturers of the devices incorporating the AI software; owners of the software (which are not necessarily the developers); and the final users of the devices”. (IBID) Further, given the huge scope of AI applications and machine-learning methodologies, it is especially important to assess and measure harm by taking into account the different kinds of potential harm that can be inflicted on consumers: this is a very serious issue, and it calls for the observance of protection guidelines related to final consumers. Ultimately, Giuffrida adds: “Each of these technologies carries independent liability risks. When they combine (as they inevitably do in “real” life), the liability landscape becomes layered and increasingly complex. […] These features make AI and related technologies sublimely useful, but also intrinsically problematic.” (IBID)

For example, the Commission summarized the main concerns about tort law within the framework of the EU (European Union), which reinforces our principal conclusion that the contemporary general “environment” is insufficient to confer on Artificial Intelligence tools either liability for damages or “inventorship” of inventions and patents.

            And continuously, “Legal requirements have to be distinguished from industry standards (or practices) not yet recognized by the lawmaker. Their relevance in a tort action is necessarily weaker, even though the courts may look at such requirements”. (EC, 2019)

Therefore, we further emphasize the necessary connection between ethical standards and legal norms, focused on the figure of liability, which is the reason the final chapter of this work will be developed. Furthermore, as previously discussed, the correct and congruent use of AI and machine-learning systems as a “helping/improving” tool, given their opacity and complexity, requires promoting the engagement of the different stakeholders (business, academia, the public sector, consumers, and civil society), developing a sense of “human-centric” Artificial Intelligence (EC, 2019), besides addressing the foundation and singular characteristics of an ethical AI.

Moreover, we also acknowledge the extensive adjacent normative context surrounding the theme of new emerging technologies, within the contour of European primary and secondary law – such as the postulates of human and fundamental rights; the Product Liability Directive; the Machinery Directive;[18] European product safety and liability regulation; and consumer protection law, among others. This, incidentally, demonstrates the difficulties faced by this work, since we had to find the theoretical “common values and principles” of Artificial Intelligence within the chosen scenario of patent law; and to reach this endeavor, it was unavoidable to touch mainly on the fundamentals of ethics and on the “conditionalities” of AI personality, with the legal contour and focus on the liability assessment.

Therefore, we thought it a sound decision to drop the initial methodological idea for this work – producing an exhaustive comparative-law approach to the subject of AI and patenting within the scope of this present work in International Business Law – and to reduce its spectrum to addressing the basis of AI ethics and its liability perspective. However, we further acknowledge the complexity of the ethical assessment of AI methodologies and techniques, which is why we already state one of the limitations of the present master’s work: a more in-depth study of the ethical implications of Artificial Intelligence in legal liability will be assigned to a future article.

We express and clarify that assessing the black-box technical issue is essential to further substantiate the claim that AI technologies cannot be held liable, as the “situational claimants” are prevented from “tracking” the practicalities and factual conditions that would “reconstitute” the damaging event, in the sense of establishing the specific link between the AI’s conduct and the injuries and harms suffered; and this shows demonstrable value even in the legal treatment of Artificial Intelligence as a tool.

Finally, regarding the rule of law, we should never forget the temporal circumstances: even if a claimant “decided” to sue an AI (imagining that the “victim” could access the respective jurisdiction, which would mean, for instance, conferring legal capacity on the AI), and assuming a “short-term prescription” legal system (EC, 2019), in the sphere of new emerging technologies the assessment of time is even more critical: how will the damaged person even be able to produce evidence against an Artificial Intelligence if these systems end up not fully disclosing all the “relevant information” that prompted “it” to take a specific decision, as we have already mentioned in this work?

And taking into consideration the technological features of AI and the higher complexity of the data it uses: how long would it take a common consumer to hypothetically “research” and obtain the needed data and information, which is not even entirely possessed by the counterpart? And further, how long until the consumer could notice a defect or error in an especially technical product, for example? We can revise this question in the following passage: “However, one should be aware that particularly in jurisdictions where the prescription period is comparatively short, the complexities of these technologies, which may delay the fact-finding process, may run counter to the interests of the victim by cutting off their claim prematurely, before the technology could be identified as the source of her harm.” (EC, 2019)

            In this sense, we shall realize that considering/accepting the possibility of rendering Artificial Intelligence an Inventor, and further liable for eventual damages caused, seems to undermine Causality, the element that confers “fairness” and balance to rights and duties/obligations in civil liability law: consequently, we see the cumbersome difficulty of assessing the causal link in relation to AI, since causality cannot be truly determined. (GIUFFRIDA, 2019)

Furthermore, addressing the above issue of information asymmetry, we believe that this very idea is capable of conflicting with the whole system of Causation. Ultimately, legal professionals have recently stated, concerning legal liability: “When considering this possibility, there may also be broader issues of socio-economic policy to be taken into account. For instance, the perceived desirability of ensuring on the one hand that no-one who suffers loss through the operation of an AI system should go without compensation, set against concerns that there could be a chilling effect on innovation. […]”. (CCBE, 2020)

Therefore, we shall clearly perceive the different interests and stakeholders involved in the “deployment” of Artificial Intelligence, especially in the patenting system and in Intellectual Property as a whole; however, we do not consider that the “chilling effect” will play a big role here, since this argument has followed IP throughout its history. Moreover, we need to assure an ethical and transparent use of AI by the Inventor and also by the Market itself, maintaining the level of protection that has gradually been given to consumers until today; to pursue this intent, we need to keep “trusting” the consolidated patent system as “technology-neutral” and to further consider Artificial Intelligence as an auxiliary tool.

In complement, we intended to provide the reader with a very brief overview of some “conceptual” proposals surrounding AI liability, to finally confirm that these systems cannot be held liable; secondly, Artificial Intelligence systems cannot in themselves, observing their major characteristics, be an Inventor in the sense of the reasonable, “proportional” and explained parameters of patenting.

In this sense, regarding the description of AI, we need to further assess the “proportionality” criterion. The contours and framework of the principle of proportionality in balancing fundamental and human rights have been debated for a long time; in this view, we need to remember the special considerations of European consumer law - and consequently of the European patent system - concerning the “vulnerability” of consumers. We further believe the consumer’s harm is potentialized with Artificial Intelligence for different reasons: first, despite the major daily use of these kinds of methodologies, AI’s combinatorial process - and the inherent heuristics involved in it - make Artificial Intelligence a complex technology to understand; secondly, because of black boxing and the lack of full transparency, not even programmers or “digital designers” can completely explain a given AI result, which raises the explainability concerns [19] that will be further encompassed in this written work.[20]
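Though outside this article’s legal register, the black-box point can be made concrete with a minimal, purely illustrative sketch. The toy network and its random weights below are our own assumption and stand in for no real system: even with every parameter fully disclosed, the decision emerges from entangled nonlinear combinations that no single weight explains.

```python
import random

# Toy "black box": a tiny feed-forward network with random weights
# standing in for a trained model. All parameters are fully disclosed,
# yet no individual weight "explains" the decision.
random.seed(42)

N_IN, N_HID = 4, 8
W1 = [[random.uniform(-1, 1) for _ in range(N_IN)] for _ in range(N_HID)]
W2 = [random.uniform(-1, 1) for _ in range(N_HID)]

def relu(x):
    # Nonlinearity: discards information about how the output was reached.
    return x if x > 0 else 0.0

def predict(inputs):
    # Each input influences the output only through all hidden units at once.
    hidden = [relu(sum(w * x for w, x in zip(row, inputs))) for row in W1]
    return sum(w * h for w, h in zip(W2, hidden))

baseline = predict([1.0, 1.0, 1.0, 1.0])
# Perturbing a single input shifts the output through N_HID nonlinear paths
# simultaneously: the "causal link" between one input and the decision
# cannot be read off any individual parameter.
perturbed = predict([1.0, 1.0, 1.0, 1.5])
print("decision changed:", baseline != perturbed)
```

This is, of course, only a caricature of the opacity discussed above; real learned models have millions of such entangled parameters, which is precisely what frustrates the reconstruction of a causal chain by a claimant.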

In our perspective, the “resolution” is the one developed in this work: considering AI as a tool - in the patenting system and in its crucial role in Innovation, for example - will place liability on the human inventor that employs the AI technology, allowing the consumer to be better protected (concerning the identification of assets and contracts, and further the “disparities” in the level of information and the assessment of this information), since the consumer will be facing another human individual. In this regard, even the asymmetry of information between vendors and “buyers” will be more adequate and “equal” between an individual Inventor and the consumer, compared to the lack of explainability intrinsic to Artificial Intelligence as a “technical method” and considering the processing speed of AI in comparison to humans, consumers, or inventors-vendors.

Therefore, concerning the “proportionality assessment”: beyond the fact that this principle governs the actions at the interface between the Member States (MS) and the EU, the basic definition of proportionality - in algebra, for example - is the equality between two ratios[21]. Transposing this to our situation, and recalling the “institutional” correlation between mathematical methods and AI in the EPO’s “reflections” on this subject, in the field of Artificial Intelligence it will not be possible to have such a close “equality” (equity, evenness or equalization) because of the very nature of AI; therefore, we further assert that it is disproportionate to consider AI as an Inventor, especially in relation to the legal liability concerns, as, in our view, it ultimately confronts fairness standards by definition (YOUNG, 2013), as was also already discussed in this work.
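For the algebraic sense invoked above, a proportion is simply the equality of two ratios, a/b = c/d (equivalently, a·d = b·c). A minimal sketch, using arbitrary example numbers of our own choosing:

```python
from fractions import Fraction

def proportional(a, b, c, d):
    """True when a : b and c : d form a proportion, i.e. a/b == c/d."""
    # Fraction compares the ratios exactly, avoiding floating-point error.
    return Fraction(a, b) == Fraction(c, d)

print(proportional(2, 3, 4, 6))  # 2/3 and 4/6 are the same ratio -> True
print(proportional(2, 3, 4, 5))  # 2/3 and 4/5 are not -> False
```

The contrast drawn in the text is that such an exact “equalization” between two sides has no analogue in the entangled, opaque processing of an AI system.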

Further, in this regard, we complement this critical assessment of AI by bringing in the fruitful and complex concept of “technology neutrality”. Ultimately, the same author, Rajab Ali (2009), states: “The fundamental rules should be the same online as off-line (or more broadly, the same for an online technology activity as for the equivalent off-line technology activity); and Legal rules should not favor or discriminate against a particular technology.” (ALI, 2009)

Therefore, in this regard, we evaluate the idea that the patent system was historically constructed over time and is consolidated nowadays; in this sense, we believe that the patenting legal framework was already developed to embrace new and upgraded technologies such as AI, without major modifications that could undermine the purpose of the whole system. Finally, the requirement of inventorship as it exists today should not be changed to consider Artificial Intelligence as a patent inventor, since this is neither reasonable nor proportionate, as we have already explained.

            In this sense, we would like to further bring some relevant considerations made by the Commission: “Digitalization brings fundamental changes to our environments, some of which have an impact on liability law. This affects, in particular, the (a) complexity, (b) opacity, (c) openness, (d) autonomy, (e) predictability, (f) data-drivenness, and (g) vulnerability of emerging digital technologies.” (EC, 2019)

Finally, Ms. Ursula von der Leyen, addressing the “general dangers” of AI and the European Union’s values in this new extensive era of Artificial Intelligence, stated that “To grasp the opportunities and to address the dangers that are out there, we must be able to strike a smart balance where the market cannot. We must protect our European wellbeing and our European values. In the digital age, we must continue on our European path.” (KEATING, 2019) In conclusion, as we could see specifically in this chapter, the black-box problem remains unresolved, and it further limits, even more, the possibility of an AI obtaining a patent; neither the technologies nor the specific laws existing today are able to address this issue properly in our contemporary society.

 

4 CONCLUSIONS: LIMITATIONS OF THE PRESENT STUDY

Ultimately, in this last chapter we make further considerations about some themes that are correlated to the present article’s topic, since we plan to study them in future articles and even in the scenario and path of a PhD program.

As the reader may have realized from this work’s bibliography, the media has recently reported the first AI patent filings at the European Office; we consider that there is a probable tendency of other new filings of so-called AI patents in the near future around the world, and, for us, this could be relevant in the sense of prompting proper and adequate legislation able to address the liability and ethical problematics of the use of Artificial Intelligence systems as technical methods. In this sense, a detailed analysis of the Intellectual Property offices’ decisions should be undertaken in an appropriate situation; and a theoretical assessment of the articles and works published by the scholars that “fight for this cause” (ABBOTT, 2019) could jointly be done in “reaching and studying” the Offices’ decisions.

Therefore, we identify some legal limitations within the present written work that can be addressed in a future assessment of the topic: first, a deeper analysis of the current regulations that could be applicable (with the necessary reservations); second, a comparative approach regarding the development of Artificial Intelligence in the patent system, in relation to the Common Law and American experience, and further in relation to Latin American countries’ practices; third, the differences in the systematic patenting requirements themselves, since even nowadays there is space left to domestic law in addressing and shaping the content of patent and “other” intellectual property requirements (e.g. the conception of the invention in US practice, as already brought by LIM in his studies)[22]; and, in the fourth place, this author is further interested in the need to address the issue of AI and patenting through a “principiological” perspective.

In this regard, we further bring the possibility of investigating the topic through a philosophical and Cartesian angle[23] that allows us to explain the uncertainty inherent to AI’s lack of transparency and explainability, which can be done in a future work/article. Also, as the AI-IP interplay developed here involves, beyond an inter- and multidisciplinary assessment, a historical approach, we plan to frame and pursue a review that accounts for the management and marketing of those “AI Inventions” through a SWOT analysis in the face of the concepts of cui bono and good faith (MUSY, 2000; STORME, 2003) mentioned earlier in this written work (GUILLAMON; PANERO; CASANI, 2016).

Also, some AI characteristics and patenting features could not be deepened in this work: accountability, trustworthiness, and explainability (and the intrinsic distinctions between these concepts already brought by Giesen & Kristen, 2014[24]), as well as the FAT assessment (Fairness, Accountability & Transparency); and also algorithmic regulation, combinatorial processes, statistical analysis and heuristics within the interface between Artificial Intelligence systems and patenting/Intellectual Property. (HILDEBRANT, 2018; YOUN et al., 2015)

Furthermore, special concerns and aspects of patent litigation and infringement, and even reinstatement issues; legal capacity, parties’ representation and the taking of evidence; the moral and “correlated” rights in inventorship and the re-establishment of rights and their respective concepts: all of these specific questions could also be addressed in a future work.

Ultimately, a better assessment of the consequences for labor law, taking into consideration a strategic and economic analysis of law (AGRAWAL; GANS; GOLDFARB, 2019), for example, is taken by this author as an interesting and intriguing theme to be investigated in the future. Continuously, a proper assessment of contractual principles in current legal practice (e.g. the good faith assessment) and of consumer law related to defective products developed and marketed in the context of Artificial Intelligence “methods” are the next reflections that the present author intends to analyze in a future article.

Lastly, we would also like to research and explore the “pros” of employing Artificial Intelligence in the creation of inventions to be patented, taking into account the competition assessment and further observing the “understandings” of corporate governance[25] and competition law, for example: the use of AI methods and technologies, as a specific tool, in the patent system should be seen as a priority in shaping public policies on this very subject, as it can be evaluated in relation to competitive advantage.

Finally, we also conceive the possibility of surveying, tracing and examining the relationships within the AI-IP interplay, in the sense of machine and deep learning, and the assessments and considerations of the IoT (3IF.BE, 2015) and Big Data analytics[26], or even augmented reality, in relation to the consequences these “methods” pose, in order to promote the proper risk assessment and impact evaluation of the deployment of those “methodologies” within the international regulatory framework. Further, the identified significance and disruptive character of those technologies require a constant evaluation of the generated impacts, especially nowadays with the closer/major use of AI in “strategic” fields such as Intellectual Property and patent rights; remembering that AI can be an advantageous tool in data processing for patentable inventions, however, its use must be legally, ethically, morally and fairly supervised by human agents, as we have already developed in this work, finally observing the underlying feature of the patent system as technology-neutral.

REFERENCES

3IF.BE. McKinsey Industry 4.0 – how to navigate a changing industrial landscape. 2015. Available at: https://www.3if.be/en/news-background/19-publicaties/43-mckinsey-industry-4-0-may-2015-how-to-navigate-a-changing-industrial-landscap . Last Access in 10. March. 2020.

ABBOTT, Ryan. Inside Views: Everything is Obvious. 2019. Available at: https://www.google.com/search?client=safari&rls=en&q=ABBOTT,+Ryan.+Inside+Views:+Everything+is+Obvious.+2019.&ie=UTF-8&oe=UTF-8  . Last Access in 15. May. 2020. (Intellectual Property Watch)

AGRAWAL, Ajay; GANS, Joshua; GOLDFARB, Avi. Artificial Intelligence: the ambiguous Labor market impact of automating prediction. 2019. Journal of Economic Perspectives; Vol 33; n. 2; Pages 31 to 50; Available at: https://www.jstor.org/stable/10.2307/26621238 . Last Access in 17. April. 2020. (jstor platform access)

ALI, Rajab. Technology Neutrality. 2009. Lex Electronica – Revue du Centre de Recherche en droit public; Vol. 14; N. 2. Available at: https://core.ac.uk/download/pdf/55652076.pdf . Last Access in 08. June. 2020.

BARFIELD, Woodrow. Cyber-Humans: Our Future with Machines. 2015. Springer. 291 pages. (ULB online archives - Cible Plus) - book

BATHAEE, Yavar. The artificial intelligence Black Box and the failure of intent and causation. 2018. Harvard Journal of Law & Technology, Vol. 31, n. 2; Available at: https://jolt.law.harvard.edu/assets/articlePDFs/v31/The-Artificial-Intelligence-Black-Box-and-the-Failure-of-Intent-and-Causation-Yavar-Bathaee.pdf . Last Access in 11. May. 2020.

CCBE. Council of Bars and Law Societies of Europe - Considerations on the legal aspects of artificial intelligence. 2020. Available at: https://www.ccbe.eu/fileadmin/speciality_distribution/public/documents/IT_LAW/ITL_Guides_recommendations/EN_ITL_20200220_CCBE-considerations-on-the-Legal-Aspects-of-AI.pdf . Last Access in 22. May. 2020.

CERKA, Paulius; GRIGIENE, Jurgita; SIRBIKYTE, Gintare. Is it possible to grant legal personality to artificial intelligence software systems? 2017. Elsevier. Computer Law and Security Review, n. 33. Pages 685 to 699. (ULB online archives - Cible Plus) – article

CMS. Do Robots have rights? The European Parliament addresses artificial intelligence and robotics. 2017. Available at: https://www.cms-lawnow.com/ealerts/2017/04/do-robots-have-rights-the-european-parliament-addresses-artificial-intelligence-and-robotics . Last Access in 08. June. 2020.

DELVAUX, Mady (Committee on Legal Affairs - EU Commission). 2016. Draft Report with recommendations to the Commission on Civil Law Rules on Robotics - 2015/2103. Available at: https://www.europarl.europa.eu/doceo/document/JURI-PR-582443_EN.pdf?redirect . Last Access in 15. May. 2020.

EC - European Commission. Artificial Intelligence for Europe. 2018. Available at: https://ec.europa.eu/digital-single-market/en/news/communication-artificial-intelligence-europe . Last Access in 25. May. 2020. (Communication from the Commission to the European Parliament, the European Council, the Council, the European Economic and Social Committee and the Committee of the Regions)

EC - European Commission. Building trust in Human-Centric Artificial Intelligence. 2019. Available at: https://ec.europa.eu/digital-single-market/en/news/communication-building-trust-human-centric-artificial-intelligence . Last Access in 04. June. 2020. (Communication from the Commission to the European Parliament, the European Council, the Council, the European Economic and Social Committee and the Committee of the Regions)

EC - European Commission. Coordinated Plan on Artificial Intelligence. 2018. Available at: https://eur-lex.europa.eu/resource.html?uri=cellar:22ee84bb-fa04-11e8-a96d-01aa75ed71a1.0002.02/DOC_1&format=PDF . Last Access in 23. April. 2020. (Communication from the Commission to the European Parliament, the European Council, the Council, the European Economic and Social Committee and the Committee of the Regions)

EC - European Commission. Liability for Artificial Intelligence: and other emerging digital technologies. 2019. Available at: https://ec.europa.eu/transparency/regexpert/index.cfm?do=groupDetail.groupMeetingDoc&docid=36608 . Last Access in 22. May. 2020.

EESC (European Economic and Social Committee). Artificial Intelligence for Europe. 2019. Available at: https://www.eesc.europa.eu/sites/default/files/files/qe-04-19-022-en-n.pdf . Last Access in 22. May. 2020.

European IP HelpDesk. Your Guide to IP and Contracts. 2018. Available at: https://www.iprhelpdesk.eu/sites/default/files/2018-12/european-ipr-helpdesk-your-guide-to-ip-and-contracts.pdf . Last Access in 13. April. 2020.

GIUFFRIDA, Iria. Liability for AI Decision-Making: some legal and ethical considerations. 2019. Fordham Law Review, Volume 88, Issue 2. Available at: https://ir.lawnet.fordham.edu/cgi/viewcontent.cgi?article=5627&context=flr . Last Access in 24. April. 2020.

GUILLAMON, Carmen Lázaro; PANERO, Patricia; CASANI, Amparo Montañana. Strengths, Weaknesses, Opportunities, and Threats (Swot) – analysis in Roman Law Subject. 2016. Available at: https://www.researchgate.net/publication/314918230_STRENGTHS_WEAKNESSES_OPPORTUNITIES_AND_THREATS_-_SWOT-_ANALYSIS_IN_ROMAN_LAW_SUBJECT . Last Access in 10. June. 2020.

HATTO, Peter (European Commission: Directorate-General for Research and Innovation). Standards and Standardization – a practical guide for researchers. Available at: https://ec.europa.eu/research/industrial_technologies/pdf/practical-standardisation-guide-for-researchers_en.pdf . Last Access in 10. June. 2020.

Insurance Europe. Compulsory insurance: when it works and when it doesn’t. 2017. Available at: https://www.insuranceeurope.eu/sites/default/files/attachments/Compulsory%20insurance%20Insight%20Briefing.pdf . Last Access in 20. May. 2020.

KEATING, Dave (EURACTIV.com). Should the EU embrace artificial intelligence, or fear it? 2019. Available at: https://www.euractiv.com/section/data-protection/news/should-the-eu-embrace-artificial-intelligence-or-fear-it/ . Last Access in 17. April. 2020.

KREUTZER, Ralf; SIRRENBERG, Marie. Understanding Artificial Intelligence – fundamentals, use cases, and methods for a corporate AI journey. 2020. Springer. 313 pages. (ULB online archives - Cible Plus) – book

LARSSON, Stefan. The social-legal relevance of artificial intelligence. 2019. Available at: http://www.aisustainability.org/wp-content/uploads/2019/11/Socio-Legal_relevance_of_AI.pdf . Last Access in 22. May. 2020.

LARSSON, Stefan; HEINTZ, Fredrik. Transparency in Artificial Intelligence. 2020. Internet Policy Review - journal on internet regulation; Vol 9; Issue 2; Available at: https://policyreview.info/concepts/transparency-artificial-intelligence . Last Access in 20. May. 2020.

Osborne Clarke. European Commission’s AI White Paper: a new framework for liability issues. 2020. Available at: https://www.osborneclarke.com/insights/european-commissions-ai-white-paper-new-framework-liability-issues/ . Last Access in 13. May. 2020.

WANG, Weiyu; KENG, Siau. Ethical and Moral Issues with AI: A case study on healthcare Robots. (Conference – 24th Americas Conference on Information Systems). 2018. Available at: https://aisel.aisnet.org/cgi/viewcontent.cgi?article=1580&context=amcis2018 . Last Access in 15. May. 2020.



[1] Mestranda no PROFNIT - Programa de Pós-graduação em Propriedade Intelectual e Transferência de Tecnologia para Inovação, UFBA (início em 2019); Mestrado em Direito LLM pela Universidade Livre de Bruxelas, Bélgica (presencial, 2019/2020 - concluído cum laude), em International Business Law;

[2] Professor do Programa de Pós-Graduação Stricto Sensu em Ciência da Propriedade Intelectual. Doutorado em Engenharia de Processos pela Universidade Federal de Campina Grande (2007).

[3] It is an essential reference to European-Community Law, in the sense of its “primacy” in relation to local and State law, which follows the principle of subsidiarity; therefore, the “acquis” involves the political objectives and principles of the European Union in its entirety (EUABC.com. Acquis Communautaire. Available at: http://en.euabc.com/word/12 . Last Access in 07. June. 2020.). Finally, this comprises relevant and “more flexible” soft law, such as declarations, recommendations, opinions and guidelines, to promote “legal uniformity” within a so-called transnational legal space (ZERILLI, Filippo. The rule of soft law: an introduction. Available at: https://www.peacepalacelibrary.nl/ebooks/files/The%20rule%20of%20soft%20law%20An%20introduction%20Zerilli.pdf . Last Access in 06. June. 2020.)

[4] For more content about the good Faith Principle, see: STORME, Matthias. Good Faith and Contents of contracts in European Private law. 2003. Available at: https://www.google.com/url?sa=t&rct=j&q=&esrc=s&source=web&cd=&ved=2ahUKEwiOr6edl-3pAhVRwAIHHdmJCtIQFjAAegQIAxAB&url=https%3A%2F%2Flirias.kuleuven.be%2Fretrieve%2F89734&usg=AOvVaw0DApCCmiZhFTq3-O-Tne7d . Last Access in 03. June. 2020.

[5] For further comments about this issue, see: LEVY, David. Intelligent no-fault insurance for robots. 2020. Journal of Future Robot Life; Pages 35 to 57. Available at: https://content.iospress.com/download/journal-of-future-robot-life/frl200001?id=journal-of-future-robot-life%2Ffrl200001 . Last Access in 06. June. 2020.

[6] In this sense, “incipient” means something very initial; something at the very beginning of its development (Cambridge, Lexico & Macmillan Dictionaries).

[7] “Credible” means “able to be believed” (Cambridge, Lexico & Macmillan Dictionaries).

[8] Referring to machine learning and AI technologies, and their need of human supervision, Gernot Fink brought: “These systems can be fooled in ways that humans wouldn't be. […] For example, random dot patterns can lead a machine to “see” things that aren’t there. If we rely on AI to bring us into a new world of labor, security and efficiency, we need to ensure that the machine performs as planned”. (FINK, Gernot. Markov Models for Pattern Recognition - from theory to applications (second edition). 2014. Springer. 276 pages. (ULB online archives - Cible Plus) – book) Further, we are not advocating in this written work that Artificial Intelligence is not useful or advantageous; on the contrary, the employment of AI tools is seen here as competitively beneficial, which is complemented by WIPO Director General Francis Gurry: “AI’s ramifications for the future of human development are profound. The first step in maximizing the widespread benefit of AI, while addressing ethical, legal and regulatory challenges, is to create a common factual basis for understanding of artificial intelligence”. (NURTON, James (Editor). The IP behind the AI boom. 2019. WIPO Magazine, Technology Trends, Artificial Intelligence. Available at: https://www.wipo.int/wipo_magazine/en/2019/01/article_0001.html . Last Access in 01. April. 2020.)

[9] “The Commission will also support breakthrough market-creating innovation such as AI through the pilot of the European Innovation Council. […] Funding in fundamental research is expected to be provided by the European Research Council, based on scientific excellence. Marie Skłodowska-Curie actions provide grants for all stages of researchers’ careers and have supported research in AI in the past years”. (EESC (European Economic and Social Committee). Artificial Intelligence for Europe. 2019. Available at: https://www.eesc.europa.eu/sites/default/files/files/qe-04-19-022-en-n.pdf . Last Access in 22. May. 2020.)

[10] “What is still needed is at least one of the following: a description of the way that the model is trained, including a reference to the training data; or every learned coefficient or weight of the model.” (DEVLIN, Alan. The Misunderstood function of disclosure in patent law. 2010. Harvard Journal of law and Technology; Vol. 23; N. 2; Pages 401 to 446. Available at: http://jolt.law.harvard.edu/articles/pdf/v23/23HarvJLTech401.pdf . Last Access in 14. June. 2020.)

[11] In this sense, “Once an item enters the stream of commerce and injures someone, the victim may be able to pursue a products liability claim against the party found to be responsible.” (HG.Org. What is a Product defect? Available at: https://www.hg.org/legal-articles/what-is-a-product-defect-34498 . Last Access in 05. March. 2020.)

[12] “The actors may include producers of the AI systems machines, the users of AI, the programmers of the software run on such machines, their owners, and the intelligent systems themselves.” (European Commission. White paper on artificial intelligence – a European Approach to excellence and trust. 2020. Available at: https://ec.europa.eu/info/sites/info/files/commission-white-paper-artificial-intelligence-feb2020_en.pdf . Last Access in 04. June. 2020.)

[13] In this sense: “As is already the standard rule in all jurisdictions, whoever demands compensation from another should in general prove all necessary requirements for such a claim, including in particular the causal link between the harm to be indemnified on the one hand and the activities or risks within the sphere of the addressee of the claim may trigger the latter’s liability on the other. This general principle is supported inter alia by concerns of fairness and results from the need to consider and balance the interests of both sides.” (European Commission. Liability for Artificial Intelligence: and other emerging digital technologies. 2019. Available at: https://ec.europa.eu/transparency/regexpert/index.cfm?do=groupDetail.groupMeetingDoc&docid=36608 . Last Access in 22. May. 2020.)

[14] In this sense, “Vicarious liability is rather associated with fault liability, as liability of the principal without personal fault of their own, but for the (passed-on) ‘fault’ of their auxiliary instead, even though the auxiliary’s conduct is then not necessarily evaluated according to the benchmarks applicable to themselves, but to the benchmarks for the principal.” (European Commission. Liability for Artificial Intelligence: and other emerging digital technologies. 2019. Available at: https://ec.europa.eu/transparency/regexpert/index.cfm?do=groupDetail.groupMeetingDoc&docid=36608 . Last Access in 22. May. 2020.)

[15] “It is possible to apply existing liability regimes to emerging digital technologies, but in light of a number of challenges and due to the limitations of existing regimes, doing so may leave victims under or entirely uncompensated”. (European Commission. Liability for Artificial Intelligence: and other emerging digital technologies. 2019. Available at: https://ec.europa.eu/transparency/regexpert/index.cfm?do=groupDetail.groupMeetingDoc&docid=36608 . Last Access in 22. May. 2020.)

[16] “[…] Constantly evolving and changing. For this reason, legislation governing this field should be: (i) universal to be effective, regardless of changes in information technology, or (ii) constantly amended to be effective, regardless of changes in information technology;” (CERKA, Paulius; GRIGIENE, Jurgita; SIRBIKYTE, Gintare. Is it possible to grant legal personality to artificial intelligence software systems? 2017. Elsevier. Computer Law and Security Review, n. 33. Pages 685 to 699. (ULB online archives - Cible Plus) – article.)

[17] “For once, we would like to set common European principles and a common legal framework before every member state has implemented its own and different law. Standardization is also in the interest of the market as Europe is good in robotics […].” (European Parliament. Rise of the robots: Mady Delvaux on Why their use should be regulated. 2017. Available at: https://www.europarl.europa.eu/news/en/headlines/economy/20170109STO57505/rise-of-the-robots-mady-delvaux-on-why-their-use-should-be-regulated . Last Access in 20. May. 2020.)

[18] We bring this footnote to show an example of these correlated normative regulations, and to further make the reader critically reflect on how public institutions are preliminarily addressing the AI and machine learning issue, how they intend to tackle it, and what the initial proposals are for which they are seeking approval and public engagement: “The Machinery Directive is the core piece of EU legislation for the mechanical engineering industry: it promotes the free movement of machinery within the Internal Market while setting out the ‘Essential Health and Safety Requirements’ to be observed when placing a machine on the market for the first time. […] The Directive has a dual objective while guaranteeing the safety of machinery and ensuring the free movement of machinery throughout the EU, it must keep its relevance in light of the technological developments.  Mainly, emerging digital technologies allowing the EU to remain competitive on the global market.” (Orgalim - Europe’s Technology Industry. Policy decoded: the machinery Directive and AI. 2019. Available at: https://www.orgalim.eu/insights/policy-decoded-machinery-directive-and-ai . Last Access in 20. May. 2020. (European Technology industries))

 

[19] “Explainable Artificial Intelligence (XAI), running a project that aims to create a suite of machine learning techniques that: a) Produce more explainable models, while maintaining a high level of learning performance (prediction accuracy); and b) Enable human users to understand, appropriately trust, and effectively manage artificial intelligence outputs.” (YU, Ronald; ALI, Gabriele Spina. What is inside the Black Box? AI Challenges for lawyers and researchers. 2019. Pages 2 to 13. Available at: https://www.cambridge.org/core/services/aop-cambridge-core/content/view/8A547878999427F7222C3CEFC3CE5E01/S1472669619000021a.pdf/whats_inside_the_black_box_ai_challenges_for_lawyers_and_researchers.pdf . Last Access in 20. April. 2020.)

[20] Also, we would like to raise here the issue of governing data, as it requires due effort concerning the attribution of fairness, “explainability” and transparency to AI systems. Referring to the groups of developers and programmers that create the AI machine, LOHR & GUSHER said that “These groups or their partners will also need to monitor outcomes continuously to be sure they are fair and accurate—and that they remain true to the original objectives”. (LOHR, Todd; GUSHER, Traci. KPMG. Ethical AI – Five Guiding Pillars. Available at: https://advisory.kpmg.us/content/dam/advisory/en/pdfs/2019/kpmg-ethical%20-ai-five-guiding-pillars.pdf . Last Access in 01. June. 2020.)

[21] Britannica.com. Proportionality – Mathematics. Available at: https://www.britannica.com/science/proportionality . Last Access in 08. June. 2020.

[22] LIM, Daryl. AI & IP Innovation & Creativity in an Age of Accelerated Change. 2018. Akron Law Review, Pages 813 to 875. Available at: https://repository.jmls.edu/cgi/viewcontent.cgi?article=1724&context=facpubs . Last Access in 14. April. 2020. (The John Marshall Law School - Institutional Repository).

[23] Stanford Encyclopedia of Philosophy. Certainty. 2008. Available at: https://plato.stanford.edu/entries/certainty/#ConCer . Last Access in 08. June. 2020.

[24] GIESEN, Ivo; KRISTEN, François. Liability, Responsibility, and Accountability: Crossing borders. 2014. Utrecht Law Review; Vol. 10; Issue 3. Available at: https://www.google.com/url?sa=t&rct=j&q=&esrc=s&source=web&cd=&ved=2ahUKEwiijdPTxYPqAhWG16QKHcy7AEYQFjAAegQIBRAB&url=https%3A%2F%2Fwww.utrechtlawreview.org%2Farticles%2F10.18352%2Fulr.280%2Fgalley%2F281%2Fdownload%2F&usg=AOvVaw3QkSQr7FxmSKcRHCVFPFHl . Last Access in 12. June. 2020.

[25] “Corporate governance is essential to develop and enforce policies, procedures, and standards in AI systems. Chief ethics and compliance officers have an important role to play, including identifying ethical risks, managing those risks and ensuring compliance with standards.” (MINTZ, Steven (Corporate Compliance Insights). Ethical AI is Built on Transparency, Accountability, and Trust. 2020. Available at: https://www.corporatecomplianceinsights.com/ethical-use-artificial-intelligence/ . Last Access in 20. March. 2020.)

[26] The correlation between AI/machine and deep learning and Big Data is explicit and will add further explanation to this work in its continuation, in the sense of the relevance of information processing in the patent system, concerning its requirements of “non-obviousness” and “state-of-the-art assessment”. (ANYOHA, Rockwell. The history of Artificial Intelligence. 2017. Available at: http://sitn.hms.harvard.edu/flash/2017/history-artificial-intelligence/ . Last Access in 13. June. 2020.)