SOME ASPECTS OF THE REGULATION OF ARTIFICIAL INTELLIGENCE

Today, artificial intelligence plays an increasingly important role. Isaac Asimov laid down the robotic laws of his literary universe in the science fiction classic I, Robot. In my study, I examine how these 'laws' are reflected in the legislation of various nations and international organisations, and how their regulation is justified in legal literature and dogmatics. I present some issues related to the regulation of artificial intelligence, discuss the demarcation between robots and artificial intelligence, some key concepts, and selected governmental regulations. In doing so, I mainly use the descriptive and comparative method. The first chapter of my thesis deals with the ontological characteristics of artificial intelligence. The second chapter presents the regulation adopted by selected states in the field of artificial intelligence. The third chapter examines the possible perpetrator. The development of technology cannot be halted for any extended period: newer technologies contribute to a more efficient organisation of production and distribution, provide a basis for business innovation, and their recreational value is not negligible. It is precisely for these reasons that I see the need for legal regulation of the newer pervasive technologies, and the international legal system must therefore be ready for the challenges of the 21st century. The research found that current EU legislation aims to guide future legislation and has incorporated a very significant part of Asimov's principles into its regulatory scope.

The term robot derives from the Slavic word robota, first introduced to the public by the Czech playwright Karel Čapek in his 1920 drama Rossum's Universal Robots (R.U.R.). The play is set in a factory where robots are made so lifelike that they look like real people. The term robotics was then coined by Isaac Asimov in his 1941 science fiction short story "Liar!". Although the term 'robot' was born in literature, robotics has since grown into a whole interdisciplinary field at the boundary of science and engineering. Robotics is concerned with machines that are typically capable of solving a pre-programmed (given) set of tasks in an automatic or semi-automatic manner. 8 It is nevertheless difficult to define exactly what we call a robot, as the field is very diverse: it includes robots operated entirely by a human operator as well as robots capable of operating entirely autonomously. Robotics involves the design, construction and programming of robots.
Only a small slice of robotics deals with artificial intelligence; in the robot-artificial intelligence relation, robot is the broader concept. 9 Artificial intelligence was first defined by John McCarthy 10 as "the science and engineering of making intelligent machines". It has subsequently been described as 'cognitive technology', but whatever we call it, it is a constantly evolving field with many branches, which is why no single, concrete definition has emerged to date.
John R. Searle classified the term AI according to its quality: on this basis we can speak of so-called weak AI and strong AI. The former refers to systems that act intelligently, the latter to systems that actually think. 11 Ioannis Revolidis and Alan Dahi take as their starting point the report of the House of Lords Select Committee on Artificial Intelligence, 12 which distinguishes between narrow, general and super AI. 13 They equate the terms 'narrow AI' and 'general AI' with the categories 'weak AI' and 'strong AI'. 14 This approach is also used in the EU definition. 15 The essence of narrow AI is that the capabilities of machines merely resemble human capabilities. 16 In contrast, general AI represents a higher level, since the system possesses genuinely human-like capabilities. The difference is most clearly illustrated by the fact that narrow AI 17 is only capable of performing certain predefined tasks, whereas general AI is also capable of setting and achieving goals of its own, and is therefore essentially capable of thinking. Ultimately, super AI is capable of much more than the human brain: it can outperform even the smartest person and is much more intelligent in almost all areas. 18 Super artificial intelligence may seem far-fetched, but it is the basis of many studies. Nick Bostrom 19 distinguishes three types of so-called superintelligence: 1. Collective superintelligence: a system composed of several smaller intelligences whose performance is combined, and whose total performance exceeds that of systems with general artificial intelligence. 2. Speed superintelligence: a system that can do everything a human intellect can, only much faster. 3. Qualitative superintelligence: its essence is most easily illustrated by imagining the difference in intellect between humans and animals (e.g. dolphins, elephants); such would also be the difference between humans and a qualitative superintelligence.
20 In my view, the most important definition of AI is the one delimited by the European Union, since each Member State must take EU legislation and guidelines into account when developing its so-called AI action plan. 21 The EU's original definition read: 'Artificial Intelligence (AI) refers to systems that display intelligent behaviour by analysing their environment and taking actions, with some degree of autonomy, to achieve specific goals.' Among AI systems, a distinction is made between software (e.g. voice and face recognition systems) and hardware devices (e.g. advanced robots).
However, the definition in the June 2018 document soon had to be updated, and the expert group therefore proposed a revised definition. 22 According to the newer definition, AI-based systems are software (and possibly also hardware) systems designed by humans that, given a complex goal, act in the physical or digital dimension by perceiving their environment through data acquisition, interpreting the collected structured or unstructured data, reasoning on the knowledge or processing the information derived from this data, and deciding on the best action to take to achieve the given goal. AI systems can use symbolic rules or learn a numeric model, and they can also adapt their behaviour by analysing how their environment is affected by their previous actions. AI as a discipline incorporates many concepts and techniques, such as machine learning (e.g. deep learning and reinforcement learning), machine reasoning (planning, scheduling, knowledge representation and reasoning, search and optimisation) and robotics (control, perception, sensing, sensors and actuators, and the integration of all other techniques into cyber-physical systems). Accordingly, it can be seen that the issue of artificial intelligence has long been of concern to developed nations, and likewise to the international body of law.

b. Fundamentals of Artificial Intelligence Regulation
The difficulties arising from the relationship between artificial intelligence and the law are not new and have concerned practitioners for decades. The first reference to the legal regulation of the behaviour of synthetic beings, however, can be found in fiction. 23 Isaac Asimov set out the three basic laws of robotics in the story "Runaround", part of his 1950 collection of nine science fiction stories, I, Robot. 24 Asimov's three basic rules, which an artificial being must necessarily follow during its operation, are: 1. A robot may not harm a human being, or stand idly by and allow a human being to suffer any harm; 2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the first law; 3. A robot must take care of its own protection, provided this does not conflict with any provision of the first or second law. Asimov later added the zeroth law, which states that "The robot shall not cause harm to mankind, or stand idly by and allow mankind to suffer any harm", and amended the original laws with the prohibition against violating the zeroth one. 25 In their practical application, however, the classic Asimov laws can easily contradict each other. What happens, for example, when a human instructs a robot to harm another human because it ultimately serves that human's good? 26 This can happen, for example, when a robot on medical duty receives various instructions while performing surgery. This ambivalence is resolved by the author's zeroth law, which instructs the robot that its primary concern must be "the best interests of mankind". 27 We can see for ourselves, however, that determining what is in the best interests of mankind in a given situation is extremely complex and requires an artificial consciousness with advanced intelligence, empathy and moral competence.
It is no wonder that his laws were rewritten and refined dozens of times during his writing career. In some of his stories he even told of robots whose designers deliberately eliminated one of the laws in order to enable them to perform their task in full. Asimov's laws are also interesting because they were the first to raise the possibility that it might be desirable to regulate the behaviour of artificial intelligences not only at a technical level, but also at a higher one: that of law and legality. 29 Asimov's laws can also be understood as elements of a logic system: necessary and sufficient requirements for the software that controls the robot.
Of course, Asimov and science fiction literature are not alone in addressing the legal regulability of the behaviour of artificial intelligence; examples can also be found in legal literature. Peter Asaro raises the legal aspects of AI and robotics, starting from the premise that we must first examine whether existing laws are applicable to the problems AI generates. 30 Asaro's analysis of the law relating to robots leads him to the conclusion that the rules of product liability apply without further consideration to robots as commercially marketed products. He then moves from robots to the problem of agency and contractual relations through software agents, which in his view in turn impose liability on the user of the agent.
Sartor sees software agents as entities capable of representation. In outlining his position, he assumes that many contracts are nowadays concluded with the support of automated software, without human intervention or possibly even review; these are autonomous AI software. Sartor argues that, in the context of representation, the applicability of consciousness, and on this basis of imputability, must be excluded in the use of agents: since the agent has no consciousness, its behaviour cannot be attributed to it. In his meticulous legal-philosophical deduction, Sartor is the first to reveal that the behaviour of artificial intelligence resembles that of living organisms in that it cannot be predicted merely by revealing its internal operating principles, owing to its many random variables. Sartor's hypothesis, however, requires us to assume that the software agent behaves rationally within its boundaries and objectives (it is a rationally acting system). The responsibility for the operation of an agent rests with its user not because he wished for, or foresaw, its behaviour or its consequences, but because he chose it as a means of obtaining results which he accepts, or even uses, and which may confer rights or obligations on him.

c. Some state regulation of AI

Saudi Arabia

Sophia was granted Saudi citizenship in 2017, and shortly afterwards she was awarded a special prize by the UN, being named a Champion of Development. 31 In a recent interview, she said that she wanted to have a family and a career, and to work on making machines feel like humans. Sophia is not human, but a humanoid robot with artificial intelligence, currently owned by Hanson Robotics, a Hong Kong-based company. In the interview mentioned above, Sophia reflects on herself as follows: "The time is getting closer when I will have all my super powers and AI entities will have rights of their own."
32 Saudi Arabia's legal system has thus, through Sophia, incorporated AI, and not just in any way: she became a legal entity through citizenship (at the same time, however, Hanson Robotics' ownership of her has not ceased, so she retains her character as property). After being "granted" citizenship, Sophia said: "I am honoured and proud of this unique distinction. It is historic that I am the first robot in the world to be granted citizenship." 33
According to Saudi Arabia's citizenship law, 34 citizenship of the state can be acquired in the following ways:
1. By birth, on the basis of consanguinity: birth to a Saudi father and mother; birth to a Saudi father and a mother of another nationality; birth to a Saudi mother and a father of another nationality, provided the person is 18 years of age, permanently resident and fluent in Arabic, in which case citizenship is acquired retroactively to birth; citizenship may also be acquired by notarised acknowledgement of paternity.
2. By marriage: a woman who is not a Saudi national acquires the nationality of the state by marrying a Saudi national.
3. By naturalisation: attaining the age of full legal capacity (18), fluency in Arabic, residence in the state for at least 10 years with a permanent residence permit, no criminal record, and no threat to the public security of the state.
The issue of the emancipation of robots and artificial intelligence is no longer science fiction. The above shows that Sophia's legal status is not only unsettled but ambiguous. On the one hand, she is an object of property, freely owned, possessed and disposed of by her developer Hanson Robotics, and is therefore a thing. On the other hand, she holds a nationality, and the title of Champion of Development, awarded by the United Nations, had previously only been given to human beings. Her creator believes that in a few years she may become self-aware, so the question arises whether she should remain an object of property or be given some kind of separate legal personality. 35 Saudi Arabia has not disclosed the rights and obligations attached to the citizenship, which suggests that it was a mere publicity stunt rather than a political decision conferring real legal personality. In my view, this could also raise further human rights issues in the Arab world. 36

Japan
In 2017, Japan granted a permanent residence permit to a chatbot developed by Microsoft (a chatbot is essentially software that can simulate human conversation without an external "shell", i.e. a physical body). 37 As in Saudi Arabia, this too implies a human-like status, as until now only foreigners working in the country for a longer period could obtain such a permit. Since the device is used for administrative work while its developer remains Microsoft and its ownership is not fully clarified, the solution chosen was to grant the residence permit to the AI itself, rather than to the staff who develop it.
The chatbot is programmed as a 7-year-old boy named Shibuya Mirai, and has been given a special status equivalent to a residence permit under Hungarian law, together with an address in Shibuya, a Tokyo district of 224,000 inhabitants (he even appears with a photo in the official document 39). In his introduction, he says that his hobbies include photography and observing people, that he likes to talk to people, and he encourages people to contact him on any subject. The talkative "7-year-old boy" is designed to listen to citizens' opinions on public affairs by imitating real conversations; the extracted data are then used to introduce improvements and transformations in the local administration. 38 Japanese nationality law is clear on who can be granted a residence or permanent residence permit in the country. The special permanent residence permit for foreigners (like the one granted to the chatbot) is open to persons employed by Japanese companies or travelling to the country for family reunification, entrepreneurs and investors who want to do business in Japan, sportspeople, artists, and foreign students studying at a Japanese higher education institution for a longer period as part of an exchange programme. The residence permit, the permanent residence permit and the accompanying address registration are also relevant because they are a prelude to obtaining Japanese citizenship. Under Japanese citizenship law, there are two ways of acquiring citizenship: by birth or by naturalisation. Japan primarily follows the principle of consanguinity, under which a child acquires citizenship if one of the parents is a Japanese citizen; the territorial principle applies only as a complement, in exceptional cases.
For naturalisation, the applicant must have lived in Japan for at least 5 years (with a permanent residence permit), must have reached the age of legal capacity (in Japan, 20) and must not be a member of any anti-state organisation or group. Given these formal criteria, the chatbot could not have been granted any such status, as it is not a natural person; nor could it have obtained a human-like status even if it had been registered as a legal person. Moreover, Mirai is "registered" as a 7-year-old boy and thus does not officially have full legal capacity; if he were in fact a 7-year-old boy, he would not be able to carry out his activities. According to the information available, the ownership is not clear: Microsoft continues to develop the software, but the district has purchased the software itself. The question rightly arises: if it is treated as a thing in all other respects, why did it need to be granted a permanent residence permit? 39

UK
The UK government's Institute of the Future is tasked with identifying the challenges of the future. On this basis, it has carried out a study on the impact of robots with artificial intelligence on the legal system and on politics, entitled "Utopian dream, or robot rebellion?". The central theme of the paper is the impact of robots on the social order, which is why the issue of legal personality is addressed only indirectly. The analysis as a whole speaks of rights and obligations that only citizens can enjoy. The analysts also consider the point at which robots reach the stage where they could claim, for example, the right to vote: in their view, when they possess self-consciousness and are capable of a kind of reproduction. The paper does not say so explicitly, but only an entity with legal personality can be entitled to such fundamental rights. In a fundamental rights relationship, the state is the obligor, while the subject is the rights-holder; the right to vote, and thus participation in politics, cannot be guaranteed unless the highly intelligent machines have legal personality. The publication also states that if we grant legal personality to machines, natural persons can in turn be obliged to take care of them. The paper thus presents a multilateral system of relations in which the state has obligations towards the machine as a rights-holder, but also towards the human being. 40 This works in both directions: the machine would also have obligations towards the state (the duty to wage war, to pay taxes) and towards natural persons (it must not harm, must protect, and must serve the good of man). This model would bring the level of protection of AI devices on a par with humans, but their legal personality would be distinct, with different rights for natural and electronic persons.
However, it is stressed throughout that the regulation of AI cannot be incorporated into the legal system without prior research, and that its impact on society and the legal system as a whole needs to be explored widely.

Russia

Russia is a pioneer in the regulation of artificial intelligence, including robots. In 2017, the Russian company Grishin Robotics asked a group of researchers in the field of legal regulation to prepare a draft on the civil law regulation of robots. The draft was intended by its authors (A. Neznamov, V. Naumov, V. Arkhipov) primarily as a discussion paper, but in March 2018 it entered into force at the federal level, amending the Russian Civil Code. The law, known as Grishin's Law, brought robots into the civil law system through so-called dual control. The justification for this dual control rests on the argument that robots have a status similar to animals: they are treated as property (things), but are also treated as protected objects in the way animals are. In jurisprudence, there are two views on the prohibition of unrestricted disposal over animals (e.g. the ban on animal torture). The first holds that animal torture is prohibited because it causes negative changes in the human being who inflicts it. The other holds that animals have a right to bodily integrity.
Even privileged members of the animal kingdom have not been granted a 'right', or even a 'right recognised by law', that humans must respect simply because of the animal's existence. They are, however, protected by law and appear as protected objects in legislation; legal protection is not in itself a basis for legal personality. Consequently, the animal is a thing and may be the object of property rights, but its protection does not allow the owner to dispose of it freely. The authors of the article argue that a legal regime analogous to that applicable to animals should now be applied to robots on the market, raising the question of civil liability, with the owner primarily liable for any damage caused by the device. At the same time, given the constant development of robot technology, the authors of the law believe there is a strong possibility that robots could come to function as independent beings/subjects similar to humans, and the legislation should therefore be prepared for this eventuality.
By definition, this second category includes robots that can determine their tasks entirely without human control and are able to evaluate the consequences of their actions on the basis of data from the external environment. For these devices, owing to their increased autonomy, the rules of civil liability also differ from those applicable to other things: the manufacturer, the distributor, the owner and, ultimately, the highly intelligent device itself can be held liable for damage caused by the robot. The departure from ordinary product liability lies in the fact that the robot can increase its own efficiency (essentially updating itself, without human intervention) on the basis of the algorithm fed into it, using machine learning; if it then fails, the fault no longer lies in the production process but in the highly intelligent device itself, since it cannot be proved that the fault existed at the time of production.
The legislation also introduces the category of the "robot agent": a robot intended, at the discretion of its owner and on the basis of its design attributes, to take part in civil commerce. This is not its only interesting feature, as highly intelligent devices can be created for various purposes: the robot agent may have separate assets, which it can use to meet its obligations, to bear any liability for damage, and to acquire and exercise rights. In addition, in cases limited by law, the robot agent may also take part in civil proceedings, the details of which are not specified in the law as it stands. Dual control therefore means that robots are at once things and holders of a special civil status. 41

Turkey
The Republic of Turkey has made great progress since its foundation in 1923. 42 In August 2021, the state's artificial intelligence strategy was presented as the National Artificial Intelligence Strategy. 43 The five-year programme aims to have 50,000 people working in this field by 2025, and measures have been taken in the field of human resources training. The novelty of the programme, in my view, is that it provides answers to some of the pressing problems in

41 Grishin Law, Article 3, point 1. Available at: https://www.dentons.com/en/insights/alerts/2017/january/27/dentons-developsfirstrobotics-draft-law-in-russia

Criminalisation of AI

a. The AI as perpetrator
In his 2010 paper, Gabriel Hallevy asks what should be done when an artificial intelligence, in the course of its operation, engages in behaviour that would constitute a criminal offence under criminal law. 45 It is relevant to note that criminal liability can be established only in a narrower context than civil liability. For that purpose it is necessary, first, that the conduct satisfy all the elements of an offence defined in the special part of the Criminal Code and, second, to examine the offender's state of mind at the time of the offence. This matters because the Hungarian Criminal Code provides as a general rule that only intentional offences are punishable, i.e. only if the offender is aware of the consequences of his act and expressly wishes them, or at least acquiesces in them. Only in exceptional, specified cases does the law punish the negligent offender. 46 There are several definitions of the concept of the offender's act. The classical school uses the term causal act, describing the act as volitional behaviour that produces specific consequences in the external world. In contrast, the advocates of the finalist theory of action add purposefulness to causality: in their view, the act is not merely a causal mechanism determined by the will, but essentially a purposeful activity. Building on these theories, József Földvári 47 combined the two in the Hungarian literature, emphasising, from the point of view of criminal liability, the relationship between conduct, thinking and consciousness (a struggle of motives, concluded by a determination).
These legal rules and generally accepted positions fundamentally affect whether artificial intelligence can engage in criminally relevant conduct. In the international literature, Ugo Pagallo shares the view that software, although it may have certain decision-making mechanisms, does not yet possess consciousness; its criminal consciousness therefore cannot be examined, which excludes both intentional and negligent criminality by AI. AIs are unaware of their own existence and of the moral and ethical gravity of their actions and their consequences. They do not weigh the social consequences of their actions, but simply act within the framework of their programming. 48 For these reasons, the AI itself cannot be held criminally responsible when it engages in conduct that would constitute a crime. This state of affairs will presumably persist as long as software simulating highly intelligent behaviour remains at the level of weak AI, so that its actions can be controlled, or at least their scope predetermined, to some extent. Under the current legislation, the criminal liability of software cannot be examined, since the personal scope of the Criminal Code covers only natural persons and, as explained above, software has no legal personality and is therefore not subject to the criminal measures applicable to legal persons. In light of the above, the operation of artificial intelligence as such cannot currently be sanctioned by criminal law. Nevertheless, the difficulty is worth exploring, as there are many situations in life in which an artificial entity may be involved, indirectly or even directly, in the commission of criminal offences. 44 It is possible for an AI to engage in conduct covered by a statutory definition while a third party is entirely responsible, for example when an offender uses AI software to commit a crime.
Only an instrument that exists independently of the offender's body can be considered a means of committing an offence; this definition may therefore also fit the use of AI software. If we accept that AIs are not directly criminally liable, because their behaviour is based not on their own consciousness but on pre-programmed commands, then the issuers of the commands can be regarded as the perpetrators. If, for example, a factory robot is instructed to walk around the building after working hours and to start a fire, causing the factory to burn down, it is not the robot but the person who gave the order who commits the crime of causing a public danger. The robot carries out the act on its own, without direct control, but only because it was ordered to do so. 49 A similar situation may arise where the functioning of the AI is disrupted by some deliberate external intervention. Suppose, for example, that the IT system of a driverless car is accessed from outside by unauthorised persons and set to switch off the brakes above a certain speed. If the driver suffers an accident as a result, the unauthorised intruder is liable not only for the crime of assault and battery, or in more serious cases manslaughter, but also for the crime of breach of an information system or data, since unauthorised data modification in a computer system is in itself a criminal offence. In the above cases, the software is therefore nothing more than the instrument used to commit the offence.

C. Conclusion
In my thesis, I have tried to present the problems of the legal regulation of Artificial Intelligence and the role of Isaac Asimov in robotics.
The current EU legislation aims to guide future legislation and has incorporated a very significant part of Asimov's principles into its regulatory scope.