Control of AI: What rights do (autonomous) machines have?

From Dipl.-Phys. Oliver Schonschek | Translated by AI | 4 min reading time

What is AI allowed to do and what not? The answer to this also determines possible risks with autonomous machines. It is not enough to establish guidelines for the use of AI; the policy must also be technically controlled and enforced, for example, with IAM.

To control the permissions of machines, a comprehensive application of Identity and Access Management can help. (Image: freely licensed / Pixabay)

It is the classic scenario when discussing autonomous machines: "the eerily autonomous vehicle," as the Federal Office for Information Security (BSI) once called it. The autonomous vehicle vividly illustrates how dangerous it can be when the underlying artificial intelligence makes mistakes and crosses boundaries.

Challenge of autonomous driving

In road traffic, the decisions of an AI must be comprehensible, and functional safety must be ensured at all times, as the Fraunhofer Institute for Cognitive Systems IKS emphasizes. The EU Agency for Cybersecurity (ENISA) lists a variety of AI risks that can turn autonomous vehicles and other machines into a threat. To minimize these risks, AI systems and autonomous machines must adhere to defined rules, and they must be protected from manipulation and misuse. This includes, for example, ensuring that AI systems controlling autonomous machines may only use certain data, to prevent manipulation. The BSI explicitly warns against data attacks on AI.

Such attacks can start as early as the training of an AI system, by manipulating the underlying data (poisoning attacks), according to the BSI. These attack opportunities arise especially when data or pre-trained models from external sources are used.

Obviously, you have to check and limit the data sources to prevent that. But how do you do that?
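One technical answer is to vet data sources before ingestion. The following is a minimal sketch of such an allowlist check; the host names and the `validate_source` helper are illustrative assumptions, not part of any specific product:

```python
from urllib.parse import urlparse

# Hypothetical allowlist of vetted dataset hosts (illustrative names).
TRUSTED_SOURCES = {"data.internal.example", "vetted-datasets.example"}

def validate_source(dataset_url: str) -> bool:
    """Accept a dataset URL only if its host is on the allowlist."""
    host = urlparse(dataset_url).hostname
    return host in TRUSTED_SOURCES
```

In practice such a check would be one gate in a data pipeline, combined with integrity checks (hashes, signatures) on the datasets themselves.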

AI policies are important, but they are not enough

The EU Cybersecurity Agency warns of AI dangers that can also lead to risks in autonomous machines. To mitigate these risks, the permissions of autonomous systems should be reviewed and limited. (Image: ENISA)

Now, AI guidelines can specify which data sources may be used for training an AI and who is permitted to make data inputs during operation that can influence the AI. But as the so-called Blueprint for an AI Bill of Rights emphasizes ("From Principles to Practice"), the step from principles to practice must not be omitted: a technical implementation is required.

Even for us humans, it is not sufficient for IT policies merely to specify who may use what, when, for what purpose, and how. In addition, access to IT systems and to data, applications, and interfaces must be controlled and technically enforced.

Just like with humans, the guidelines for autonomous machines and AI systems must not only be documented, but they also require technical implementation. Crucial for implementation is that permissions are always tied to a specific digital identity. This is done in IAM (Identity and Access Management).
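The core IAM idea described here can be sketched in a few lines: every permission check starts from a digital identity, regardless of whether the subject is a human, a machine, or an AI system. All identifiers and actions below are illustrative assumptions:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Identity:
    id: str
    kind: str  # "human", "machine", or "ai_system" (illustrative)

# Hypothetical permission store: identity ID -> allowed actions.
PERMISSIONS: dict[str, set[str]] = {
    "svc-vehicle-042": {"read:sensor-data", "call:route-planner"},
    "alice":           {"read:sensor-data", "write:policy"},
}

def is_allowed(identity: Identity, action: str) -> bool:
    """Every check is tied to a specific digital identity."""
    return action in PERMISSIONS.get(identity.id, set())
```

The point of the sketch is that the machine identity `svc-vehicle-042` is handled by exactly the same mechanism as the human user `alice`.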

However, since digital identities exist not only for humans but also for machines and AI systems, IAM can likewise be applied to autonomous machines and AI processes.

Enforce AI policies technically

Guidelines for the use of generative AI include, for example, the rule:

Access controls and permissions: Access to LLM training datasets should be restricted and controlled. The use of these datasets should only be allowed for authorized individuals, organizations, and bodies. Access permissions can be governed by contracts, licenses, or other legal agreements. These agreements can set forth the conditions for the use of datasets.

But this should not apply only to "individuals, organizations, and bodies", and it should not be regulated only contractually.

Rather, data access and the use of other system resources by machines and AI systems should be technically controlled and enforced through IAM. Solutions like Azure Role-Based Access Control (Azure RBAC) could manage not only which human user roles are allowed to use the Azure OpenAI Service, but also which machine identities and other AI systems are permitted to do so. A data access governance solution that checks and regulates who can access sensitive data could also apply this to machine IDs and the identities of AI systems.
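The role-based pattern mentioned above can be sketched generically: identities (human or machine) receive roles, and roles carry permissions. The role and action names below are illustrative assumptions, not actual Azure role definitions:

```python
# Hypothetical role definitions: role -> permitted actions.
ROLE_PERMISSIONS = {
    "openai-user":  {"use:openai-service"},
    "data-reader":  {"read:training-data"},
}

# Hypothetical role assignments for human and machine identities alike.
ASSIGNMENTS = {
    "alice":         {"openai-user"},   # human user
    "svc-chatbot":   {"openai-user"},   # machine identity
    "ai-pipeline-7": {"data-reader"},   # AI system identity
}

def allowed(principal: str, action: str) -> bool:
    """RBAC check: allowed if any assigned role grants the action."""
    return any(action in ROLE_PERMISSIONS.get(role, set())
               for role in ASSIGNMENTS.get(principal, set()))
```

In a real deployment, the role definitions and assignments would live in the IAM platform (e.g. as Azure RBAC role assignments scoped to the AI service), not in application code.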

With offerings like Glean or Adobe Experience Platform (attribute-based access control in Customer AI), it can be determined whether and how certain data may be used by generative AI. Such an approach shows how guidelines for autonomous machines and integrated AI systems can be implemented in detail.
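Attribute-based access control (ABAC), as mentioned above, decides based on attributes of both the accessing identity and the data, not just on a role. A minimal sketch, in which the attribute names and the sample rule are purely illustrative:

```python
def abac_decision(subject: dict, resource: dict) -> bool:
    """ABAC sketch: may this AI process use this dataset?

    Illustrative rule: data labeled "sensitive" may only be consumed
    by AI systems that carry a (hypothetical) certification attribute.
    """
    if resource.get("label") == "sensitive":
        return subject.get("certified_for_sensitive", False)
    return True
```

Real ABAC engines evaluate such rules from centrally managed policies, so the rule can be changed without touching the AI system itself.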

Also regulate access to applications, interfaces, and AI services

In addition to the guidelines for data access, IAM solutions can also be used to implement specifications regarding which identity may use certain applications, interfaces, services, or AI services. It is possible to define under which conditions a possibly autonomous machine should gain access to a specific AI.
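Such conditional access for a machine calling an AI service can also be sketched briefly. The conditions checked here (verified identity, current firmware, approved network zone) are illustrative assumptions about what a policy might require:

```python
def grant_access(machine: dict) -> bool:
    """Conditional-access sketch: an autonomous machine may call the
    AI service only if all (hypothetical) conditions hold."""
    return (machine.get("identity_verified", False)
            and machine.get("firmware_current", False)
            and machine.get("network") in {"plant-a", "plant-b"})
```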


With concepts and solutions like IAM, detailed boundaries for intelligent, autonomous machines can be defined and enforced. It is possible to specify which rights an autonomous machine may have and which it may not, much as a permissions and roles system is set up for users. AI policies and IAM should therefore be closely linked, so that autonomous systems can be restricted in their rights and possible risks minimized, as the example of autonomous driving quickly makes clear.