7 Best Practices Against GenAI Proliferation in the Company

By Fabian Glöser* | Translated by AI | 4 min reading time


Generative AI simplifies everyday work, but its use comes with risks. Forcepoint provides seven tips on how to harness the potential of chatbots like ChatGPT, Copilot, or Gemini without compromising data protection and data security.

Sometimes only a few clicks separate benefit and harm—this is especially true for the careless use of generative AI. (Image: freely licensed / AI-generated / Unsplash)

Generative AI takes many time-consuming tasks off employees' hands and makes them more efficient. No wonder they want to use these practical helpers in their daily work, often without waiting for officially sanctioned tools from the company. This creates significant risks: data protection violations and leaks of sensitive company data loom, as do unfair or incorrect decisions when employees place too much trust in AI output and overlook biases or errors. Liability questions also arise should the use of AI lead to discrimination, faulty decisions, or copyright infringements. Companies therefore urgently need a plan for handling GenAI and introducing new tools safely.

In our experience, the following approach has proven effective:

  1. Establish a standard process: Companies need a consistent process for the application, evaluation, and approval of new AI tools, as well as their subsequent implementation and securing. A standardized process ensures that the tools meet all internal requirements—such as those related to benefits, costs, and data protection—and are always selected based on the same criteria. It also prevents tools from entering the company through back channels and being used without employee training or sufficient security measures.

  2. Establish an AI Council: Since AI impacts many areas of a company, not just the specific department looking to implement a new tool, it makes sense to establish an AI Council. This is a committee that brings together experts from IT, the security team, and the legal department, among others, and works closely with the departments. It not only evaluates all use cases and AI tools individually but also ensures that no unnecessary tools are implemented. Additionally, it provides advisory support to departments and communicates the benefits and successes of GenAI projects within the company to increase their acceptance.

  3. Set priorities: The introduction of new technologies and applications always comes with challenges and changes. Therefore, especially in the early stages of their GenAI journey, companies should avoid introducing too many AI tools simultaneously to prevent getting lost in multiple projects. It is better to set priorities and initially focus on one or two use cases and tools that offer significant benefits or are attractive to multiple departments. The experience gained during implementation then helps establish additional tools more quickly and smoothly within the company.

  4. Train employees: After selecting and implementing new AI tools, companies should not leave their employees to handle them alone. Future users need guidelines on how to work with the tools—they must know what is allowed and what is not, as well as the associated risks. Training sessions are essential to inform them about the guidelines, allow them to practice working with the tools, and teach them not to blindly trust the AI but to question and verify its results.

  5. Do not leave everything to AI: When companies automate their processes with the help of AI, they should carefully evaluate which decisions are entrusted to the algorithms and where human oversight or decisions are necessary. On the one hand, this is about preventing discriminatory and incorrect decisions; on the other hand, it is about complying with the EU AI Act. For high-risk AI systems, such as those in critical infrastructure, human resources, or credit assessment, the act requires human oversight.
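The routing logic behind this point can be sketched in a few lines. The risk categories and confidence threshold below are illustrative assumptions, not the EU AI Act's actual (far more nuanced) classification:

```python
from dataclasses import dataclass

# Hypothetical risk categories; the EU AI Act's real classification is more nuanced.
HIGH_RISK_USE_CASES = {"credit_scoring", "hiring", "critical_infrastructure"}

@dataclass
class AIDecision:
    use_case: str
    outcome: str
    confidence: float

def route_decision(decision: AIDecision) -> str:
    """Return 'auto' if the AI result may be applied directly,
    or 'human_review' if a person must sign off first."""
    if decision.use_case in HIGH_RISK_USE_CASES:
        return "human_review"   # human oversight mandated for high-risk systems
    if decision.confidence < 0.9:
        return "human_review"   # low-confidence results also get a second look
    return "auto"
```

The key design choice is that the use case, not the individual result, determines whether a human must be in the loop: a high-risk decision goes to review even when the model is confident.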

  6. Regulate access to AI tools: To ensure that employees only use verified and approved AI tools, companies should protect access with security solutions that combine tools like Cloud Access Security Broker (CASB), Zero Trust Network Access (ZTNA), and Secure Web Gateway (SWG). Effective solutions allow access to be regulated based on users, groups, and other criteria and enforce policies even on unmanaged devices. When unauthorized AI tools are accessed, it is possible to redirect employees to an already approved alternative.
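A minimal sketch of such a gateway rule, assuming a hypothetical approved in-house tool at `copilot.internal.example.com` and made-up domain and group names (real CASB/SWG products evaluate far richer request context):

```python
# Hypothetical allow-list gate: requests for unapproved GenAI tools
# are redirected to a sanctioned alternative.
APPROVED_TOOLS = {"copilot.internal.example.com"}
REDIRECT_TARGET = "copilot.internal.example.com"  # assumed company-approved tool

KNOWN_GENAI_DOMAINS = {
    "chat.openai.com",
    "gemini.google.com",
    "copilot.internal.example.com",
}

def gateway_verdict(user_group: str, domain: str) -> str:
    """Return 'allow', 'redirect:<target>', or 'pass-through' for a web request."""
    if domain not in KNOWN_GENAI_DOMAINS:
        return "pass-through"   # not a GenAI tool; normal web policy applies
    if domain in APPROVED_TOOLS and user_group in {"staff", "engineering"}:
        return "allow"
    return f"redirect:{REDIRECT_TARGET}"  # steer users to the approved alternative
```

The redirect verdict mirrors the article's point: rather than simply blocking, the gateway steers the employee toward the tool the company has already vetted.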

  7. Prevent data leaks: Policies and training alone are not sufficient to effectively prevent data protection violations or data leaks. After all, employees may intentionally or accidentally input or upload personal or confidential data into AI tools. Data security solutions can prevent this. Ideally, they identify and classify sensitive data across all company storage locations and block its transfer. For less critical data, a warning to the employee is often sufficient, while for highly critical data, the transfer should be blocked outright.
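The tiered warn-versus-block behavior can be illustrated with a toy classifier. The two regex detectors below are deliberately simplistic assumptions; real DLP suites use far richer detection (fingerprinting, exact data matching, machine learning):

```python
import re

# Hypothetical classification rules; real DLP products use far richer detectors.
PATTERNS = {
    "high": re.compile(r"\b\d{4}[ -]?\d{4}[ -]?\d{4}[ -]?\d{4}\b"),  # card-like numbers
    "medium": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),            # email addresses
}

def dlp_action(text: str) -> str:
    """Block uploads containing highly critical data, warn on less critical data."""
    if PATTERNS["high"].search(text):
        return "block"   # highly critical data: transfer is stopped outright
    if PATTERNS["medium"].search(text):
        return "warn"    # less critical data: employee gets a warning instead
    return "allow"
```

Checking the highest tier first ensures that text containing both categories of data is blocked rather than merely flagged.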


A Thoughtful Path to Safe and Successful AI Use

GenAI is a productivity booster that companies should not do without. However, they need appropriate processes and solutions to select and implement the most suitable tools, prevent unauthorized use, and avoid data leaks. In doing so, they should also take care not to introduce too many standalone solutions.

A better choice is a platform in which all solutions work together and draw on a central set of policies, so that security guidelines are applied consistently not only to GenAI but also to email, cloud services, the web, and every other channel through which data can leak.

Fabian Glöser works as Team Leader Sales Engineering at Forcepoint in Munich.