Generative AI simplifies everyday work, but its use comes with risks. Forcepoint provides seven tips on how to harness the potential of chatbots like ChatGPT, Copilot, or Gemini without compromising data protection and data security.
Sometimes only a few clicks separate benefit from harm. This is especially true of the careless use of generative AI.
Generative AI takes over many time-consuming tasks for employees and makes them more efficient. No wonder they want to use these practical helpers in their daily work and often get started without waiting for officially introduced company tools. However, this creates significant risks: not only do data protection violations and leaks of sensitive company data loom, but also unfair or incorrect decisions if employees place too much trust in AI output and overlook bias or errors. Liability issues can also arise should the use of AI lead to discrimination, faulty decisions, or copyright infringement. Companies therefore urgently need a plan for handling GenAI and safely introducing new tools.
In our experience, the following approach has proven effective:
Establish a standard process

Companies need a consistent process for the application, evaluation, and approval of new AI tools as well as their subsequent implementation and securing. The standardized process ensures that the tools meet all internal requirements—such as those related to benefits, costs, and data protection—and are always selected based on the same criteria. It also prevents tools from entering the company through back channels and being used without employee training or sufficient security measures.
Establish an AI Council

Since AI impacts many areas of a company, not just the specific department looking to implement a new tool, it makes sense to establish an AI Council. This is a committee that brings together experts from IT, the security team, and the legal department, among others, and works closely with the departments. It not only evaluates all use cases and AI tools individually but also ensures that no unnecessary tools are implemented. Additionally, it provides advisory support to departments and communicates the benefits and successes of GenAI projects within the company to increase their acceptance.
Set priorities

The introduction of new technologies and applications always comes with challenges and changes. Therefore, especially in the early stages of their GenAI journey, companies should avoid introducing too many AI tools simultaneously to prevent getting lost in multiple projects. It is better to set priorities and initially focus on one or two use cases and tools that offer significant benefits or are attractive to multiple departments. The experience gained during implementation can then help establish additional tools more quickly and smoothly within the company.
Train employees

After selecting and implementing new AI tools, companies should not leave their employees to handle them alone. Future users need guidelines on how to work with the tools—they must know what is allowed and what is not, as well as the associated risks. Training sessions are essential to inform them about the guidelines, allow them to practice working with the tools, and teach them not to blindly trust the AI but to question and verify its results.
Do not leave everything to AI

When companies automate their processes with the help of AI, they should carefully evaluate which decisions are entrusted to the algorithms and where human oversight or decisions are necessary. On the one hand, this is about preventing discriminatory and incorrect decisions; on the other hand, it is about complying with the EU AI Act. For high-risk AI systems, such as those in critical infrastructures, human resources, or credit assessments, the act requires human supervision.
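The oversight principle can be sketched in code. This is a minimal illustration with an invented `decide` helper and invented risk domains; classifying a system as high-risk under the EU AI Act is a legal assessment, not a lookup table:

```python
# Hypothetical sketch: route high-risk AI decisions to a human reviewer
# instead of letting the algorithm act autonomously. Domain names and the
# confidence threshold are illustrative assumptions.

HIGH_RISK_DOMAINS = {"hiring", "credit_scoring", "critical_infrastructure"}

def decide(domain: str, ai_recommendation: str, confidence: float) -> str:
    """Return the decision path for an AI recommendation."""
    if domain in HIGH_RISK_DOMAINS or confidence < 0.8:
        # Human oversight required: queue for manual review.
        return f"ESCALATE_TO_HUMAN:{ai_recommendation}"
    return f"AUTO_APPROVE:{ai_recommendation}"

print(decide("hiring", "reject_candidate", 0.95))  # escalated: high-risk domain
print(decide("email_routing", "spam", 0.99))       # automated: low-risk, confident
```

The point of the sketch is the routing step: the high-risk branch never returns an autonomous decision, regardless of how confident the model is.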
Regulate access to AI tools

To ensure that employees only use verified and approved AI tools, companies should protect access with security solutions that combine tools like Cloud Access Security Broker (CASB), Zero Trust Network Access (ZTNA), and Secure Web Gateway (SWG). Effective solutions allow access to be regulated based on users, groups, and other criteria and enforce policies even on unmanaged devices. When unauthorized AI tools are accessed, it is possible to redirect employees to an already approved alternative.
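As a rough illustration of the redirect behavior described above, the following sketch models a gateway-style policy table. The host names and the `route_request` helper are invented for the example; real CASB/SWG products configure this through their own policy engines:

```python
# Illustrative sketch of a web-gateway access policy for GenAI tools.
# All host names are hypothetical placeholders.

APPROVED_AI_TOOLS = {"copilot.approved.example"}
REDIRECTED_AI_TOOLS = {
    # unapproved tool -> sanctioned alternative
    "chat.unapproved.example": "copilot.approved.example",
}

def route_request(host: str) -> str:
    """Decide what the gateway does with a request to `host`."""
    if host in APPROVED_AI_TOOLS:
        return "ALLOW"
    if host in REDIRECTED_AI_TOOLS:
        # Steer the employee to the already approved alternative.
        return f"REDIRECT:{REDIRECTED_AI_TOOLS[host]}"
    return "ALLOW"  # non-AI traffic is handled by other policy layers
```

In practice, the redirect rule is what keeps shadow AI usage visible and manageable: instead of a dead end, employees land on a tool the company has already vetted.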
Prevent data leaks

Policies and training alone are not sufficient to effectively prevent data protection violations or data leaks. After all, employees may intentionally or accidentally input or upload personal or confidential data into AI tools. Data security solutions can prevent this. Ideally, they identify and classify sensitive data across all company storage locations and block its transfer. For less critical data, a warning to the employee is often sufficient, while for highly critical data, the transfer should be directly blocked.
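The tiered response (warn for less critical data, block for highly critical data) can be sketched with a simple pattern-based check. The patterns and the `inspect` helper are illustrative assumptions; production data security solutions use far richer classification than two regular expressions:

```python
import re

# Minimal DLP-style sketch: classify outbound text before it reaches an
# AI tool. Patterns and severity tiers are illustrative stand-ins.

PATTERNS = {
    "payment_card": (re.compile(r"\b(?:\d[ -]?){13,16}\b"), "block"),
    "email_address": (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "warn"),
}

def inspect(text: str) -> str:
    """Return 'block', 'warn', or 'allow' for a prompt or upload."""
    verdict = "allow"
    for _name, (pattern, action) in PATTERNS.items():
        if pattern.search(text):
            if action == "block":
                return "block"  # highly critical data: stop the transfer
            verdict = "warn"    # less critical: notify the employee
    return verdict
```

The escalation logic mirrors the tip above: a match on highly critical data short-circuits to a hard block, while less critical matches only trigger a warning so employees stay productive.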
Date: 08.12.2025
Moving Thoughtfully Toward Safe and Successful AI Use
GenAI is a productivity booster that companies should not do without. However, they need appropriate processes and solutions to select and implement the most suitable tools, prevent unauthorized use, and avoid data leaks. In doing so, they should also take care not to introduce too many standalone solutions.
Platforms in which all solutions work together and draw on a central set of policies are the better choice: they apply security guidelines consistently not only to GenAI but also to email, cloud services, the web, and every other channel through which data can leak.
Fabian Glöser works as Team Leader Sales Engineering at Forcepoint in Munich.