By ESET Expert

What employees need to know before chatting with ChatGPT


Discussions about AI tools have become increasingly prominent in recent months. Because of their ability to boost productivity and save time, many employees have already adopted them into their daily work routines. However, before reaping the benefits of innovative AI tools, your employees should know how to engage with them securely – without jeopardizing your company’s data safety.

AI tools may help us develop ideas, summarize or rephrase pieces of text, create the basis for a business strategy, or even find a bug in code. When using AI, however, we must remember that the data we enter into these tools ceases to belong to us as soon as we press the send button.

One of the primary concerns when utilizing large language models (LLMs), such as ChatGPT, is the sharing of sensitive data with large international corporations. These models are trained on vast amounts of online text, enabling them to effectively interpret and respond to user queries. However, every time we interact with a chatbot and ask for information or assistance, we may inadvertently share data about ourselves or our company.

When we write a prompt for a chatbot, the entered data is out of our hands. This does not mean chatbots would immediately use this information as a basis for replies to other users. But the LLM provider or its partners may have access to these queries and could incorporate them into future versions of the technology.

OpenAI, the organization behind ChatGPT, introduced the option to turn off chat history, which prevents user data from being used to train and improve OpenAI’s AI models. That way, users get more control over their data. If the employees in your company would like to use tools such as ChatGPT, turning chat history off should be their first step.

But even with chat history turned off, all prompt data is still stored on the chatbot provider's servers. Saving all prompts on external servers creates a potential threat of unauthorized access by hackers. Furthermore, technical bugs can occasionally enable unauthorized individuals to access data belonging to other chatbot users.

So, how do you ensure that your company's employees use platforms such as ChatGPT securely? Here are some mistakes employees often make, and ways to avoid them.

Using client data as an input

The first common mistake employees make when using LLMs is inadvertently sharing sensitive information about their company’s clients. What does that look like? Imagine, for instance, doctors submitting their patients’ names and medical records and asking the LLM tool to write letters to the patients’ insurance companies. Or marketers uploading customer data from their CRM systems and prompting the tool to compile targeted newsletters.

Teach employees to permanently anonymize their queries before entering them into chatbots. To protect customer privacy, encourage them to review and carefully redact sensitive details, such as names, addresses, or account numbers. The best practice is to avoid using personal information in the first place and to rely on general questions or queries.

Uploading confidential documents into chatbots

Chatbots can be valuable tools for quickly summarizing large volumes of data, and creating drafts, presentations, or reports. Still, uploading documents to tools such as ChatGPT may mean endangering company or client data stored in them. While it may be tempting to copy documents and ask the tool to create summaries or suggestions for presentation slides, it is not a data-secure way to go.

This applies to important papers, such as development strategies, but even less essential documents – such as notes from a meeting – may lead employees to uncover their company’s treasured know-how.

To mitigate this risk, establish strict policies for handling sensitive documents, and limit access to such records with a "need to know" policy. Employees need to manually review the documents before requesting a summary or assistance from the chatbot. This ensures that sensitive information, such as names, contact information, sales figures, or cash flow, is deleted or appropriately anonymized.
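The manual review step can be supported by a simple pre-upload check. The sketch below flags lines containing sensitive terms or currency figures for human review; the keyword list and patterns are hypothetical examples that would need tuning to a company's own terminology.

```python
import re

# Hypothetical watchlist a review checklist might use; adjust these
# terms and patterns to match your own company's documents.
SENSITIVE_TERMS = ("confidential", "salary", "cash flow", "revenue")
CURRENCY = re.compile(r"[$€£]\s?\d[\d,.]*")

def flag_lines(document: str) -> list[tuple[int, str]]:
    """Return (line number, line) pairs that need manual review
    before the document is sent to a chatbot."""
    flagged = []
    for no, line in enumerate(document.splitlines(), start=1):
        lowered = line.lower()
        if any(term in lowered for term in SENSITIVE_TERMS) or CURRENCY.search(line):
            flagged.append((no, line))
    return flagged

notes = "Agenda\nQ3 revenue reached $1.2M\nNext meeting Friday"
for no, line in flag_lines(notes):
    print(f"line {no} needs review: {line}")
```

A check like this catches obvious leaks, but it complements rather than replaces the "need to know" access policy and manual review described above.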

Exposing the company's data in prompts

Imagine you are trying to improve some of your company's practices and workflows. You ask ChatGPT to help with time management or task structure and input valuable know-how and other data into the prompt to assist the chatbot in developing a solution. Just like entering sensitive documents or client data into chatbots, including sensitive company data in the prompt is a common, yet potentially damaging, practice that can lead to unauthorized access or leakage of confidential information.


Samsung nixes “generative AI,” Amazon keeps it cautious.

At the beginning of 2023, an engineer at Samsung discovered that sensitive internal source code he had uploaded to ChatGPT was leaked. This led Samsung to ban “generative AI” tools in the company. Amazon encountered a similar issue: the company reportedly came across ChatGPT responses that resembled internal Amazon data. In this case, however, Amazon did not ban AI tools, but warned its employees to use them responsibly.


To address this issue, make prompt anonymization an essential practice: no names, addresses, financials, or other personal data should ever be entered into chatbot prompts. If you want to make it easier for employees to use tools such as ChatGPT securely, create standardized prompts as templates that all employees can use safely, such as "Imagine you are [position] in [company]. Create a better weekly workflow for [position] focused mainly on [task]."
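Such a template could be distributed as a small helper so employees fill in role-level details rather than personal data. The snippet below is a minimal sketch; the template wording and function name are assumptions based on the example above.

```python
from string import Template

# A hypothetical pre-approved prompt template; the placeholder names
# mirror the bracketed fields from the example in the text.
WORKFLOW_TEMPLATE = Template(
    "Imagine you are $position in $company. "
    "Create a better weekly workflow for $position focused mainly on $task."
)

def build_prompt(position: str, company: str, task: str) -> str:
    """Fill the approved template with role-level (never personal) details."""
    return WORKFLOW_TEMPLATE.substitute(position=position, company=company, task=task)

print(build_prompt("a project manager", "a mid-sized software company", "sprint planning"))
```

Because employees only supply generic role descriptions, the resulting prompt carries no client names, account numbers, or internal figures.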


AI tools are not just the future of our work; they are already part of its present. As progress in AI and, specifically, machine learning moves forward every day, companies inevitably need to follow the trends and adapt to them. From data security specialists to IT generalists, make sure your colleagues know how to use these technologies without risking a data leak.
