Nearly a third of UK employees are sharing private company data with ChatGPT

11 July 2023

• 40% of ChatGPT users don’t fact-check answers before using them in the workplace
• Company guidelines are severely lacking in UK organisations and, in the majority of cases, guidance is only given verbally
• Only 26% of employees know how ChatGPT generates its information

According to new research by Kaspersky, 29% of employees in the UK are deliberately sharing private company data with ChatGPT at work, potentially running the risk of compromising data privacy, intellectual property and sensitive information. With 33% reporting that there are no guidelines in place to regulate the use of ChatGPT in the workplace and 22% not knowing how its data processing works, addressing data privacy and content verification in connection with AI tools has become a top priority for management.

ChatGPT has become wildly popular in recent months, reaching one million users in its first five days and passing the 100 million-visitor mark in just two months. Thanks to its many functionalities, from generating content to checking pieces of text, the tool is also proving to be extremely handy to use at work. Kaspersky therefore investigated how Brits are using ChatGPT in their own workplace.

Lack of Workplace Guidelines

One in three workers (33%) indicated that rules and guidelines are completely lacking when it comes to using generative AI tools like ChatGPT in their workplace, and nearly a quarter (24%) admitted that existing guidelines are not clear or comprehensive enough. Furthermore, 10% believe that rules or guidelines are not even necessary, which could lead to ChatGPT being misused and to privacy and transparency issues going unaddressed.

When asked what those guidelines look like, nearly half of respondents (47%) stated that they were given instructions verbally, either in an all-company meeting (34%) or individually (13%). Only 26% said that rules were laid out formally in an official email, and just 11% said a standalone formal document had been produced, suggesting that the majority of UK organisations are not taking the matter seriously enough.

“Despite their obvious benefits, we must remember that language model tools such as ChatGPT are still imperfect, as they are prone to generating unsubstantiated claims and fabricating information sources. Privacy is also a big concern, as many AI services can re-use user inputs to improve their systems, which can lead to data leaks. This is also the case if hackers steal users’ credentials (or buy them on the dark web), as they could gain access to the potentially sensitive information stored as chat history,” said Vladislav Tushkanov, Data Science Lead at Kaspersky. “Businesses that might benefit from using ChatGPT and other LLM services in their workplaces may want to formalise detailed policies for employees to regulate their use. With clear guidelines in place, employees can both avoid overreliance and prevent the potential data leaks that would undermine even the strongest cybersecurity strategy.”

Checking the Facts Before Taking the Glory

The majority of respondents (58%) admitted to using ChatGPT at work as a time-saving shortcut, such as for summarising long texts or meeting notes. Similarly, 56% are using it for other content-related tasks, such as generating fresh content, creating translations or improving texts.

However, when it comes to using that content, a large proportion of employees (40%) stated that they do not verify its accuracy or reliability before passing it off as their own work. By contrast, a third of respondents (33%) check the output before using it, even when it is copied and pasted verbatim.

Methodology: Using an online questionnaire, Kaspersky sampled 1,000 British full-time employees to understand their use of ChatGPT at work, to what extent they are transparent about it and how they handle the output of their searches in a work context.


About Kaspersky

Kaspersky is a global cybersecurity and digital privacy company founded in 1997. With over a billion devices protected to date from emerging cyberthreats and targeted attacks, Kaspersky’s deep threat intelligence and security expertise is constantly transforming into innovative solutions and services to protect businesses, critical infrastructure, governments and consumers around the globe. The company’s comprehensive security portfolio includes leading endpoint protection, specialized security products and services, as well as Cyber Immune solutions to fight sophisticated and evolving digital threats. We help over 200,000 corporate clients protect what matters most to them. Learn more at www.kaspersky.com.
