ChatGPT: A new insider threat use case

Since ChatGPT became available for public use last November, it has raised questions for employers about appropriate use cases, how best to incorporate the tool into the workplace, and how to maintain compliance. Confidentiality and data privacy are the primary concerns, because employees may share proprietary, confidential, or trade secret information in their conversations with ChatGPT. Insider threats, whether deliberate or accidental, follow directly from that possibility.

It’s clear that ChatGPT is a powerful tool. It reached 100 million monthly active users in January, only two months after its launch, which would make it the fastest-growing online application in history. ChatGPT’s viral success has kickstarted a race among tech companies to bring AI products to market, suggesting its use in the workplace isn’t set to fade anytime soon.

Some businesses have encouraged workers to incorporate ChatGPT into their daily work, but others worry about the risks. Companies such as JPMorgan have restricted workers’ use of ChatGPT, and Amazon, Microsoft, and Walmart have all warned employees to take care when using generative AI services.

It's counterproductive to pretend employees will not use ChatGPT

Nearly every worker could use ChatGPT to speed up some part of their workflow, and employees in industries ranging from marketing to healthcare to education are already taking advantage of the tool. In a survey conducted by the professional social network Fishbowl, 43% of professionals reported using AI tools for work-related tasks, and most of them (70%) did so without their bosses’ knowledge.

It is not really ChatGPT itself that employers need to worry about; it is how employees use it. The first step in mitigating the security risks is gaining visibility into how employees actually use the tool.

The problem with putting company data into ChatGPT

OpenAI uses the content people put into ChatGPT as training data to improve its technology. This is problematic because employees are copying and pasting all kinds of confidential data into ChatGPT to have the tool rewrite it, and that information could be retrieved at a later date if proper data security isn’t in place.

As people around the world became enthusiastic about ChatGPT’s capabilities, malicious actors were captivated too. Unsurprisingly, it did not take long before hackers started using the tool to engineer better social engineering attacks, crafting more convincing phishing emails and social media messages designed to trick individuals or organizations into disclosing sensitive information or clicking malicious links.

The traditional security products that companies rely on to protect their data from external attacks are blind to employee use of ChatGPT. Data going to ChatGPT often doesn’t contain a recognizable pattern that these tools look for, such as a credit card number or Social Security number.
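To make that blind spot concrete, here is a minimal sketch of the kind of regex-based pattern matching a traditional data loss prevention (DLP) check performs. The patterns and example strings are hypothetical and not drawn from any specific product; the point is that structured identifiers match while free-form proprietary text passes through untouched:

```python
import re

# Patterns a traditional DLP scan might look for in outbound traffic.
# (Illustrative only; real products use far more elaborate rule sets.)
PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def flags(text: str) -> list[str]:
    """Return the names of any sensitive-data patterns found in text."""
    return [name for name, pattern in PATTERNS.items() if pattern.search(text)]

# A structured identifier is caught...
print(flags("Customer SSN is 078-05-1120"))  # ['ssn']

# ...but free-form proprietary text raises no flags at all.
print(flags("Rewrite our Q3 roadmap: launch Project Falcon in EMEA first"))  # []
```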

In a time when bad actors can easily access AI, ML, and NLU tools for malicious intent, how do organizations protect themselves from these targeted attacks?

Protect your business from within

With today’s cloud-connected, distributed, and highly collaborative workforce, employees are your biggest asset and potentially your biggest risk.

Insider threats, both malicious and inadvertent, continue to top the list of regularly occurring security incidents. The 2022 Ponemon Cost of Insider Threats Global Report highlighted a staggering 44% increase in incidents over the prior year, with data theft the leading activity (42% of it IP-related).

innerActiv meets this challenge with a leading Insider Risk Intelligence Platform. Powerful analytics look across user behavior and data movement on endpoints, networks, the cloud, and on-premises systems to provide complete visibility, detection, prevention, and response, proactively mitigating critical risk and safeguarding sensitive data from unintentional or malicious leaks.

With innerActiv, ChatGPT is just another “application” that can be monitored and safeguarded. Here are a few of the ways innerActiv is helping to protect internal use of ChatGPT in organizations:

  • Implement multiple rule sets to monitor an identity or group of users for specific keywords, browser activity, or application usage

  • Capture keystroke activity, clipboard activity, and user productivity

  • Generate alerts or events if a user enters defined keywords or patterns in the ChatGPT chat window

  • Monitor any clipboard activity, whether copying content from a confidential document and pasting it into the ChatGPT chat window or vice versa (see the sketch after this list)
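As an illustration of how rules like these fit together, here is a minimal sketch of a keyword-and-pattern check that a monitoring agent might run against captured clipboard or keystroke text. The rule set, user name, and application identifier are all hypothetical; innerActiv’s actual detection logic is proprietary and considerably richer:

```python
import re
from datetime import datetime, timezone

# Hypothetical rule set an administrator might define.
WATCH_KEYWORDS = {"confidential", "internal only", "project falcon"}
WATCH_PATTERNS = [re.compile(r"\bAPI[_ ]?key\b", re.IGNORECASE)]

def scan_capture(user: str, app: str, text: str) -> dict | None:
    """Check captured clipboard/keystroke text against the rule set.

    Returns an alert event if anything matches, else None. A real agent
    would forward these events to a central analytics pipeline.
    """
    lowered = text.lower()
    hits = [kw for kw in WATCH_KEYWORDS if kw in lowered]
    hits += [p.pattern for p in WATCH_PATTERNS if p.search(text)]
    if not hits:
        return None
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "application": app,
        "matched": hits,
        "severity": "high" if app == "chat.openai.com" else "medium",
    }

# Example: a paste into the ChatGPT browser tab trips two rules.
event = scan_capture("jdoe", "chat.openai.com",
                     "Internal only: Project Falcon launch specs")
print(event)
```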

Since the vast majority of security threats follow a pattern or sequence of activity leading up to an event, insider threat intelligence is needed to measure, detect, and contain undesirable behavior by trusted accounts within an organization. Through continuous monitoring of user and system access, activity, and data movement, a baseline of trusted behavior can be established, bringing to light risks you might not otherwise notice.
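At its simplest, baselining works like the sketch below: compare a user’s activity today against their own history and flag large deviations. This is a toy z-score test with made-up numbers, assuming daily data-movement volume as the metric; production platforms model many more behavioral signals:

```python
from statistics import mean, stdev

def is_anomalous(history: list[float], today: float, threshold: float = 3.0) -> bool:
    """Flag today's activity if it deviates sharply from the user's baseline.

    history: e.g. megabytes copied to the clipboard per day in past weeks.
    A simple z-score test; real systems use far richer behavioral models.
    """
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return today != mu
    return abs(today - mu) / sigma > threshold

# A user who normally moves ~5 MB/day suddenly moves 250 MB.
baseline = [4.2, 5.1, 6.0, 4.8, 5.5, 4.9, 5.3]
print(is_anomalous(baseline, 250.0))  # True
```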

There are many legitimate uses of ChatGPT in the workplace, and companies that find ways to leverage it to improve productivity without risking their sensitive data stand to benefit. Learn more about how innerActiv can help you protect from within.
