GCHQ warns that ChatGPT and competing chatbots pose a security risk

ChatGPT and other AI-powered chatbots are an emerging security threat, according to the spy agency GCHQ.

The National Cyber Security Centre warns in an advisory note published on Tuesday that companies such as ChatGPT maker OpenAI and its investor Microsoft “are able to read queries” typed into AI-powered chatbots.

“The query will be visible to the organization providing the [chatbot], so in the case of ChatGPT, to OpenAI,” said GCHQ’s cyber security arm.

Bing Chat, the chatbot service Microsoft launched in February, took the world by storm thanks to its ability to hold human-like conversations with users.

In the advisory, the NCSC cautioned that curious office workers experimenting with chatbot technology may reveal sensitive information through their queries.

“Those queries are stored and will almost certainly be used for developing the LLM service or model at some point,” GCHQ cyber security experts said, referring to the large language model (LLM) technology that powers AI chatbots.

“This could imply that the LLM provider (or its partners/contractors) can read queries and may incorporate them in some way into future versions.

"As a result, before asking sensitive questions, the terms of use and privacy policy must be thoroughly understood."

Microsoft disclosed in February that its staff reads users’ conversations with Bing Chat, monitoring exchanges to detect “inappropriate behavior”.

“While LLM operators should have measures in place to secure data, the possibility of unauthorized access cannot be entirely ruled out,” said Immanuel Chavoya, senior security manager at cyber security company SonicWall.

“As a result, businesses must ensure that they have strict policies in place, supported by technology, to control and monitor the use of LLMs in order to reduce the risk of data exposure.”

The NCSC also warned that AI-powered chatbots can have “serious flaws,” as Microsoft and Google have discovered.

An error generated by Google’s Bard AI chatbot, which gave a wrong answer about scientific discoveries made with the James Webb Space Telescope, wiped $120 billion (£98.4 billion) from Google’s market valuation.

The mistake was prominently displayed in Google’s promotional materials for Bard’s launch.

Mishcon de Reya, a law firm in London, has prohibited its lawyers from entering client data into ChatGPT for fear of legally privileged information leaking or being compromised.

Accenture has also warned its 700,000 employees worldwide not to use ChatGPT for similar reasons, as nervous executives fear confidential customer data will fall into the wrong hands.

Other businesses around the world are becoming increasingly skeptical of chatbot technology.

SoftBank, the Japanese tech conglomerate that owns Arm, has warned its employees not to enter “company identifiable information or confidential data” into AI chatbots.

Not every firm is steering clear, however: some have embraced the technology.

City law firm Allen & Overy has deployed Harvey, a chatbot tool developed in collaboration with ChatGPT maker OpenAI.