Apple restricts employee ChatGPT use as companies worry about data leaks

Apple is joining the list of companies restricting their employees' use of A.I. tools like ChatGPT.

Apple is restricting some workers from using these tools due to fears they might accidentally upload confidential information, according to The Wall Street Journal, which cited an internal document and sources at the company.

The iPhone maker is also asking employees not to use GitHub Copilot, a similar Microsoft-owned tool that helps write code.

Apple did not immediately respond to a request for comment. 

OpenAI, the maker of ChatGPT, says it uses prompts and other data submitted by users to train its models, and so warns against uploading anything confidential or sensitive.

Many other companies have restricted the use of ChatGPT and similar tools over data privacy concerns.

Amazon warned employees against uploading information to such services in January, and Wall Street banks like JPMorgan and Bank of America have reportedly banned employees from using ChatGPT.

Some companies may have already experienced a data leak. Korean media reported in March that a Samsung engineer had uploaded confidential source code to ChatGPT to ask for help fixing a faulty database.

Earlier this month, Samsung Electronics imposed a temporary ban on using ChatGPT, noting that using the service could violate the company’s security policy. The company said it was working on its own A.I. tools.  

Governments are also concerned about privacy.

Italy briefly banned ChatGPT over concerns about leaking personal data, though it rescinded the measure a few weeks later after OpenAI met the government's demands.

Data privacy

A.I. developers are taking notice that both companies and users are worried about data leaks, and are beginning to offer more private options.

OpenAI introduced a so-called incognito mode for ChatGPT last month, which does not permanently save chats and prompts (though OpenAI said it would still temporarily keep prompts to monitor them for abuse).

Microsoft is also reportedly working on a private version of OpenAI's software, targeted at companies, that would keep corporate data private and not use it to train the A.I. models.

Privacy would come at a price, potentially costing ten times as much as the regular version of ChatGPT, according to The Information. OpenAI is also working on a "ChatGPT Business" tier that includes data controls. 

Earlier this month, IBM announced the launch of watsonx, an A.I. service that would also keep data private. "Clients can quickly train and deploy custom A.I. capabilities across their entire business, all while retaining full control of their data," IBM CEO Arvind Krishna said.  

ChatGPT on iPhone

While Apple employees may soon be restricted from using ChatGPT, Apple’s customers may soon find it easier to access the A.I. tool.

OpenAI released a version of ChatGPT for iOS, the operating system used on Apple's iPhone and iPad, on Wednesday. The app is free and syncs chat history with the web version. The app even responds to voice prompts, using the developer's own speech-recognition technology. 

The ChatGPT developer said the app would launch in the U.S. first, followed "in the coming weeks" by other countries. OpenAI is also working on a version for Google's Android operating system. 

The app’s welcome screen still tells users “don’t share sensitive info,” notes The Verge. “Anonymized chats may be reviewed by our A.I. trainers to improve our system,” the app says.