UPDATED 13:17 EST / MAY 19 2023


Apple reportedly bans some employees from using ChatGPT as it works on its own AI model

Apple Inc. has banned some employees from using OpenAI LP’s ChatGPT artificial intelligence service, according to a new report.

The Wall Street Journal reported the development late Thursday, citing sources and an internal Apple document. The iPhone maker is one of several major enterprises that are known to have restricted employees’ use of generative AI tools such as ChatGPT.

Apple’s newly reported policy is said to apply not only to ChatGPT but also to other AI products. Those products reportedly include GitHub Copilot, a coding assistant from Microsoft Corp.’s GitHub unit. Copilot is powered by OpenAI-developed neural networks, including the GPT-4 model that underpins ChatGPT.

Apple’s decision to prohibit some of its employees from using such tools is reportedly driven by concerns about data leaks. Under ChatGPT’s terms of service, OpenAI can collect users’ chatbot prompts and leverage them to train its AI models. Last month, the startup added a setting that allows ChatGPT users to opt out of data collection.

According to the Journal, Apple is building a custom large language model to support employees’ work. The goal is presumably to have workers use the internally developed model instead of external tools such as ChatGPT. The report didn’t specify the tasks that Apple’s upcoming model will be capable of performing.

The AI development effort is said to be led by John Giannandrea, a former Google LLC executive who joined the iPhone maker in 2018. He led the search giant’s machine learning efforts before leaving. At Apple, Giannandrea is senior vice president of machine learning and AI strategy.

Apple is one of several major enterprises to have partly or fully banned workers from using ChatGPT. Previously, JPMorgan Chase & Co. and Verizon Communications Inc. implemented similar policies. Amazon.com Inc., in turn, has reportedly urged engineers seeking an AI coding assistant to use its internal machine learning software instead of ChatGPT.

Large enterprises represent a major market for AI software. To address such companies’ concerns about data leaks, OpenAI may add further cybersecurity controls to its tools. Some of those controls might arrive with ChatGPT Business, an upcoming business version of the chatbot that was previewed last month and will be geared toward “enterprises seeking to manage their end users.”

OpenAI already provides privacy assurances for customers of GPT-4, the large language model that powers ChatGPT. The model is available through an application programming interface that companies can use to build custom chatbot applications. OpenAI doesn’t use data that customers upload to the API to train its models.
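
For illustration, here is a minimal sketch of how a company might call GPT-4 through that interface. The endpoint and request shape follow OpenAI’s publicly documented chat completions API; the API key source, the ask_gpt4 helper name, and the sample prompt are placeholders chosen for this example rather than anything specified in the report.

    # Minimal sketch: one-turn GPT-4 request via OpenAI's chat completions API.
    # The API key and prompt below are placeholders for illustration.
    import os
    import requests

    API_URL = "https://api.openai.com/v1/chat/completions"
    API_KEY = os.environ["OPENAI_API_KEY"]  # supplied by the caller, never hard-coded

    def ask_gpt4(prompt: str) -> str:
        """Send a single user prompt to GPT-4 and return the model's reply text."""
        response = requests.post(
            API_URL,
            headers={
                "Authorization": f"Bearer {API_KEY}",
                "Content-Type": "application/json",
            },
            json={
                "model": "gpt-4",
                "messages": [{"role": "user", "content": prompt}],
            },
            timeout=60,
        )
        response.raise_for_status()
        return response.json()["choices"][0]["message"]["content"]

    if __name__ == "__main__":
        print(ask_gpt4("Summarize the key points of this document."))

Under OpenAI’s API data usage policy, prompts sent this way are not used to train its models, which is the assurance aimed at enterprise customers wary of leaks.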

Apple is not the only major tech firm investing in internally developed large language models to support employees’ work. Earlier this week, Meta Platforms Inc. detailed CodeCompose, an in-house AI coding assistant. The assistant is available in several editions, the most advanced of which features 6.7 billion parameters, and was trained on Meta’s internal code repositories.
