Zoom’s updated terms of service, effective July 27, reveal the company’s intention to use customer data to train its artificial intelligence (AI) models. The change has sparked discussion of the ethical implications of training AI on individuals’ data.
In an era marked by labor strikes from industry guilds and concerns about the compensation of artists, the integration of AI into creative processes has further complicated matters. The updated terms of service reveal that Zoom has the right to leverage “service-generated data” for training and fine-tuning its AI models. This category includes customer data on product usage, telemetry, diagnostic information, and similar content collected by Zoom.
While such data usage for AI training is not uncommon, it has raised questions about data ownership, consent, and privacy. The debate around the extent to which AI should rely on individuals’ data has gained prominence, especially in the context of generative AI tools like chatbots and image-generation programs.
Zoom’s approach to consent is noteworthy. It explicitly states that AI models will not be trained using audio, video, or chat customer content unless the customer provides consent.
The introduction of generative AI features by Zoom, such as meeting summaries and chat message composition tools, underscores the company’s AI ambitions. Users are given the choice to enable these features, and those who opt in are required to sign a consent form. Zoom asserts that the data is used solely to enhance AI service performance and accuracy, and it assures customers that shared data will not be employed to train third-party models.
As Zoom navigates this AI trajectory, it serves as a microcosm of the broader debates surrounding AI ethics and user data control. The company’s transparency and emphasis on customer consent in its terms of service highlight its intention to strike a balance between AI innovation and user privacy.