Slack, a popular workplace communication platform, has come under scrutiny for using customer data to train artificial intelligence (AI) and machine learning (ML) models without obtaining explicit consent from users. The company’s privacy policy reveals that it analyzes customer messages, content and files to develop these models, and customers are opted in by default.
The controversy came to light when Corey Quinn, director of DuckBill Group, publicized the issue on social media. This sparked outrage among Slack users who were unaware of the company’s data usage practices. Many users have expressed frustration with the lack of transparency and the difficulty of opting out of the AI training process.
To opt out, individual users must ask their organization’s Slack admin to contact the Slack customer support team via email with their workspace URL and the subject line “Slack Global Model Opt-Out Request.” This process has been criticized for putting the onus on users to protect their data without immediately requiring explicit consent.
Slack attempted to clarify its position, stating that its machine learning models power platform-level features such as channel and emoji recommendations and search results. The company also emphasized that customers can exclude their data from being used to train non-generative machine learning models.
However, inconsistencies in Slack’s privacy policy have led to further confusion. While one section states that Slack cannot access core content when developing AI/ML models, the machine learning model training policy contradicts this claim. Additionally, Slack’s marketing for premium AI tools suggests that user data is not used for training purposes, which may be misleading given the company’s selective use of this claim.
Slack’s current opt-out approach has been met with significant opposition, and the company may need to change its policies to provide greater transparency and user control over their data.