May 17, 2024
 

(Image credit: Tim Robberts | DigitalVision)
After launching Slack AI in February, Slack appears to be digging in its heels, defending a vague policy that by default sucks up customers' data—including messages, content, and files—to train Slack's global AI models.
According to Slack engineer Aaron Maurer, Slack has explained in a blog post that the Salesforce-owned chat service does not train its large language models (LLMs) on customer data. But Slack's policy may need updating "to explain more carefully how these privacy principles play with Slack AI," Maurer wrote on Threads, partly because the policy "was originally written about the search/recommendation work we've been doing for years prior to Slack AI."
Maurer was responding to a…

Read more at the source: https://arstechnica.com/?p=2025179

