It's not like L1 support can make official statements on their own initiative. That was written by someone higher up and they're just copypasting it to panicked customers.
Sure, but isn't this a little like comparing manual wiretapping to a dragnet? (Or comparing a dragnet to ubiquitous scrape-and-store systems like those employed by the Five Eyes?) Scale matters.
I mean, I am in complete agreement, but at least in theory the only reason for them to add AI to the product would be to make the product better, which would give you a better product per dollar.
Because they don't seem to make it easy. It doesn't seem that, as an individual user, I have any say in how my data is used; I have to contact the Workspace Owner. When I do, I'll be asking them to look at alternative platforms instead.

> Contact us to opt out. If you want to exclude your Customer Data from Slack global models, you can opt out. To opt out, please have your Org or Workspace Owners or Primary Owner contact our Customer Experience team at [email protected] with your Workspace/Org URL and the subject line “Slack Global model opt-out request.” We will process your request and respond once the opt out has been completed.
I for one consider it my duty to bravely sacrifice my privacy on the altar of corporate profit, so that the true beauty of an LLM trained on emojis and cat GIFs can bring humanity to the next epoch.
> How many of you have a pre-canned spiel explaining why the complexities of whichever codebase you spend your days on are ACTUALLY necessary, and are certainly NOT the result of over-engineering? Thought so.

Hm, now that you mention it, I don't think I've ever seen this specific example. Not that we don't have jargon that's bordering on cant, leading to our words being easily misunderstood by outsiders: https://i.imgur.com/SL88Z6g.jpeg

Canned clichés are also the only thing I get whenever I try to find out why anyone likes the VIPER design pattern — and that's despite being totally convinced that (one of) the people I was talking to had genuinely and sincerely considered my confusion and had actually experimented with a different approach to see if my point was valid.
We've been using Mattermost and it works very well. Better than Slack. The only downside is that their mobile app is a bit unreliable, in that it sometimes doesn't load threads properly.
I would guess Microsoft has a lot more government customers (and large customers in general) than Slack does, so I would think they have a lot more to lose if they went this route.
From what I've seen (not much, actually), most channels can be replaced by a forum-style discussion board. Chat can be great for 1:1 and small-team interactions, and for tool interactions.
Nah. Whoever decided to create the reality their counsel is dancing around with this disclaimer is the actual problem, though it's mostly a problem for us, rather than them.
Whatever lawyer wrote that should be fired. This poorly written nonsense makes Slack look shady and subversive. Even if well intentioned, this is a PR blunder.
"File over app" is a good way of putting it! Something strange is happening on your blog, fwiw: Bookmarking it via command + D flips the color scheme to "night mode" – is that intentional? |
"Will not" allows the existence of a bridge but it's not on your route and you say you're not going to go over it. "Cannot" is the absence of a bridge or the ability to cross it.
Your company sucks. I've used Slack at four workplaces and it's not been at all like that. A previous company had mailing lists, and they were as toxic as you describe. The tool was not the issue.
HN isn't really a bastion of media literacy or tech criticism. If you ever ask "does [some technology] affect [something qualitative] about [anything]", the response on HN is always going to be "technology isn't responsible, it's how the technology is used that is responsible!", asserting, over and over again, that technology is always neutral.

The idea that the mechanism of how people communicate affects what people communicate is a pretty foundational concept in media studies (a topic generally met with a hostile audience on HN). Slack almost certainly does play a role, but people who work in technology are incentivized to believe that technology does not affect people's behavior, because that belief frees them from any qualitative or moral judgement; the assertion that technology plays no role is something technology workers cling to because it absolves them of all guilt in all situations and makes them, above all else, innocent in every situation.

On the specific concept of a medium of communication affecting what is being communicated, McLuhan took these ideas to an almost ludicrous extreme, but he still had some pretty interesting observations worth thinking on, and his writing is some of the earliest work on the topic. It's generally the place people look first, because much of the later work assumes you've already understood McLuhan. https://en.wikipedia.org/wiki/Understanding_Media
It was not limited to just the bosses who were not invited; if you weren't in the cool club, you also did not get an invite. A very inclusive company on paper that was very exclusionary behind the scenes.
Tokens (outside of a few trillion) are worthless, IMO. I think OpenAI has pushed that limit; let the others chase them with billions into the ocean of useless conversational data and drown.
Finally. I am all for this AI if it is going to learn and suggest my passive-aggressive "here" emoji that I use when someone uses @here on a public channel with hundreds of people for no good reason.
> Data will not leak across workspaces.

> If you want to exclude your Customer Data from helping train Slack global models, you can opt out.

I don't understand how both these statements can be true. If they are using your data to train models used across workspaces, then it WILL leak. If they aren't, then why do they need an opt-out?

Edit: reading through the examples of AI use at the bottom of the page (search results, emoji suggestions, autocomplete), my guess is this policy was put in place a decade ago and doesn't have anything to do with LLMs.

Another edit: from https://slack.com/help/articles/28310650165907-Security-for-...

> Customer data is never used to train large language models (LLMs).

So yeah, sounds like a nothingburger.
They're saying they won't train generative models that will literally regurgitate your text; my guess is that classifiers are fair game in their interpretation.
Yes, I certainly agree with you. I think these policies are often written by non-technical people. I'm not entirely convinced that classifiers and LLMs are disjoint to begin with.
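To make the distinction in the two comments above concrete, here is a minimal, hypothetical sketch of the kind of discriminative model being described. This is not Slack's actual pipeline; the messages, reaction labels, and scikit-learn setup are all illustrative. A classifier like this retains aggregate feature weights rather than the training messages themselves, which is why it can't "regurgitate" text the way a generative model could.

```python
# Illustrative only -- NOT Slack's pipeline. A toy emoji-suggestion
# classifier: it maps message text to a reaction label and stores only
# TF-IDF vocabulary statistics plus per-token weights, not the messages.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training data: messages and the emoji reaction they got.
messages = [
    "Shipped the new release, great work everyone!",
    "Production is down again, please check the alerts.",
    "Friday social at 5pm, who's in?",
    "The deploy failed with a timeout error.",
]
reactions = ["tada", "rotating_light", "tada", "rotating_light"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(messages, reactions)

# Predicts a label; there is no mechanism here for emitting training text.
print(model.predict(["Release went out ahead of schedule!"]))
```

A generative model trained on the same messages, by contrast, is optimized to produce text, so memorization (and therefore leakage) is at least possible in principle, which is presumably where their policy is drawing the line.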
I would expect this from a free service, but from a paid service with non-trivial cost... It seems insane... Maybe the whole model of doing business is broken...
So if you want to opt out, there's no setting to switch; you need to send an email with a specific subject:

> Contact us to opt out. [...] To opt out, please have your Org or Workspace Owners or Primary Owner contact our Customer Experience team at [email protected] with your Workspace/Org URL and the subject line “Slack Global model opt-out request.” [...]
This feels like a corporate greed play on what should be a relatively simple chat application. Slack has quickly become just another enterprise solution in search of shareholder value at the expense of data privacy. The need for regulation of these companies should be more apparent to people, but sadly, it is not. I would recommend https://mattermost.com as an alternative.
Good thing we moved to Matrix already. I just hope they start putting more emphasis on Element X, whose message handling has been broken on iOS for weeks now.
Hm. Is this on iOS or Android, and what version? This is the first I've heard of this; it should be rock solid. I'm wondering if you're stuck on an ancient version or something.
> Contact us to opt out. If you want to exclude your Customer Data from Slack global models, you can opt out. To opt out, please have your org, workspace owners or primary owner contact our Customer Experience team at [email protected]

Sounds like an invitation for malicious compliance. Anyone can email them a huge wall of text with the workspace buried somewhere in it, and they have to decipher it somehow. Example [Answer is Org-12-Wp]:

"FORMAL DIRECTIVE AND BINDING COVENANT

WHEREAS, the Parties to this Formal Directive and Binding Covenant, to wit: [Your Name] (hereinafter referred to as "Principal") and [AI Company Name] (hereinafter referred to as "Technological Partner"), wish to enter into a binding agreement regarding certain parameters for the training of an artificial intelligence system;

AND WHEREAS, the Principal maintains control and discretion over certain proprietary data repositories constituting segmented information habitats;

AND WHEREAS, the Principal desires to exempt one such segmented information habitat, namely the combined loci identified as "Org", the region denoted as "12", and the territory designated "Wp", from inclusion in the training data utilized by the Technological Partner for machine learning purposes;

NOW, THEREFORE, in consideration of the mutual covenants and promises contained herein, the receipt and sufficiency of which are hereby acknowledged, the Parties agree as follows:

DEFINITIONS

1.1 "Restricted Information Habitat" shall refer to the proprietary data repository identified by the Principal as the conjoined loci of "Org", the region "12", and the territory "Wp".

OBLIGATIONS OF TECHNOLOGICAL PARTNER

2.1 The Technological Partner shall implement all reasonably necessary technical and organizational measures to ensure that the Restricted Information Habitat, as defined herein, is excluded from any training data sets utilized for machine learning model development and/or refinement.

2.2 The Technological Partner shall maintain an auditable record of compliance with the provisions of this Formal Directive and Binding Covenant, said record being subject to inspection by the Principal upon reasonable notice.

REMEDIES

3.1 In the event of a material breach... [Additional legalese]

IN WITNESS WHEREOF, the Parties have executed this Formal Directive and Binding Covenant."
This is, once again, why I wanted us to go to self-hosted Mattermost instead of Slack. I recognize Slack is probably the better product (or mostly better), but you have to own your data.
In case this is helpful to anyone else, I opted out earlier today with an email to [email protected]:

Subject: Slack Global Model opt-out request.

Body: Please opt the above Slack Workspace out of training of Slack Global Models.
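For anyone who wants to script this for several workspaces, here is a minimal, hypothetical sketch of composing that same email with Python's standard library. The SMTP host, sender address, recipient address, and workspace URL are placeholders (the real recipient is the Customer Experience address quoted, redacted, throughout this thread); only the subject line and body wording come from the comments above.

```python
# Hypothetical sketch of automating the opt-out email described above.
# All addresses and the SMTP host are placeholders -- substitute your own
# values and the real Slack Customer Experience address from their policy.
import smtplib
from email.message import EmailMessage

SMTP_HOST = "smtp.example.com"                 # your outgoing mail server
FROM_ADDR = "owner@example.com"                # should be an Org/Workspace Owner
TO_ADDR = "customer-experience@example.com"    # Slack's address (redacted above)
WORKSPACE_URL = "https://your-team.slack.com"  # your Workspace/Org URL

msg = EmailMessage()
msg["From"] = FROM_ADDR
msg["To"] = TO_ADDR
# The quoted subject line; note that Slack's quoted text ends with a period.
msg["Subject"] = "Slack Global model opt-out request."
msg.set_content(
    f"Please opt the workspace {WORKSPACE_URL} out of training of "
    "Slack Global Models."
)

with smtplib.SMTP(SMTP_HOST) as smtp:
    smtp.send_message(msg)
```

Whether a scripted request gets processed any faster is anyone's guess; the point is only that the required pieces are the workspace URL, the exact subject line, and an owner as the sender.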
Make sure you put a period at the end of the subject line. Their quoted text includes a period at the end. Please also scold them for behaving unethically and perhaps breaking the law.
How does one technically opt out after model training is completed? You can't exactly go into the model and "erase" parts of the corpus post hoc. Like, when you send an email to [email protected] with that perfect subject line (jeez, really?), what exactly does the customer support rep do on their end to opt you out? Now is definitely the time to get/stay loud. If it dies down, the precedent has been set.
So much to "if you are not paying you are the product". There is nothing that can stop companies from using your sweet sweet data once give it them.
Wow. I understand this for freemium business models, but for a premium-priced B2B product? This feels like an incredible rug pull. This changes things for me.
> To develop AI/ML models, our systems analyse Customer Data (e.g. messages, content and files) submitted to Slack

This Is Fine.
The good news for FOSS is that the UX of most commercial software is also awful and generally getting worse. The bad news is that FOSS software is copying a lot of the same UX trends.
> Contact us to opt out. If you want to exclude your Customer Data from Slack global models, you can opt out. To opt out, please have your Org or Workspace Owners or Primary Owner contact our Customer Experience team at [email protected] with your Workspace/Org URL and the subject line “Slack Global model opt-out request.” We will process your request and respond once the opt out has been completed.

This is not OK. We didn't have to reach out by email to sign up; this should be a toggle in the UI. This is deliberately high friction.
If you send the opt-out message to Slack, take a second and include [email protected]. It helps to get it done faster in most cases.
"Hi there,
Thank you for reaching out to Slack support. Your opt-out request has been completed.
For clarity, Slack has platform-level machine learning models for things like channel and emoji recommendations and search results. We do not build or train these models in such a way that they could learn, memorize, or be able to reproduce some part of customer data. Our published policies cover those here (https://slack.com/trust/data-management/privacy-principles), and as shared above your opt out request has been processed.
Slack AI is a separately purchased add-on that uses Large Language Models (LLMs) but does not train those LLMs on customer data. Slack AI uses LLMs hosted directly within Slack’s AWS infrastructure, so that customer data remains in-house and is not shared with any LLM provider. This ensures that Customer Data stays in that organization’s control and exclusively for that organization’s use. You can read more about how we’ve built Slack AI to be secure and private here: https://slack.engineering/how-we-built-slack-ai-to-be-secure....
Kind regards,"