| The brand that sells is the brand at fault.
Securing it, knowing to secure it, testing it, or never releasing it until it was secure: these are all responsibilities that rest with the brand making the sale. |
| Software from 2019 is horrifyingly outdated? If updates with security patches exist but haven't been applied, sure, but that's not really a default scenario depending on the stack. |
| I’ve only used 2020 because of the example in question. Security patches might or might not have been applied, both in my imaginary example and in the real world. |
| In my experience, "deprecated" is often taken as "we can still use that, it is not removed yet", which I find somewhat disheartening sometimes. |
| Hard multi-tenancy can't realistically be achieved in the same logical K8s cluster. And it is a moving target, which makes trying to secure it with admission controllers... not a great plan.
One needs to look into things like VirtualClusters to even begin to consider hard multi-tenancy with potentially hostile tenants (https://github.com/kubernetes-sigs/cluster-api-provider-nest...). That is just the control plane; it doesn't even touch the data plane. How secure that is, even with the extra layer, I do not know. Even in VM land we have seen crazy VM escape exploits over the years. |
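To see why admission controllers are a "moving target" for this, consider a minimal sketch of a validating-admission-style check (not a real webhook server; the field names come from the Pod spec, but the deny-list itself is a hypothetical policy): it only rejects the escape vectors it explicitly enumerates, so every new cluster feature can reopen a hole.

```python
def review_pod(pod: dict) -> dict:
    """Return an AdmissionReview-style verdict for a Pod manifest (sketch).

    This mimics what a validating admission webhook decides, as a deny-list:
    anything the list doesn't mention is allowed by default.
    """
    spec = pod.get("spec", {})

    # Known-dangerous patterns this policy checks for *today*.
    if spec.get("hostNetwork") or spec.get("hostPID"):
        return {"allowed": False, "reason": "host namespaces are forbidden"}
    for vol in spec.get("volumes", []):
        if "hostPath" in vol:
            return {"allowed": False, "reason": "hostPath volumes are forbidden"}
    for c in spec.get("containers", []):
        if c.get("securityContext", {}).get("privileged"):
            return {"allowed": False, "reason": "privileged containers are forbidden"}

    # Anything the deny-list doesn't cover (new spec fields, ephemeral
    # containers, novel CSI drivers, ...) sails through by default.
    return {"allowed": True}
```

The structural problem is the last line: the policy has to keep chasing the API surface, which is why it is a weak foundation for isolating hostile tenants.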
| > K8S done right is literally designed for multi tenancy
No it is not. Not in the way they are using it. There are two main use-cases. One is multiple teams, in which case they are bound by their company's policies and guardrails. The second is multiple customers. But that also assumes they have no direct access to the cluster. A vendor would, instead, deploy multiple instances of a workload; the customers would not. Straight from the horse's mouth: https://kubernetes.io/docs/concepts/security/multi-tenancy/

There's also nothing that says that multiple clusters need to be expensive, if they are sized right. They can be as small as you need, both in number of instances and in instance size. The overhead here is the control plane but, for small clusters, the resource requirements are similarly small.

That said, if hard multi-tenancy is what you need, then you need to use things like this: https://github.com/kubernetes-sigs/cluster-api-provider-nest... (for the control plane - you still need to worry about the data plane) |
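For the "multiple teams bound by guardrails" case, the usual soft-tenancy baseline is a namespace per team plus a ResourceQuota and a default-deny NetworkPolicy. A rough sketch of those manifests as plain dicts (the quota values and naming scheme here are illustrative, not from the thread):

```python
def team_namespace_manifests(team: str) -> list[dict]:
    """Build soft multi-tenancy guardrails for one team: a Namespace,
    a ResourceQuota, and a default-deny ingress NetworkPolicy.
    Quota values are illustrative; tune per team."""
    namespace = {
        "apiVersion": "v1",
        "kind": "Namespace",
        "metadata": {"name": team},
    }
    quota = {
        "apiVersion": "v1",
        "kind": "ResourceQuota",
        "metadata": {"name": f"{team}-quota", "namespace": team},
        "spec": {"hard": {"requests.cpu": "8",
                          "requests.memory": "16Gi",
                          "pods": "50"}},
    }
    # Empty podSelector matches every pod in the namespace; listing only
    # Ingress in policyTypes with no ingress rules denies all inbound traffic.
    deny_ingress = {
        "apiVersion": "networking.k8s.io/v1",
        "kind": "NetworkPolicy",
        "metadata": {"name": "default-deny-ingress", "namespace": team},
        "spec": {"podSelector": {}, "policyTypes": ["Ingress"]},
    }
    return [namespace, quota, deny_ingress]
```

Note what this does and doesn't give you: it contains well-behaved teams, but none of it stops a tenant with API access from exploiting a cluster-level bug, which is exactly the distinction the comment is drawing against hostile customers.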
| When you say yearly, I assume you're not conducting regular internal pentests?
Are there any pentesting companies you could recommend that do more than drive-by shooting with Metasploit? |
| Excellent write-up. This wasn’t a sophisticated attack. It seems there is very little discipline at Salesforce when it comes to deploying production systems.
While I get that it's the AI product, the vulnerability here is the K8s configuration. It really has nothing to do with the AI product itself, AI training, or anything related to machine learning or generative AI; it's about poor cloud platform security.