A brief intro to Kubernetes Role-Based Access Control
Actually, this should have been the introduction to a blogpost about RBAC auditing and pentesting. However, after the first few paragraphs, I noticed that it is a big enough topic on its own. So, I will cover the basic concepts of RBAC here and follow up with another blogpost.
Much of this information can also be found in different places in the Kubernetes documentation, e.g. here [1]. I tried to summarize the most important points regarding security and to add more details in cases where the documentation is not clear.
The RBAC module can enforce access control on a granular level. However, this granularity and the indirection introduced by bindings can make it difficult to grasp for inexperienced users.
Typical for Kubernetes' philosophy of modularity, RBAC is not the only authorization module, but together with the node authorization module, it's the one enabled by default and used in most setups. You can confirm whether RBAC is used by examining the "--authorization-mode" parameter of the apiserver process. If multiple authorizers are defined, they are run one after the other and access is granted if any of them grants access.
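For example, on a kubeadm-provisioned cluster, where the apiserver runs as a static pod labeled "component=kube-apiserver" in the kube-system namespace, a quick check looks like this (a sketch; adjust namespace and label to your setup):

# print the apiserver flags and filter for the authorization mode
kubectl -n kube-system describe pod -l component=kube-apiserver | grep -- --authorization-mode
# typical output: --authorization-mode=Node,RBAC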
With the RBAC module you get several Kubernetes resources, namely Roles and RoleBindings as well as ClusterRoles and ClusterRoleBindings. Roles contain authorization rules and RoleBindings bind a principal, such as a user, group or serviceaccount, to a role. Whereas Roles and RoleBindings are namespaced and can grant privileges to resources within a namespace only, ClusterRoles and ClusterRoleBindings are not namespaced and can grant privileges across all namespaces and to non-namespaced resources.
Authorization rules can only grant privileges; there are no deny rules. If there is no grant for an access attempt, it is denied by default. Here is an example Role permitting read access to pods:
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: podreader
rules:
- apiGroups: [""] # "" indicates the core API group
  resources: ["pods"]
  verbs: ["get", "watch", "list"]
As you can see, the role name is "podreader" and it contains a rule permitting read access to pods with the verbs "get", "watch", and "list". Access to subresources can also be configured in the "resources" list, delimited from the parent resource with a slash, such as: ["pods", "pods/log"].
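For instance, a rule granting "get" access to pods and their logs could look like this (a minimal sketch; reading logs only needs the "get" verb on the subresource):

- apiGroups: [""]
  resources: ["pods", "pods/log"]
  verbs: ["get"]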
This is a RoleBinding referencing the Role from above and binding it to the service account "myappuser":
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: podreaderrolebinding
subjects:
- kind: ServiceAccount
  name: myappuser
  namespace: default # ServiceAccount subjects must specify a namespace; "default" assumed here
  apiGroup: ""
roleRef:
  kind: Role
  name: podreader
  apiGroup: rbac.authorization.k8s.io
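To verify what such a binding actually allows, you can impersonate the service account with "kubectl auth can-i" (given you have impersonation privileges yourself; the namespace "default" is assumed here):

# check whether the service account may list pods in the default namespace
kubectl auth can-i list pods -n default --as system:serviceaccount:default:myappuser
# prints "yes" or "no"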
If the subject kind is a user, the username is provided by the authentication module. Kubernetes does not manage user accounts, which means that there is no user resource. With X.509 certificate authentication, the subject name is taken from the Common Name (CN) and the group from the Organization (O). With serviceaccounts it's less complicated, because they are managed by Kubernetes and are a Kubernetes resource with a name attribute.
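For illustration, a subject entry for a user would look like this ("jane" is a hypothetical name; it has to match whatever the authentication module provides, e.g. the certificate CN):

subjects:
- kind: User
  name: jane # provided by the authentication module, not managed by Kubernetes
  apiGroup: rbac.authorization.k8s.io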
If a ClusterRole is bound with a RoleBinding (instead of a ClusterRoleBinding), privileges are granted only for the RoleBinding’s namespace. This can be useful if you want to configure a common set of roles (implemented as ClusterRoles) and want to reuse these definitions in multiple namespaces.
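For example, the following RoleBinding (a sketch, assuming a "podreader" ClusterRole and a "staging" namespace exist) grants the rights defined cluster-wide in the ClusterRole, but only within the "staging" namespace:

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: podreader-staging
  namespace: staging # privileges are granted in this namespace only
subjects:
- kind: ServiceAccount
  name: myappuser
  namespace: staging
roleRef:
  kind: ClusterRole # one cluster-wide definition, reusable in many namespaces
  name: podreader
  apiGroup: rbac.authorization.k8s.io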
So far, we have only restricted access by resource type. It is possible to restrict access even further based on resource names, using the resourceNames list in a rule. With this, a user can be granted access to a particular instance of a resource, e.g. read access to the specific pod "myapp" only:
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: myapppodreader
rules:
- apiGroups: [""] # "" indicates the core API group
  resources: ["pods"]
  resourceNames: ["myapp"]
  verbs: ["get", "watch", "list"]
However, it is currently not possible to use wildcards in resourceNames elements or label selectors in rules (there have been some discussions going on around this topic, see also [2] and [3]). Also note that resourceNames cannot restrict the "create" verb, because the object's name is not known at authorization time.
There are a bunch of ClusterRoles and ClusterRoleBindings created by the apiserver itself, which are labeled with "kubernetes.io/bootstrapping=rbac-defaults". Generally, the ones which are only used internally by Kubernetes components are prefixed with "system:"; the others are user-facing roles (e.g. for admins).
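You can list them yourself:

# show all roles bootstrapped by the apiserver
kubectl get clusterroles -l kubernetes.io/bootstrapping=rbac-defaults
# user-facing examples among them: cluster-admin, admin, edit, view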
The RBAC API prevents users from escalating their privileges (provided that the RBAC rules are configured correctly). Users can only add and change (Cluster)Roles if they are allowed to modify role resources AND (they already have all permissions contained in the role OR they have the "escalate" verb assigned for the role resource). Parentheses in my last sentence are to be interpreted as logic symbols to show the order of operations.
For RoleBindings this is similar: users can only add and change (Cluster)RoleBindings if they are allowed to modify (Cluster)RoleBinding resources AND (they already have all permissions contained in the role to be bound OR they have the "bind" verb assigned for the (Cluster)Role resource).
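As an illustration (a sketch reusing the "podreader" role from above), the following rules would allow a user to hand out "podreader" to others via new RoleBindings without holding the pod-read permissions themselves:

- apiGroups: ["rbac.authorization.k8s.io"]
  resources: ["rolebindings"]
  verbs: ["create"]
- apiGroups: ["rbac.authorization.k8s.io"]
  resources: ["roles"]
  verbs: ["bind"]
  resourceNames: ["podreader"] # "bind" may be restricted to specific roles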
These checks make it possible to allow a user to grant privileges to others, but never more than they possess themselves (unless "escalate" or "bind" are explicitly granted). So even if a user has the privileges to create roles, an attempt to create or modify a role with more privileges than the user currently holds will result in an error like:
... is forbidden: user "system:serviceaccount:default:restricteduser" (groups=["system:serviceaccounts" "system:serviceaccounts:default" "system:authenticated"]) is attempting to grant RBAC permissions not currently held: ...
Now you should understand the basic concepts behind RBAC. Be aware that there are some pitfalls that could lead to unexpected and dangerous vulnerabilities, e.g. related to the "create pod" privilege. I will cover this, together with some other practical hints about securing and auditing Kubernetes access control, in my next blogpost. So stay tuned!
[1] https://kubernetes.io/docs/reference/access-authn-authz/rbac/
[2] https://github.com/kubernetes/kubernetes/issues/56582
[3] https://github.com/kubernetes/kubernetes/issues/44703