Policy Exceptions
Warning

PolicyExceptions are disabled by default. To enable them, set the `enablePolicyException` flag to `true`. When enabling PolicyExceptions, you must also specify which namespaces they can be used in by setting the `exceptionNamespace` flag. For more information, see Container Flags.
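As a sketch of what this looks like in practice (the flag names are those referenced above; the surrounding Deployment excerpt is illustrative), the flags are set as container arguments on the Kyverno admission controller:

```yaml
# Illustrative excerpt from the Kyverno admission controller Deployment.
# Only PolicyExceptions created in the Namespace given by
# --exceptionNamespace will be honored.
containers:
- name: kyverno
  args:
  - --enablePolicyException=true
  - --exceptionNamespace=kyverno
```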
Although Kyverno policies contain multiple methods to provide fine-grained control as to which resources they act upon, in the form of `match`/`exclude` blocks, preconditions at multiple hierarchies, anchors, and more, all these mechanisms have in common that the resources they are intended to exclude must occur in the same rule definition. This may be limiting in situations where policies may not be directly editable, or where doing so imposes an operational burden.
For example, in organizations where multiple teams must interact with the same cluster, a team responsible for policy authoring and administration may not be the same team responsible for submission of resources. In these cases, it can be advantageous to decouple the policy definition from certain exclusions. Additionally, there are often times where an organization or team must allow certain exceptions which would violate otherwise valid rules but on a one-time basis if the risks are known and acceptable.
Imagine a validate policy exists in `Enforce` mode which mandates that all Pods must not mount host namespaces. A separate team has a legitimate need to run a specific tool in this cluster, for a limited time, which violates this policy. Normally, the policy would block such a “bad” Pod unless the policy were altered to allow it. Rather than making adjustments to the policy, an exception may be granted. Both of these examples are use cases for the PolicyException resource described below.
A `PolicyException` is a Namespaced Custom Resource which allows one or more resources to be allowed past a given policy and rule combination. It can be used to exempt any resource from any Kyverno rule type, although it is primarily intended for use with validate rules. A PolicyException encapsulates the familiar `match`/`exclude` statements used in `Policy` and `ClusterPolicy` resources but adds an `exceptions{}` object to select the policy and rule name(s) used to form the exception. An optional `conditions{}` block uses common expressions, similar to those found in preconditions and deny rules, to query the contents of the selected resources in order to refine the selection process. The logical flow of how a PolicyException works in tandem with a validate policy is depicted below.
```mermaid
graph TD
  Start --> id1["Validate policy in enforce mode exists"]
  id1 --> id2["User/process sends violating resource"]
  id2 --> Need{"Matching PolicyException exists?"}
  Need -- No --> id3["Resource blocked"]
  Need -- Yes --> id4["Resource allowed"]
```
An example set of resources is shown below.
A ClusterPolicy exists containing a single validate rule in `Enforce` mode which requires that all Pods must not use any host namespaces via the fields `hostPID`, `hostIPC`, or `hostNetwork`. If any of these fields are defined, they must be set to a value of `false`.
```yaml
apiVersion: kyverno.io/v2beta1
kind: ClusterPolicy
metadata:
  name: disallow-host-namespaces
spec:
  background: false
  rules:
  - name: host-namespaces
    match:
      any:
      - resources:
          kinds:
          - Pod
    validate:
      failureAction: Enforce
      message: >-
        Sharing the host namespaces is disallowed. The fields spec.hostNetwork,
        spec.hostIPC, and spec.hostPID must be unset or set to `false`.
      pattern:
        spec:
          =(hostPID): "false"
          =(hostIPC): "false"
          =(hostNetwork): "false"
```
A cluster administrator wishes to grant an exception to a Pod or Deployment named `important-tool` which will be created in the `delta` Namespace. A PolicyException resource is created which specifies the policy name and rule name which should be bypassed, as well as the resource kind, Namespace, and name which may bypass it.
Note

Auto-generated rules for Pod controllers must be specified along with the Pod controller requesting the exception, if applicable. Since Kyverno's auto-generated rules are additive in nature, when specifying resource names of Pod controllers it may be necessary to use a wildcard (`*`) so that the Pods emitted from those controllers are also exempted, since components of the Pod name include a ReplicaSet hash and a Pod hash.

```yaml
apiVersion: kyverno.io/v2
kind: PolicyException
metadata:
  name: delta-exception
  namespace: delta
spec:
  exceptions:
  - policyName: disallow-host-namespaces
    ruleNames:
    - host-namespaces
    - autogen-host-namespaces
  match:
    any:
    - resources:
        kinds:
        - Pod
        - Deployment
        namespaces:
        - delta
        names:
        - important-tool*
  conditions:
    any:
    - key: "{{ request.object.metadata.labels.app || '' }}"
      operator: Equals
      value: busybox
```
A Deployment matching the characteristics defined in the PolicyException, shown below, will be allowed to be created even though it technically violates the rule's definition.
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: important-tool
  namespace: delta
  labels:
    app: busybox
spec:
  replicas: 1
  selector:
    matchLabels:
      app: busybox
  template:
    metadata:
      labels:
        app: busybox
    spec:
      hostIPC: true
      containers:
      - image: busybox:1.35
        name: busybox
        command: ["sleep", "1d"]
```
PolicyExceptions are always Namespaced yet may provide an exception for a cluster-scoped resource as well. There is no correlation between the Namespace in which the PolicyException exists and the Namespace where resources may be excepted.

Exceptions against a ClusterPolicy and those against a (Namespaced) Policy can be disambiguated by specifying the value of the `exceptions[].policyName` field in the format `<namespace>/<policy-name>`.
```yaml
exceptions:
- policyName: team-a/disallow-host-namespaces
  ruleNames:
  - host-namespaces
```
PolicyExceptions also support background scanning, which is enabled by default. An exception which either explicitly defines `spec.background: true` or does not define the field at all will influence Policy Reports when the exception is processed, allowing report results to change from a `Fail` to a `Skip` result. When background scans are enabled, PolicyExceptions forbid matching on the same types of fields as those forbidden by validate rules, including Roles, ClusterRoles, and user information.
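For example, a minimal sketch of an exception which opts out of background processing, reusing the `disallow-host-namespaces` policy from earlier:

```yaml
apiVersion: kyverno.io/v2
kind: PolicyException
metadata:
  name: delta-exception
  namespace: delta
spec:
  background: false  # exception applies at admission time only; report results are unaffected
  exceptions:
  - policyName: disallow-host-namespaces
    ruleNames:
    - host-namespaces
  match:
    any:
    - resources:
        kinds:
        - Pod
        namespaces:
        - delta
```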
Wildcards (`"*"`) are supported in the value of the `ruleNames[]` field, allowing exception from any/all rules in the policy without having to name them explicitly.
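For example, the following `exceptions` stanza (a sketch based on the policy above) excepts matching resources from every rule in `disallow-host-namespaces`:

```yaml
exceptions:
- policyName: disallow-host-namespaces
  ruleNames:
  - "*"
```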
Since PolicyExceptions are just another Custom Resource, their use can and should be controlled by a number of different mechanisms to ensure their creation in a cluster is authorized, including:
- Kubernetes RBAC (see the sketch after this list)
- Specific Namespace for PolicyExceptions (see Container Flags)
- Existing GitOps governance processes
- Kyverno validate rules
- YAML manifest validation
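For the RBAC item, a minimal sketch (the Role name and group are illustrative) that confines PolicyException management in the `delta` Namespace to a dedicated group:

```yaml
# Hypothetical Role and RoleBinding restricting who may manage
# PolicyExceptions in the delta Namespace.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: policyexception-editor
  namespace: delta
rules:
- apiGroups: ["kyverno.io"]
  resources: ["policyexceptions"]
  verbs: ["create", "update", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: policyexception-editors
  namespace: delta
subjects:
- kind: Group
  name: policy-admins  # illustrative group name
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: policyexception-editor
  apiGroup: rbac.authorization.k8s.io
```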
PolicyExceptions may be subjected to Kyverno validate policies which can be used to provide additional guardrails around how they may be crafted. For example, it is considered a best practice to only allow very narrow exceptions to a much broader rule. Given the case shown earlier, only a Pod or Deployment with the name `important-tool` would be allowed by the exception, not any Pod or Deployment. Kyverno policy can help ensure, both in the cluster and in a CI/CD process via the CLI, that PolicyExceptions conform to your design standards. Below is a sample policy illustrating how a Kyverno validate rule can ensure that a specific name must be used when creating an exception. For other samples, see the policy library.
```yaml
apiVersion: kyverno.io/v2beta1
kind: ClusterPolicy
metadata:
  name: policy-for-exceptions
spec:
  background: false
  rules:
  - name: require-match-name
    match:
      any:
      - resources:
          kinds:
          - PolicyException
    validate:
      failureAction: Enforce
      message: >-
        An exception must explicitly specify a name for a resource match.
      pattern:
        spec:
          match:
            =(any):
            - resources:
                names: "?*"
            =(all):
            - resources:
                names: "?*"
```
Pod Security Exemptions
Kyverno policies can be used to apply Pod Security Standards profiles and controls via the `validate.podSecurity` subrule. However, there are cases where certain Pods need to be exempted from these policies. For example, a Pod may need to run as root or require privileged access. In such cases, a PolicyException can be used to define an exemption for the Pod through the `podSecurity{}` block, which lists the controls to be exempted from the policy.
Consider the following policy, which enforces the latest version of the Pod Security Standards restricted profile in a single rule across the entire cluster.
```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: psa
spec:
  background: true
  rules:
  - name: restricted
    match:
      any:
      - resources:
          kinds:
          - Pod
    validate:
      failureAction: Enforce
      podSecurity:
        level: restricted
        version: latest
```
In this use case, all Pods in the `delta` Namespace need to run as root. A PolicyException can be used to exempt all Pods whose Namespace is `delta` from the policy by excluding the `runAsNonRoot` control.
```yaml
apiVersion: kyverno.io/v2
kind: PolicyException
metadata:
  name: pod-security-exception
  namespace: policy-exception-ns
spec:
  exceptions:
  - policyName: psa
    ruleNames:
    - restricted
  match:
    any:
    - resources:
        namespaces:
        - delta
  podSecurity:
  - controlName: "Running as Non-root"
```
The following Pod satisfies all controls in the restricted profile except the `Running as Non-root` control, but it matches the exception. Hence, it will be successfully created.
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
  namespace: delta
spec:
  containers:
  - name: nginx
    image: nginx
    args:
    - sleep
    - 1d
    securityContext:
      seccompProfile:
        type: RuntimeDefault
      runAsNonRoot: false
      allowPrivilegeEscalation: false
      capabilities:
        drop:
        - ALL
```
The PolicyException `podSecurity{}` block has the same functionality as the `validate.podSecurity.exclude` block in the policy itself. It can be used to exempt controls that can only be defined in container-level fields.
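For comparison, a sketch of the equivalent in-policy exclusion, written directly into the `psa` ClusterPolicy shown earlier rather than in a separate exception:

```yaml
validate:
  failureAction: Enforce
  podSecurity:
    level: restricted
    version: latest
    exclude:
    - controlName: "Running as Non-root"
```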
For example, the following PolicyException exempts containers running either the `nginx` or `redis` image from following the Capabilities control.
```yaml
apiVersion: kyverno.io/v2
kind: PolicyException
metadata:
  name: pod-security-exception
  namespace: policy-exception-ns
spec:
  exceptions:
  - policyName: psa
    ruleNames:
    - restricted
  match:
    any:
    - resources:
        namespaces:
        - delta
  podSecurity:
  - controlName: Capabilities
    images:
    - nginx*
    - redis*
```
There might be cases where specific values are required for the controls in the Pod Security profile. In such cases, the `podSecurity.restrictedField` field can be used to define allowed values for the controls that are exempted from the policy.
For example, service meshes like Istio and Linkerd employ an `initContainer` that requires privileges which are very often problematic in security-conscious clusters. Minimally, these initContainers must add two Linux capabilities which allow them to make modifications to the networking stack: `NET_ADMIN` and `NET_RAW`. These initContainers may go even further by running as the root user, something which is a big no-no in the world of containers.
In this case, `podSecurity.restrictedField` can be used to enforce the entire baseline profile of the Pod Security Standards while excluding only Istio's and Linkerd's images, and only in the initContainers list.
The following PolicyException grants an exemption to `initContainers` that use Istio or Linkerd images, allowing them to bypass the `Capabilities` control. This is achieved by permitting the values `NET_ADMIN` and `NET_RAW` in the `securityContext.capabilities.add` field.
```yaml
apiVersion: kyverno.io/v2
kind: PolicyException
metadata:
  name: pod-security-exception
  namespace: policy-exception-ns
spec:
  exceptions:
  - policyName: psa
    ruleNames:
    - baseline
  match:
    any:
    - resources:
        kinds:
        - Pod
  podSecurity:
  - controlName: Capabilities
    images:
    - "*/istio/proxyv2*"
    - "*/linkerd/proxy-init*"
    restrictedField: spec.initContainers[*].securityContext.capabilities.add
    values:
    - NET_ADMIN
    - NET_RAW
```
The following Pod meets all requirements outlined in the baseline profile except the `Capabilities` control in the `initContainer`. However, it matches the exception, which permits `spec.initContainers[*].securityContext.capabilities.add` to include `NET_ADMIN` and `NET_RAW`. Hence, it will be successfully created.
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: istio-pod
spec:
  initContainers:
  - name: istio-init
    image: docker.io/istio/proxyv2:1.20.2
    args:
    - istio-iptables
    - -p
    - "15001"
    - -z
    - "15006"
    - -u
    - "1337"
    - -m
    - REDIRECT
    - -i
    - '*'
    - -x
    - ""
    - -b
    - '*'
    - -d
    - 15090,15021,15020
    - --log_output_level=default:info
    securityContext:
      allowPrivilegeEscalation: false
      capabilities:
        add:
        - NET_ADMIN
        - NET_RAW
        drop:
        - ALL
      privileged: false
      readOnlyRootFilesystem: false
      runAsGroup: 0
      runAsNonRoot: false
      runAsUser: 0
  containers:
  - name: busybox
    image: busybox:1.35
    args:
    - sleep
    - infinity
```
The following Pod meets all requirements outlined in the baseline profile except the `Capabilities` control in the `initContainer`, and it matches the exception, but it adds `SYS_ADMIN` to `spec.initContainers[*].securityContext.capabilities.add`, which isn't an allowed value. Hence, it will be rejected.
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: istio-pod
spec:
  initContainers:
  - name: istio-init
    image: docker.io/istio/proxyv2:1.20.2
    args:
    - istio-iptables
    - -p
    - "15001"
    - -z
    - "15006"
    - -u
    - "1337"
    - -m
    - REDIRECT
    - -i
    - '*'
    - -x
    - ""
    - -b
    - '*'
    - -d
    - 15090,15021,15020
    - --log_output_level=default:info
    securityContext:
      allowPrivilegeEscalation: false
      capabilities:
        add:
        - NET_ADMIN
        - NET_RAW
        - SYS_ADMIN
        drop:
        - ALL
      privileged: false
      readOnlyRootFilesystem: false
      runAsGroup: 0
      runAsNonRoot: false
      runAsUser: 0
  containers:
  - name: busybox
    image: busybox:1.35
    args:
    - sleep
    - infinity
```
PolicyExceptions with CEL Expressions
Since Kyverno 1.14.0, PolicyExceptions, introduced in the new `policies.kyverno.io` group, support CEL expressions to selectively skip policy enforcement for policy types like `ValidatingPolicy` and `ImageValidatingPolicy` in both admission and background modes.
- A CEL expression under `matchConditions` dynamically matches target resources (e.g., by name, namespace, or labels).
- The `policyRefs` field specifies the name and kind of the policy being excluded from enforcement.
- If the match condition evaluates to `true`, the referenced rule is skipped and logged accordingly in PolicyReports.
Using PolicyException with ValidatingPolicy in Admission Mode
The following `ValidatingPolicy` enforces that all `Deployment` resources must include the label `env=prod`. If this condition is not met, the policy denies the request.
```yaml
apiVersion: policies.kyverno.io/v1alpha1
kind: ValidatingPolicy
metadata:
  name: require-prod-label
spec:
  validationActions:
  - Deny
  matchConstraints:
    resourceRules:
    - apiGroups: ["apps"]
      apiVersions: ["v1"]
      resources: ["deployments"]
      operations: ["CREATE", "UPDATE"]
  validations:
  - expression: >-
      has(object.metadata.labels) && object.metadata.labels.env == 'prod'
    messageExpression: "'Deployment must have label env=prod.'"
```
To exclude a specific `Deployment` from the above policy enforcement, a `PolicyException` can be defined. This example uses a CEL expression to match the Deployment named `skipped-deployment`, allowing it to bypass the validation.
```yaml
apiVersion: policies.kyverno.io/v1alpha1
kind: PolicyException
metadata:
  name: exclude-skipped-deployment
  namespace: default
spec:
  policyRefs:
  - name: require-prod-label
    kind: ValidatingPolicy
  matchConditions:
  - name: skip-by-name
    expression: "object.metadata.name == 'skipped-deployment'"
```
When the exception is triggered during a live admission request, Kyverno logs the decision in a `PolicyReport`. Below is an example showing the policy rule was skipped due to the matching `PolicyException`.
```yaml
apiVersion: wgpolicyk8s.io/v1alpha2
kind: PolicyReport
metadata:
  namespace: default
  labels:
    app.kubernetes.io/managed-by: kyverno
  ownerReferences:
  - apiVersion: apps/v1
    kind: Deployment
    name: skipped-deployment
results:
- policy: require-prod-label
  rule: exception
  result: skip
  message: "rule is skipped due to policy exception: default/exclude-skipped-deployment"
  properties:
    exceptions: exclude-skipped-deployment
    process: admission review
  source: KyvernoValidatingPolicy
  scored: true
scope:
  apiVersion: apps/v1
  kind: Deployment
  name: skipped-deployment
  namespace: default
summary:
  pass: 0
  fail: 0
  warn: 0
  error: 0
  skip: 1
```
Interpreting PolicyReport Results from PolicyException
- `result: skip` indicates the rule was not applied.
- `process: admission review` confirms the evaluation occurred during a live admission request.
- `exceptions: exclude-skipped-deployment` references the applied `PolicyException`.
Just like in admission mode, `PolicyException` also functions in background mode and supports other policy types such as `ImageValidatingPolicy`.
Using PolicyException with ImageValidatingPolicy in Background Mode
In this example, a Pod named `skipped-pod` meets the match criteria of the policy. It is located in the default Namespace, includes the label `prod: true`, and references an unsigned image from ghcr.io. As a result, this image should fail the background policy evaluation due to missing or invalid attestations and signatures.
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: skipped-pod
  namespace: default
  labels:
    prod: "true"
spec:
  containers:
  - name: nginx
    image: 'ghcr.io/kyverno/test-verify-image:unsigned'
```
The `ImageValidatingPolicy` shown below is configured to run only during background scans, not during admission. It targets Pod resources that have the label `prod: true`. When such a resource is encountered, the policy performs three layers of validation: it verifies the image signature using a provided Notary certificate, checks for the presence of an SBOM attestation of type CycloneDX, and confirms that the payload format matches the expected structure.
```yaml
apiVersion: policies.kyverno.io/v1alpha1
kind: ImageValidatingPolicy
metadata:
  name: ivpol-sample
spec:
  webhookConfiguration:
    timeoutSeconds: 20
  failurePolicy: Ignore
  evaluation:
    admission:
      enabled: false
    background:
      enabled: true
  validationActions:
  - Audit
  matchConstraints:
    resourceRules:
    - apiGroups: [""]
      apiVersions: ["v1"]
      operations: ["CREATE"]
      resources: ["pods"]
  matchConditions:
  - name: "check-prod-label"
    expression: >-
      has(object.metadata.labels) && has(object.metadata.labels.prod) && object.metadata.labels.prod == 'true'
  matchImageReferences:
  - glob: ghcr.io/*
  attestors:
  - name: notary
    notary:
      certs:
        value: |-
          -----BEGIN CERTIFICATE-----
          MIIDTTCCAjWgAwIBAgIJAPI+zAzn4s0xMA0GCSqGSIb3DQEBCwUAMEwxCzAJB
          ...
          -----END CERTIFICATE-----
  attestations:
  - name: sbom
    referrer:
      type: sbom/cyclone-dx
  validations:
  - expression: >-
      images.containers.map(image, verifyImageSignatures(image, [attestors.notary])).all(e, e > 0)
    message: failed to verify image with notary cert
  - expression: >-
      images.containers.map(image, verifyAttestationSignatures(image, attestations.sbom, [attestors.notary])).all(e, e > 0)
    message: failed to verify attestation with notary cert
  - expression: >-
      images.containers.map(image, extractPayload(image, attestations.sbom).bomFormat == 'CycloneDX').all(e, e)
    message: sbom is not a cyclone dx sbom
```
This `PolicyException` is defined to exempt this Pod from enforcement. The exception uses a CEL expression, `object.metadata.name == 'skipped-pod'`, to identify the specific resource. It links to the `ImageValidatingPolicy` named `ivpol-sample`, and when the background controller processes the Pod, it detects that the exception applies. As a result, none of the image validation rules are executed for this resource.
```yaml
apiVersion: policies.kyverno.io/v1alpha1
kind: PolicyException
metadata:
  name: check-name
spec:
  policyRefs:
  - name: ivpol-sample
    kind: ImageValidatingPolicy
  matchConditions:
  - name: "check-name"
    expression: "object.metadata.name == 'skipped-pod'"
```
The Kyverno background controller evaluates the Pod, detects that the resource matches the `PolicyException`, and logs the decision in a `PolicyReport`:
```yaml
apiVersion: wgpolicyk8s.io/v1alpha2
kind: PolicyReport
metadata:
  namespace: default
  labels:
    app.kubernetes.io/managed-by: kyverno
  ownerReferences:
  - apiVersion: v1
    kind: Pod
    name: skipped-pod
results:
- policy: ivpol-sample
  rule: exception
  result: skip
  message: "rule is skipped due to policy exception: "
  properties:
    exceptions: check-name
    process: background scan
  source: KyvernoImageValidatingPolicy
  scored: true
scope:
  apiVersion: v1
  kind: Pod
  name: skipped-pod
  namespace: default
summary:
  pass: 0
  fail: 0
  warn: 0
  error: 0
  skip: 1
```
Interpreting PolicyReport Results from PolicyException
- `result: skip` indicates the rule was not applied.
- `process: background scan` confirms the evaluation occurred during a background policy check.
- `exceptions: check-name` references the applied `PolicyException`.
This enables fine-grained, declarative exemptions without modifying the core policy logic, keeping your security posture strong while allowing flexibility for exceptional cases.