Kubernetes Events provide a record of what is happening inside the cluster. They report state changes, errors, and other significant occurrences across all resources, making them essential for troubleshooting and monitoring cluster health.

Key Concepts

Event

A record of a state change or occurrence in the cluster, such as pod scheduling, image pulling, or container failures.

Event Type

Events are classified as Normal (informational) or Warning (potential issues).

Reason

A short, machine-readable code indicating what happened (e.g., Scheduled, Pulled, Failed).

Involved Object

The Kubernetes resource (Pod, Deployment, Node, etc.) that the event relates to.

Events are automatically collected from all namespaces in the cluster. They provide real-time visibility into cluster activity without requiring namespace selection.
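To tie the concepts above together, here is a simplified view of a single event as a Python dict. The field names follow the Kubernetes core/v1 Event API; the values (pod name, message, timestamps) are purely illustrative.

```python
# A simplified Kubernetes Event. Field names follow the core/v1 Event API;
# the values are illustrative, and real objects carry additional metadata.
event = {
    "type": "Warning",                       # Normal or Warning
    "reason": "FailedScheduling",            # short machine-readable code
    "message": "0/3 nodes are available: insufficient memory.",
    "involvedObject": {                      # the resource the event relates to
        "kind": "Pod",
        "name": "web-7d4b9c6f5-x2x2k",       # illustrative pod name
        "namespace": "default",
    },
    "count": 4,                              # deduplicated occurrence count
    "lastTimestamp": "2024-01-01T12:00:00Z", # when last observed ("Last Seen")
}

print(event["type"], event["reason"])
```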

Required Permissions

| Action | Permission |
|---|---|
| View Events | iam:project:infrastructure:kubernetes:read |
Events are treated as read-only here. They are generated automatically by Kubernetes components and cannot be created, edited, or deleted through this interface.

Event Types

| Type | Description |
|---|---|
| Normal | Informational events indicating successful operations (scheduling, pulling images, starting containers) |
| Warning | Events indicating potential problems that may require attention (failures, errors, resource issues) |

Event Fields

| Field | Description |
|---|---|
| Namespace | The namespace where the event occurred |
| Last Seen | When the event was last observed |
| Type | Event severity: Normal or Warning |
| Reason | Machine-readable code for the event cause |
| Object | The resource (Kind/Name) that the event relates to |
| Message | Human-readable description of what happened |

How to View Events

1. Select Cluster: Choose a cluster from the cluster dropdown.
2. View Events: Events from all namespaces are displayed automatically with real-time updates.
3. Filter and Search: Use the search box to find events by message content. Filter by event reason to focus on specific event types.
4. Review Statistics: Check the summary cards for:
  • Total Events: All events in the cluster
  • Warnings: Events that may indicate problems
  • Normal: Informational events
  • Namespaces: Number of namespaces with events
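The summary-card numbers can be derived directly from the raw event list. A minimal sketch, assuming each event is a dict carrying the `type` and `namespace` fields from the table above:

```python
def event_stats(events):
    """Compute the summary-card numbers from a list of event dicts."""
    warnings = sum(1 for e in events if e["type"] == "Warning")
    return {
        "total": len(events),                              # Total Events card
        "warnings": warnings,                              # Warnings card
        "normal": len(events) - warnings,                  # Normal card
        "namespaces": len({e["namespace"] for e in events}),  # Namespaces card
    }

# Illustrative sample data.
events = [
    {"type": "Normal", "namespace": "default", "reason": "Scheduled"},
    {"type": "Warning", "namespace": "default", "reason": "BackOff"},
    {"type": "Normal", "namespace": "kube-system", "reason": "Pulled"},
]
print(event_stats(events))  # {'total': 3, 'warnings': 1, 'normal': 2, 'namespaces': 2}
```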

Common Event Reasons

Pod Events

| Reason | Type | Description |
|---|---|---|
| Scheduled | Normal | Pod assigned to a node |
| Pulled | Normal | Container image pulled successfully |
| Created | Normal | Container created |
| Started | Normal | Container started |
| Killing | Normal | Container being terminated |
| Failed | Warning | Container failed to start |
| BackOff | Warning | Container restarting after failure |
| FailedScheduling | Warning | Pod could not be scheduled |
| Unhealthy | Warning | Liveness/readiness probe failed |
| FailedMount | Warning | Volume mount failed |

Node Events

| Reason | Type | Description |
|---|---|---|
| NodeReady | Normal | Node became ready |
| NodeNotReady | Warning | Node is not ready |
| NodeHasDiskPressure | Warning | Node has disk pressure |
| NodeHasMemoryPressure | Warning | Node has memory pressure |
| NodeHasNoDiskPressure | Normal | Disk pressure resolved |
| NodeHasNoMemoryPressure | Normal | Memory pressure resolved |

Deployment Events

| Reason | Type | Description |
|---|---|---|
| ScalingReplicaSet | Normal | Deployment scaling replicas |
| SuccessfulCreate | Normal | ReplicaSet created |
| SuccessfulDelete | Normal | ReplicaSet deleted |
| DeploymentRollback | Normal | Deployment rolled back |

Other Common Events

| Reason | Type | Description |
|---|---|---|
| ProvisioningSucceeded | Normal | PVC provisioning completed |
| ProvisioningFailed | Warning | PVC provisioning failed |
| FailedBinding | Warning | PVC binding failed |
| ExternalProvisioning | Normal | External provisioner working |
| LeaderElection | Normal | Leader election occurred |

Troubleshooting with Events

Pod Stuck in Pending

Look for events with reason:
  • FailedScheduling - Check for resource constraints or node selectors
  • FailedMount - Check PVC and volume configuration
  • NodeNotReady - Check node health

Events will show the specific reason for scheduling failure.
Pod Crashing or Restarting

Look for events with reason:
  • BackOff - Container crashing, check logs
  • Unhealthy - Probe failures, check probe configuration
  • Failed - Container start failure, check image and command

Check the event message for specific error details.
Image Pull Failures

Look for events with reason:
  • Failed - With message about image pull
  • BackOff - ImagePullBackOff state

Common causes: wrong image name, missing credentials, network issues.
Volume and Storage Issues

Look for events with reason:
  • FailedMount - Volume could not be mounted
  • FailedAttachVolume - Volume attachment failed
  • ProvisioningFailed - Dynamic provisioning failed

Check StorageClass and PVC configuration.
Node Issues

Look for events with reason:
  • NodeNotReady - Node health problems
  • NodeHasDiskPressure - Disk space issues
  • NodeHasMemoryPressure - Memory issues
  • Rebooted - Node was rebooted

Events show when conditions changed.
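The checklists above can be folded into a small lookup that turns a Warning reason into a suggested first step. A sketch whose hints simply restate the guidance above:

```python
# First-step hints for common Warning reasons, restating the checklists above.
TROUBLESHOOTING_HINTS = {
    "FailedScheduling": "Check for resource constraints or node selectors.",
    "FailedMount": "Check PVC and volume configuration.",
    "FailedAttachVolume": "Check volume attachment and storage backend.",
    "ProvisioningFailed": "Check StorageClass and PVC configuration.",
    "BackOff": "Container crashing or image pull failing; check logs and image.",
    "Unhealthy": "Probe failures; check liveness/readiness probe configuration.",
    "Failed": "Container start or image pull failure; check image and command.",
    "NodeNotReady": "Check node health.",
    "NodeHasDiskPressure": "Check node disk space.",
    "NodeHasMemoryPressure": "Check node memory usage.",
}

def hint_for(event):
    """Return a first troubleshooting step for an event's reason."""
    return TROUBLESHOOTING_HINTS.get(
        event["reason"], "Check the event message for details."
    )

print(hint_for({"reason": "FailedScheduling"}))
```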
No Events Displayed

If no events appear:
  • Verify the cluster is ready and accessible
  • Check if you have read permission for the cluster
  • Events expire after a retention period (1 hour by default in Kubernetes)
  • Try refreshing the page

FAQ

How long are events retained?

By default, Kubernetes retains events for 1 hour. After this time, events are automatically garbage collected. This retention period is configurable at the cluster level via the --event-ttl flag on the API server.
How are duplicate events handled?

Events can be deduplicated by Kubernetes. The "count" field shows how many times an event occurred. In the UI, you see the most recent occurrence with its timestamp.
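Deduplication can be pictured as keyed aggregation: repeated occurrences of the same (object, reason) pair bump the count and advance the last-seen timestamp. A rough sketch of the idea, not Kubernetes' actual implementation:

```python
def aggregate(occurrences):
    """Collapse repeated occurrences into deduplicated events with a count."""
    deduped = {}
    for occ in occurrences:  # occurrences assumed sorted by timestamp
        key = (occ["object"], occ["reason"])
        if key in deduped:
            deduped[key]["count"] += 1
            deduped[key]["lastTimestamp"] = occ["timestamp"]
        else:
            deduped[key] = {
                "object": occ["object"],
                "reason": occ["reason"],
                "count": 1,
                "lastTimestamp": occ["timestamp"],
            }
    return list(deduped.values())

# Three occurrences of the same failure collapse into one event with count=3.
occurrences = [
    {"object": "Pod/web-0", "reason": "BackOff", "timestamp": "12:00"},
    {"object": "Pod/web-0", "reason": "BackOff", "timestamp": "12:05"},
    {"object": "Pod/web-0", "reason": "BackOff", "timestamp": "12:10"},
]
events = aggregate(occurrences)
print(events[0]["count"], events[0]["lastTimestamp"])  # 3 12:10
```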
Can events be created, edited, or deleted?

Events are generated automatically by Kubernetes components and controllers. They cannot be manually created, edited, or deleted through normal means. They are purely informational records.
What does Last Seen mean?

Last Seen indicates when the event was most recently observed. For recurring events (like repeated failures), this shows the latest occurrence, not when the problem first started.
Why should I monitor Warning events?

Warning events indicate potential problems that may need attention:
  • Failed operations (scheduling, volume mounting, image pulling)
  • Resource constraints (disk/memory pressure)
  • Health check failures
Monitoring Warning events helps identify issues before they become critical.
How do I find events for a specific pod?

Use the search box to search for the pod name. Events related to that pod will be shown. You can also search by namespace, event message, or any other field content.
What is the difference between events and logs?

Events are cluster-level records of significant occurrences (state changes, errors). They are short-lived and structured. Logs are continuous output from containers; they provide detailed application-level information and are typically retained longer. Use events for quick troubleshooting of Kubernetes issues; use logs for application debugging.
Why are events shown from all namespaces?

Events are displayed cluster-wide to provide complete visibility into cluster activity. This helps identify cross-namespace issues and understand the overall cluster state. Use search and filters to focus on specific namespaces.

Best Practices

Monitor Warning Events: Set up alerts for Warning events, especially FailedScheduling, Unhealthy, and Failed events. These often indicate problems that need attention.
Use Events for Initial Troubleshooting: When a workload has issues, events are the first place to look. They provide immediate context about what Kubernetes is doing with your resources.
Correlate Events with Resource Status: Events explain why a resource is in a certain state. If a Pod is Pending, events will show the scheduling failure reason.
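For instance, correlating a Pending Pod with its scheduling events can be sketched as follows (assuming events carry the `involvedObject`, `reason`, and `message` fields described earlier; the sample message is illustrative):

```python
def explain_pending(pod_name, events):
    """Return the FailedScheduling message explaining a Pending pod, if any."""
    for e in events:
        if e["involvedObject"]["name"] == pod_name and e["reason"] == "FailedScheduling":
            return e["message"]
    return None

# Illustrative event for a pod stuck in Pending.
events = [
    {"involvedObject": {"name": "web-0"}, "reason": "FailedScheduling",
     "message": "0/3 nodes are available: insufficient cpu."},
]
print(explain_pending("web-0", events))
```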