KodeShift pipelines are defined as YAML files in a .kodeshift/ directory at the root of your Git repository. This guide covers every configuration option available.
Directory Structure
.kodeshift/
├── .kodeshift.yaml                  # Entry point — stages + includes
├── pipeline/
│   ├── .kodeshift-pipeline.yaml     # Job definitions
│   ├── .kodeshift-common.yaml       # Reusable script blocks
│   └── .kodeshift-trigger.yaml      # Webhook triggers + environment mapping
└── chart/                           # Helm chart for deployment
    ├── Chart.yaml
    ├── values.yaml                  # Base values
    ├── values-dev.yaml              # Dev overrides
    ├── values-staging.yaml          # Staging overrides
    └── values-prod.yaml             # Production overrides
When you initialize a pipeline through the UI, this entire structure is generated automatically on a dedicated kodeshift branch in your repository.
Main Configuration (.kodeshift.yaml)
The entry point defines which stages exist and which files to include.
stages:
  - gitleaks
  - analysis
  - build
  - deploy

includes:
  - pipeline/.kodeshift-pipeline.yaml
  - pipeline/.kodeshift-common.yaml
  - pipeline/.kodeshift-trigger.yaml
Fields
| Field | Type | Description |
|---|---|---|
| stages | list[string] or dict | Ordered list of stage names (recommended), or legacy dict mapping environments to stage lists |
| includes | list[string] | Relative paths to included YAML files |
Use the list format for stages. The dict format (stages: {dev: [...], qa: [...]}) is legacy and limits dynamic environment selection.
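For reference, a sketch of the legacy dict format (the per-environment stage lists here are illustrative):

stages:
  dev:
    - gitleaks
    - build
    - deploy
  qa:
    - analysis
    - build
    - deploy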
Job Definitions (.kodeshift-pipeline.yaml)
Each top-level key is a job name. The job's stage field must match one of the stages declared in .kodeshift.yaml.
build:
  stage: build
  image:
    name: shiftlabsdev/kaniko-executor:latest
    command:
      - /kaniko/executor
    args:
      - "--dockerfile=${DOCKERFILE}"
      - "--context=dir:///workspace/"
      - "--destination=${REGISTRY}/${REGISTRY_REPO}/${APP_NAME}:${ENV}-${IMAGE_TAG}"
    env:
      - name: DOCKER_CONFIG
        value: /kaniko/.docker
  script: []
  init:
    - "@clone_scripts"
  environments:
    - "*"
  allow_fail: true
  timeout: 600
Field Reference
| Field | Type | Required | Default | Description |
|---|---|---|---|---|
| stage | string | yes | — | Must match a name in the stages list |
| image | object | yes | — | Container image configuration (see below) |
| script | list | yes | — | Main commands to execute |
| init | list | no | [] | Setup commands that run before script |
| environments | list | yes | — | Environment patterns this job applies to |
| allow_fail | bool or list | no | false | true for all envs, or a list of env names |
| trigger | string | no | "auto" | "auto" or "manual" |
| timeout | int | no | 600 | Max execution time in seconds |
| parallel_execution | bool | no | false | Run in parallel with other jobs in the same group |
| stage_group | string | no | null | Group name for parallel execution |
| stage_group_order | int | no | null | Execution order within the group |
| job_name | string | no | null | Display name in the UI |
Both snake_case and camelCase field names are accepted (e.g. stage_group or stageGroup).
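For instance, a job could use the camelCase spellings instead (stageGroupOrder is an assumed camelCase form, by analogy with the documented stageGroup):

secret_scan:
  stageGroup: "Security"        # same as stage_group
  stageGroupOrder: 0            # assumed equivalent of stage_group_order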
Image Configuration
| Field | Type | Description |
|---|---|---|
| name | string | Container image (e.g. shiftlabsdev/builder:latest) |
| command | list | Override container entrypoint |
| args | list | Arguments passed to the entrypoint |
| env | list | Environment variables as {name, value} pairs |
For standard jobs, only name is needed. The command, args, and env fields are used for specialized containers like Kaniko that require a custom entrypoint.
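A minimal image block for a standard job is therefore just the name (image taken from the table above):

image:
  name: shiftlabsdev/builder:latest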
Script References
Scripts in init and script fields support several formats.
Reference syntax (@)
Reference a reusable block defined in .kodeshift-common.yaml:
init:
  - "@clone_scripts"

script:
  - "@security_scripts"
The legacy use_ prefix also works (use_clone_scripts) but @ is preferred.
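Both of the following resolve to the same reusable block:

# Preferred
init:
  - "@clone_scripts"

# Legacy, still accepted
init:
  - "use_clone_scripts"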
Inline commands
script:
  - "echo 'Hello'"
  - |
    echo "Multiline"
    echo "Commands"
Scripts can also be objects:
script:
  - command: "npm"
    args: ["run", "build"]
  - run: "echo done"
  - shell: "bash"
    command: "echo $SHELL"
Common Scripts (.kodeshift-common.yaml)
Define reusable script blocks that jobs reference with @:
clone_scripts:
  - "echo '--- Repository Clone ---'"
  - |
    git clone https://x-access-token:${GIT_TOKEN}@${GIT_REPO_URL} /workspace
  - "cd /workspace"
  - "git checkout ${BRANCH_CODE}"

security_scripts:
  - "cd /workspace"
  - |
    gitleaks detect \
      --source /workspace \
      --report-format json
Each key becomes a referenceable script name. Values are command lists following the same formats described above.
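Putting it together, a job in .kodeshift-pipeline.yaml could consume both blocks like this (the job name and container image are illustrative, not prescribed by KodeShift):

gitleaks:
  stage: gitleaks
  image:
    name: zricethezav/gitleaks:latest   # illustrative image
  init:
    - "@clone_scripts"
  script:
    - "@security_scripts"
  environments:
    - "*"
  allow_fail: ["dev", "qa"]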
Environment Pattern Matching
The environments field on each job uses pattern matching to determine which environments a job runs in.
| Pattern | Matches | Example |
|---|---|---|
| * | All environments | ["*"] |
| prod | Exact match | ["prod"] |
| prod* | Starts with | ["prod*"] matches prod, prod-eu, prod-us |
| *-eu | Ends with | ["*-eu"] matches prod-eu, staging-eu |
| *prod* | Contains | ["*prod*"] matches preprod, prod-eu |
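For example, a job that should only run in production-like environments might declare (the job name is illustrative):

deploy_prod:
  stage: deploy
  environments:
    - "prod*"        # prod, prod-eu, prod-us
  trigger: manual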
Conditional allow_fail
allow_fail can be a list of environment names instead of a boolean:
analysis:
  allow_fail: ["dev", "qa"]   # Failures ignored in dev/qa, fatal in prod
  environments:
    - "*"
Trigger Configuration (.kodeshift-trigger.yaml)
Defines which Git events trigger the pipeline and how branches/tags map to environments.
on:
  push:
    branches:
      - develop
      - feature/*
      - feature/**
      - release/*
      - hotfix/*
  tag:
    patterns:
      - v*
      - release-*
  merge_request:
    branches:
      - main
      - master
    types:
      - merged

environment_mapping:
  develop: dev
  feature/*: dev
  feature/**: dev
  release/*: staging
  hotfix/*: staging
  v*: prod
  release-*: prod
  main: prod
  master: prod
Trigger Types
| Trigger | Fields | Description |
|---|---|---|
| push | branches | Fires on push to matching branches |
| tag | patterns | Fires when a tag matching the pattern is pushed |
| merge_request | branches, types | Fires on merge request events to matching branches |
Branch Patterns
feature/* — matches one path segment (e.g. feature/login)
feature/** — matches nested segments (e.g. feature/user/auth)
v* — matches tags like v1.0.0, v2.1.3-beta
Environment Mapping
Maps a branch or tag pattern to an environment name. When a webhook fires, the pipeline runner looks up the matching pattern to determine which environment to deploy to. For example, a push to release/1.4.0 matches release/* in the configuration above and deploys to staging.
Workflow Models
Two common models are supported; a trigger mapping for the first is sketched after this list.

Model 1: tag-driven releases. Development happens on short-lived branches merged to main, and production deploys are triggered by tagging.
feature/* → dev
main → staging (via push)
v* → prod (via tag)

Model 2: release branches. Long-lived develop and main branches, with release branches for staging.
feature/* → dev
release/* → staging
main → prod (via merge request)
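A trigger mapping for the tag-driven model might look like this, following the .kodeshift-trigger.yaml format above:

environment_mapping:
  feature/*: dev
  main: staging
  v*: prod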
Parallel Execution (Stage Groups)
Jobs that share the same stage_group and stage_group_order run in parallel:
sast_scan:
  stage: analysis
  stage_group: "Security"
  stage_group_order: 0
  job_name: "SAST Scan"
  parallel_execution: true
  # ...

secret_scan:
  stage: analysis
  stage_group: "Security"
  stage_group_order: 0
  job_name: "Secret Scan"
  parallel_execution: true
  # ...
Both jobs execute simultaneously within the analysis stage.
Helm Chart Integration
The .kodeshift/chart/ directory holds a standard Helm chart used by ArgoCD for deployment.
chart/
├── Chart.yaml
├── values.yaml # Base values (shared across environments)
├── values-dev.yaml # Dev-specific overrides
├── values-staging.yaml # Staging-specific overrides
└── values-prod.yaml # Production-specific overrides
During deployment, ArgoCD applies values-${ENV}.yaml on top of the base values.yaml, so you only need to specify overrides per environment.
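As a sketch of that layering (the keys below are hypothetical; only the pattern matters), the base file carries shared defaults and each environment file overrides just what differs:

# values.yaml (base, hypothetical keys)
replicaCount: 1
resources:
  requests:
    cpu: 250m

# values-prod.yaml (production overrides only)
replicaCount: 3
resources:
  requests:
    cpu: 500m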
Pipeline Lifecycle
| Status | Description |
|---|---|
| pending | Pipeline created, waiting to start |
| running | Currently executing stages |
| success | All stages completed successfully |
| success_with_warnings | Completed, but one or more allow_fail stages failed |
| failed | A required stage failed |
| cancelled | Stopped by user |
| waiting | Paused at a trigger: manual stage |
Built-in Variables
These variables are available in all scripts via ${VARIABLE} syntax.
| Variable | Description |
|---|---|
| ENV | Target environment name (e.g. dev, staging, prod) |
| IMAGE_TAG | Docker image tag |
| BRANCH_CODE | Git branch for the application source code |
| BRANCH_PIPELINE | Git branch for pipeline config (usually kodeshift) |
| APP_NAME | Application / project name |
| NAMESPACE | Kubernetes namespace for deployment |
| CLUSTER_NAME | Target Kubernetes cluster |
| RUN_NAMESPACE | Namespace where the pipeline job pod runs |
| GIT_REPO_URL | Repository URL (without protocol) |
| GIT_TOKEN | Git access token |
| GIT_USERNAME | Git username |
| ARGOCD_SERVER | ArgoCD server URL |
| ARGOCD_TOKEN | ArgoCD authentication token |
| ARGOCD_USERNAME | ArgoCD username |
| ARGOCD_PASSWORD | ArgoCD password |
| ARGOCD_VERSION | ArgoCD CLI version |
| ARGOCD_PROJECT | ArgoCD project name |
| ARGOCD_INSTANCE_ID | ArgoCD instance identifier |
| REGISTRY | Container registry URL |
| REGISTRY_REPO | Registry repository path |
| DOCKERFILE | Path to Dockerfile (default: /workspace/Dockerfile) |
| SONAR_HOST_URL | SonarQube server URL |
| SONAR_TOKEN | SonarQube authentication token |
| DOCKER_CONFIG | Docker config directory path |
Sensitive variables like GIT_TOKEN, ARGOCD_PASSWORD, and SONAR_TOKEN are injected from Vault at runtime and are never stored in your YAML files.
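As an illustration, a deploy-stage script could interpolate several of these (the echo commands are illustrative; the image reference mirrors the Kaniko destination shown earlier):

script:
  - "echo 'Deploying ${APP_NAME} to ${ENV} (namespace ${NAMESPACE}, cluster ${CLUSTER_NAME})'"
  - "echo 'Image: ${REGISTRY}/${REGISTRY_REPO}/${APP_NAME}:${ENV}-${IMAGE_TAG}'"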
Validation Rules
When a pipeline is saved or triggered, the system validates:
Main YAML schema — stages and includes must conform to the expected structure
Included files exist — every path in includes must resolve to a valid file
Stage references — each job’s stage field must match a declared stage name
Environment consistency (legacy dict format only) — environments declared in job files must match those in the stages dict
Common scripts — YAML syntax is validated but references are resolved at runtime
Validation errors include the file path, stage name, and specific mismatch details to help you fix issues quickly.
Complete Example
A minimal but complete pipeline with four stages. The entry point is shown below; the accompanying .kodeshift-pipeline.yaml, .kodeshift-common.yaml, and .kodeshift-trigger.yaml files follow the formats described in the sections above.
.kodeshift.yaml
stages:
  - gitleaks
  - analysis
  - build
  - deploy

includes:
  - pipeline/.kodeshift-pipeline.yaml
  - pipeline/.kodeshift-common.yaml
  - pipeline/.kodeshift-trigger.yaml