Infrastructure
How `chalk apply` packages a Chalk project, builds container images, and rolls them out into the Data Plane.
`chalk apply` is the command that turns a local Chalk project (Python source plus the
schemas of the features and resolvers it defines) into a running deployment in the Data Plane.
This page describes the end-to-end pipeline that runs when you invoke it: what the CLI
sends, what the Metadata Plane builds, where the artifacts live, and how the rollout
reaches your Kubernetes cluster.
For the lower-level RPC sequence (BuilderService, DeployService, GraphQL calls), see
chalk apply — RPC Sequence.

At a high level, `chalk apply` performs five stages: package the project locally, build a container image, persist the source archive, push the image to a registry, and roll the deployment out to the cluster.

When you run `chalk apply`, the CLI first introspects the local project:

- `chalkpy` runs locally to enumerate every feature class and resolver and produce a serialized representation of the feature graph.
- If `--force` is not passed, the CLI also fetches a diff against the currently deployed graph and prompts the user to confirm.
- The CLI calls `BuilderService.CreateDeployment` with git metadata, Python requirements, and project settings. The Metadata Plane responds with a `deployment_id`.
- The CLI uploads the packaged source via `BuilderService.UploadSource`, keyed by `deployment_id`.

Only schemas, source code, and project metadata transit to the Metadata Plane at this stage; no feature values are involved.
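The packaging step can be sketched as follows. This is a simplified illustration: the helper name and the use of a gzipped tarball are assumptions, and the real archive format uploaded via `BuilderService.UploadSource` is an implementation detail of the Chalk CLI.

```python
import io
import tarfile


def package_source(files: dict[str, bytes]) -> bytes:
    """Bundle project source files into an in-memory gzipped tarball.

    Hypothetical stand-in for the source archive the CLI uploads;
    the actual wire format is a Chalk CLI implementation detail.
    """
    buf = io.BytesIO()
    with tarfile.open(fileobj=buf, mode="w:gz") as tar:
        for path, data in sorted(files.items()):
            info = tarfile.TarInfo(name=path)
            info.size = len(data)
            tar.addfile(info, io.BytesIO(data))
    return buf.getvalue()


# Example: a toy project with a config file and one feature module.
archive = package_source({
    "chalk.yaml": b"project: example\n",
    "features.py": b"from chalk.features import features\n",
})
```

Note that only source and schemas are archived here; feature values never enter the bundle.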
The Metadata Plane delegates the actual image build to Argo Workflows running in the Metadata Plane cluster. Each deployment kicks off a workflow that builds the deployment's container image and pushes it to the customer's registry.
Build progress is observable in real time via `BuilderService.GetDeploymentSteps`, which
the CLI polls when `--await` is set (the default).
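The polling side of this can be sketched as below. Here `get_steps` is a caller-supplied callable standing in for the `BuilderService.GetDeploymentSteps` RPC, and the `(name, status)` step shape and `"RUNNING"` status string are assumptions for illustration, not the real response schema.

```python
import time


def await_build(get_steps, deployment_id: str, interval_s: float = 2.0):
    """Poll build steps and report each one as it finishes.

    `get_steps(deployment_id)` returns a list of (name, status) pairs;
    both the callable and the status values are illustrative stand-ins.
    """
    reported: set[str] = set()
    while True:
        steps = get_steps(deployment_id)
        for name, status in steps:
            if status != "RUNNING" and name not in reported:
                reported.add(name)
                print(f"{name}: {status}")
        if all(status != "RUNNING" for _, status in steps):
            return steps
        time.sleep(interval_s)
```

A loop like this is what lets the CLI stream step-by-step progress to the terminal while the Argo workflow runs.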
The uploaded source archive is persisted to object storage in the customer's cloud
account, in the source bucket provisioned during installation.

These archives are retained for audit and reproducibility: any past deployment can be rebuilt or inspected from its source archive.
Built images are pushed to a container registry in the customer's cloud account, so that image bytes never leave the customer's cloud boundary.
The Data Plane cluster’s nodes pull images directly from this registry using IRSA (AWS), Workload Identity (GCP), or Managed Identity (Azure). The Metadata Plane never proxies image pulls.
Once the build is ready, the Metadata Plane uses its EKS/GKE/AKS API access to roll the new deployment into the Data Plane cluster. The CLI polls `DeployService.GetDeployment` until the new pods report ready. Terminal statuses are `SUCCESS`, `FAILURE`, `INTERNAL_ERROR`, `BOOT_ERRORS`, `TIMEOUT`, and `CANCELLED`.

For details on the Metadata Plane → Data Plane connectivity that enables this rollout, see Metadata Plane & Data Plane Communication.
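With `--await`, waiting for the rollout amounts to polling until one of the terminal statuses appears. A minimal sketch, where `get_deployment` stands in for the `DeployService.GetDeployment` RPC and the client-side timeout is an assumption:

```python
import time

# Terminal statuses as documented for DeployService.GetDeployment.
TERMINAL_STATUSES = {
    "SUCCESS", "FAILURE", "INTERNAL_ERROR",
    "BOOT_ERRORS", "TIMEOUT", "CANCELLED",
}


def await_rollout(get_deployment, deployment_id: str,
                  interval_s: float = 5.0, timeout_s: float = 1800.0) -> str:
    """Poll deployment status until it reaches a terminal state.

    `get_deployment(deployment_id)` returns a status string; the
    callable and the client-side deadline are illustrative stand-ins.
    """
    deadline = time.monotonic() + timeout_s
    while True:
        status = get_deployment(deployment_id)
        if status in TERMINAL_STATUSES:
            return status
        if time.monotonic() >= deadline:
            raise TimeoutError(f"deployment {deployment_id} still {status}")
        time.sleep(interval_s)
```

Any non-terminal status (pods still starting, image still pulling) keeps the loop going until the deployment resolves one way or the other.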
`chalk apply --branch <name>` follows a slimmed-down version of the same pipeline,
optimized for fast iteration: the CLI calls `BuilderService.StartBranch` to ensure a branch server is running, then uploads the source archive directly to `DeployService.DeployBranch`.

Even though `chalk apply` is orchestrated by the Metadata Plane, the heavy artifacts stay inside the customer's cloud account: source archives in the source bucket, images in the container registry.
The Metadata Plane retains only deployment metadata (deployment ID, git SHA, status, build logs) and the feature-graph schema needed to plan and validate queries.
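This split can be summarized as a sketch of the record the Metadata Plane keeps per deployment. The class and field names here are illustrative, not Chalk's actual schema:

```python
from dataclasses import dataclass, field


@dataclass
class DeploymentRecord:
    """Illustrative shape of what the Metadata Plane retains."""
    deployment_id: str
    git_sha: str
    status: str
    build_log: str = ""
    # Schema of the feature graph, used to plan and validate queries.
    # Schemas only; no feature values are ever stored here.
    feature_graph_schema: dict = field(default_factory=dict)

# Heavy artifacts (source archives, container images) remain in the
# customer's bucket and registry; the record references them by ID.
```

Anything heavier than this record stays behind the customer's cloud boundary.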