Chalk CLI Reference

This reference page documents every command and flag available in Chalk's command-line interface.

The Chalk CLI allows you to create, update, and manage your feature pipelines directly from your terminal.

With the Chalk CLI, you can:

  • Create, update, and validate feature pipelines, including migrating existing feature data
  • Query for online and offline data from your feature pipelines, right in your terminal
  • Retrieve detailed information about your projects, environments, and data

Getting Started

Installing Chalk is easy! On Linux and Mac, run the command below to install the latest version of Chalk.

$ curl -s -L https://api.chalk.ai/install.sh | sh

On Windows, download the latest version of the executable from one of the following links:

https://storage.googleapis.com/cli-go.chalk-prod.chalk.ai/latest/chalk-windows-amd64

https://storage.googleapis.com/cli-go.chalk-prod.chalk.ai/latest/chalk-windows-386

Subsequent updates for Mac, Linux, and Windows can be installed using chalk update. After installing, you may need to restart your terminal or source your shell's rc file (e.g. ~/.bashrc or ~/.zshrc).

$ curl -s -L https://api.chalk.ai/install.sh | sh
Installing Chalk...
Downloading binary for Darwin arm64... Done!
Version: v1.33.3
Platform:
Hash: 2e819766273dbc14ab2cd1094a5fbe5ed8fb388b
Build Time: 2025-07-27T16:29:24+00:00
OS: darwin
Arch: arm64
Chalk was installed successfully to /Users/emarx/.chalk/bin/chalk-v1.33.3
Run 'chalk --help' to get started.
You may need to open a new terminal window for these changes to take effect.

The Chalk CLI supports a number of flags for every command.

Get help for any command in the CLI with chalk help [command].

Every command also accepts the -h or --help flag to see help for that command.

Flags

Send the request to a given branch.

Override the environment id, if desired. To see the default environment, run chalk environment.

Override the API host. This flag is only relevant if you're running on a Chalk Isolated deployment. You can also set the API host with the environment variable CHALK_API_HOST. In either case, the host is only required the first time you connect your terminal to Chalk.

Override the gRPC API host. This flag is only relevant if you're running on a Chalk Isolated deployment. You can also set the gRPC API host with the environment variable CHALK_GRPC_API_HOST. In either case, the host is only required the first time you connect your terminal to Chalk.

Override the client id.

Override the client secret.

Override the access token. When provided, this token will be used directly without performing a token exchange.

Override the path to the auth config file. By default, this is ~/.chalk.yml.

Output all commands in JSON instead of the default human-readable format.

Hides loading spinners. This flag can be useful in CI and other non-TTY environments, where spinners may render poorly.

Enable verbose logging.

By default, Chalk checks for updates to the CLI every 24 hours. If --skip-version-check is set, version checks will not be performed.

Set the working directory for finding chalk.yml and auth config files.

Setup

Connect the CLI to your Chalk account by logging in to persist your client ID and secret locally.

The Chalk CLI runs commands using a global configuration or project-specific configurations. To configure the CLI globally, run:

$ chalk login

You'll be redirected to the dashboard to confirm that you want to give the CLI access to your account.

All configurations are stored in ~/.config/chalk.yml but you can use the XDG_CONFIG_HOME environment variable to override this location.

Flags

If set, Chalk will open the CLI authorization window in your browser without asking. Useful for automated scripts.

$ chalk login
✓ Created session
Open the authorization page in your browser? Yes
Complete authorization at https://chalk.ai/cli/ls-cldtwriw700ds62e07sr55
✓ Opened login window in browser
⡿ Waiting...

Use the environment command to view and edit the current environment.

In Chalk, each team has many projects, and each project has many environments. Within an environment, you may define different data sources and different resolvers.

Arguments
<env>
string

The name of the environment to activate. When <env> is not specified, this command prints information about the current environment.

$ chalk environment prod
✓ Fetched available environments
✓ Set environment to 'prod'

Use the config command to view information about the authorization configuration that the CLI is using to make requests.

When you connect the CLI to your Chalk account by logging in with chalk login, Chalk stores your client ID and secret in ~/.config/chalk.yml (unless overridden by the XDG_CONFIG_HOME environment variable).

The config will change according to the active environment (determined by chalk environment) and active project (determined by chalk project).

$ chalk config
Name Value Source
─────────────────────────────────────────────────────────────────────────
Client ID token-392c737aa1e467e42e85ae3e8417a003 default token
Client Secret ts-6307f46a00a68436b0f955b82b7fb30075d16 default token
Environment btfxgaqqxbt7z default token
Environment Name default default token
API Server https://api.chalk.ai default token
GRPC API Server https://api.chalk.ai default token
Query Server https://api.chalk.ai default token

Use the init command to set up a new Chalk project. This command creates two files for you: chalk.yaml and .chalkignore. The first file, chalk.yaml, contains configuration information about your project. The second file, .chalkignore, tells the CLI which files to ignore when deploying to your Chalk environment. You can edit this file and use it just like a .gitignore file.
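A .chalkignore file uses the same pattern syntax as a .gitignore file. A minimal example (the entries below are illustrative, not defaults generated by the CLI):

```
# Virtual environments and caches that shouldn't be deployed
.venv/
__pycache__/
*.pyc

# Local notebooks and tests
*.ipynb
tests/
```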

To configure AI assistant prompts, use chalk init agent-prompt [provider].

$ chalk init
Created project config file chalk.yaml
Created .chalkignore file

Configure AI assistant prompts with Chalk-specific context for your preferred AI coding assistant.

This command creates a configuration file tailored to your AI provider that includes:

  • Comprehensive Chalk feature engineering best practices - Learn the Chalk way of building features
  • Code patterns and examples - Feature classes, resolvers, and expressions with real examples
  • DataFrame operations - Windowed aggregations, filtering, and projections
  • SQL resolver patterns - Data source integration and optimization techniques
  • Performance optimization - Expression compilation to C++ for low-latency execution
  • Deployment workflows - Branch deployments, testing strategies, and CI/CD patterns
  • LLM integration patterns - Prompt engineering and model integration

Why Use Agent Prompts?

These prompts are carefully crafted and continuously evaluated to ensure AI assistants provide accurate, idiomatic Chalk code that follows best practices. The prompts help AI assistants understand Chalk's unique approach to feature engineering:

  • Python-based feature definitions without DSLs
  • Expression compilation to C++ for low-latency execution
  • Point-in-time correctness for batch queries
  • Unified feature and prompt engineering

Supported Providers

  • cursor - Creates AGENTS.md for Cursor IDE
  • claude - Creates CLAUDE.md for Claude AI
  • copilot - Creates .github/copilot-instructions.md for GitHub Copilot
  • augment - Creates .augment/rules/chalk-guidelines.md for Augment

All providers receive the same comprehensive Chalk context, ensuring consistent quality assistance regardless of which AI tool you use.

Arguments

The AI assistant provider to configure. Supported values: cursor, claude, copilot, and augment.

$ chalk init agent-prompt cursor
Created AGENTS.md with Chalk-specific context for Cursor IDE
$ chalk init agent-prompt claude
Created CLAUDE.md with Chalk-specific context for Claude AI
$ chalk init agent-prompt copilot
Created .github/copilot-instructions.md with Chalk-specific context for GitHub Copilot
$ chalk init agent-prompt augment
Created .augment/rules/chalk-guidelines.md with Chalk-specific context for Augment

Use the project command to view and edit the current project.

In Chalk, each team has many projects, and each project has many environments. A project is defined by a folder with a chalk.yml or chalk.yaml file. The contents within that folder define the features and resolvers of the project. That code is deployed once per environment.

The project configuration file allows you to view and modify your project's settings.

Your current working directory (or any parent directory) must contain a chalk.yml or chalk.yaml file.

Flags

Opens the configuration file in your default editor.

$ chalk project
Name: Credit
Environment: default
Requirements: requirements.txt
Runtime: python311
Environment: staging
Requirements: requirements-staging.txt
Runtime: python311

Use the infra command to manage and update various infrastructure elements of your Chalk environment.

The infrastructure under management will change according to the active environment (determined by chalk environment) and active project (determined by chalk project).

Basic Operations

Use the apply command to deploy your feature pipelines.

Chalk projects have a configuration file in the root of the project named chalk.yml or chalk.yaml. Typically, chalk apply is run from the project root, but it can also be run from any child directory of the project.

The deploy is composed of four steps:

  • Local validation: The Chalk CLI uses chalkpy to check for errors in any Chalk resolvers (for example, resolvers that take incompatible inputs and outputs).
  • Remote validation: Once local validation passes, Chalk sends the features and resolvers to the server to validate against the deployment.
  • Confirmation: Unless --force is specified, you'll see a diff of the features and be asked to confirm before deployment.
  • Code upload: The Chalk CLI bundles the code in the project root and sends it to Chalk's servers. Chalk does not upload files matched in your .gitignore or .chalkignore.

Branch deploys

By default, the chalk apply command deploys your features and resolvers to a production serving environment. These deployments roll out gradually over the course of ~1 minute to eliminate downtime.

However, you may wish to iterate more quickly on your feature pipelines. For development, you can use the --branch flag to deploy to a named and ephemeral environment. Branch deployments are optimized for iterating on your feature and resolver definitions and deploy in ~5 seconds.

Once you've deployed to a branch, you can query new features and resolvers in the branch:

> chalk apply --branch feat1
> chalk query --in user.name --out user --branch feat1

Branch deployments also allow you to create or modify features and resolvers from a Jupyter notebook, in real time. You can also use the --reset flag to deploy a clean version of the working directory to a branch, resetting any changes that may have been made from a notebook.

> chalk apply --branch feat1 --reset

File watches

We can make the iteration loop even tighter by adding the --watch flag to the chalk apply --branch command. This will watch for changes to your feature and resolver definitions and deploy them to the branch automatically.

> chalk apply --branch --watch
Flags

If true, skip confirmation before creating a new deployment.

If true, wait for the deployment to go live before exiting. Otherwise, exit and print a deployment URL that tracks the deployment progress.

If true, the deployment will not execute. The diff will be printed, and validation errors will be reported.

Watch the current files for changes. Must also supply --branch to run with --watch.

Resets the given branch to the state of the working directory by removing any features or resolvers that were created/updated in a notebook. Must also supply --branch.

Add a tag to the deployment: for example, v0.1.4.

Auto-target the inactive tag in blue/green deployments (only available when blue/green is configured).

Don't validate that the requirements.txt contains relevant SQL Source extras.

Skip LSP diagnostic and validation errors, applying even if there are errors.

Dump the chalk apply payload.

$ chalk apply
✓ Found resolvers
✓ Successfully validated features and resolvers!
✓ Checked against live resolvers
Added Resolvers
Name ← →
────────────────────────────────────────────────────────────
+ features.underwriting.get_user_details_44 1 1
Added Features
Name Cache? Owner
────────────────────────────────────────────────────────────
+ example_user.id
+ example_user.name
+ example_user.fraud_score 30m fraud@company.com
Would you like to deploy? [y/n]

Use the query command to quickly test your features pipelines.

Chalk supports several API clients for production use. But in development, it is convenient to quickly pull feature values without using an API client or writing code. The chalk query command allows you to pull features and test deployments.

You can even request entire feature classes by asking for the feature namespace. For example,

$ chalk query --in user.id=4 --out user

will return all of the features on user. It will not, however, return all the features of all the has-one relationships of user.

If you do want to return has-one features, you can specify them in the query:

$ chalk query --in user.id=4 \
			 --out user \
			 --out user.card

In the above example, the Chalk CLI will return all the scalar features of user and of user.card.

If you want to query for features from multiple namespaces, you can use a . to indicate an absolute rather than a relative path. By default we will assume that input and output names are relative to the first namespace that we observe in a query, but you can specify different root namespaces:

$ chalk query --in user.id=4 \
              --in .organization.id=5 \
			  --out user.name \
			  --out .organization.name

The "Hit?" column that the command line displays indicates whether the feature was pulled from the online store or not. If it was pulled from the online store, it will be marked with a checkmark (✓).

The link printed out in the command output will take you to a page that shows how the query was executed.

Flags

Known feature value. Prefix the feature with . to disable namespace inference in --out. Use @file(path) to read the value from a file. Examples: --in user.id=1232, --in .user.id=1232, --in user.data=@file(data.json)

Treat all input values as raw strings, preventing automatic JSON deserialization. When set, values like '{"foo":"bar"}' will be sent as quoted strings instead of JSON objects.
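The two behaviors can be sketched in Python (an illustration only, not Chalk's actual parsing code):

```python
import json

value = '{"foo":"bar"}'

# Default behavior: the value parses as JSON, so the server receives an object.
as_json = json.loads(value)

# With string inputs forced, the same value is sent as a literal string.
as_string = value

print(type(as_json).__name__)    # dict
print(type(as_string).__name__)  # str
```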

Feature or feature namespace to compute. When a feature namespace is not specified, the namespace will be inferred from --in if possible. Examples: --out user.name, --out user, --out user.organization.name, --out name

Specify a max-staleness for a feature. Example: --staleness user.some_feature=5m

Deploy the code to a branch before running the query.

For applications where the underlying data is changing frequently (as in streaming applications), it can be helpful to poll the same query. When --repeat is specified, the Chalk CLI will re-run your query once per period.

The deployment to query. This flag is useful when you wish to query code that was deployed with --no-promote.

Associate a query name with this request.

Optional. Overrides 'now' in resolver execution within the query. Pass as an ISO8601 instant, e.g.: '2023-01-01T09:30:00Z'

--tag <tag>
string[]

Tags to use when requesting these features.

Persistently store all intermediate computed values for debugging. This dramatically decreases performance.

Explain this query

Run a benchmark on the query. Specify parameters: --benchmark --qps=8000 --duration=10s

Enable tracing of your query results.

Queries per second. For use only when --benchmark is set

An immutable context that can be accessed from Python resolvers.

Include a planner option. Repeat this flag to provide many options. Example: --planner-option my_option=my_value

Duration to benchmark in seconds. For use only when --benchmark is set

File to which to save the output of a benchmark. For use only when --benchmark is set

Duration to warm up the benchmark in seconds. For use only when --benchmark is set

Use online query with multiple inputs per feature and has-many outputs. With --bulk, you can pass --in multiple times for the same feature to, for example, pass a list of ids.

Correlation ID to associate with the query request for tracking purposes.

StatsD host to send query metrics to. Both --statsd-host and --statsd-port must be set to enable metric emission.

StatsD port to send query metrics to. Both --statsd-host and --statsd-port must be set to enable metric emission.

$ chalk query --in user.id=1
Using '--out=user'
Results
https://chalk.ai/environments/dmo2ad5trrq3/query-runs/cmdsb2pmp01bj0vavq2f4zvew
Name Hit? Value
─────────────────────────────────────────────────────────
user.count_transfers[1d] 2
user.count_transfers[7d] 4
user.count_transfers[30d] 7
user.credit_report_id 123
user.denylisted false
user.dob "1977-08-27"
user.domain_age_days 10200
user.domain_name "nasa.gov"
user.email "nicoleam@nasa.gov"
user.email_age_days 2680
user.email_username "nicoleam"
user.id 1
user.llm_stability ✓ "average"
user.llm_manual_review ✓ false
user.name "Nicole Mann"
user.name_email_match_score 75.0
user.total_spend 5239.06

Use the lint command to check for errors in your feature pipelines.

The lint command is composed of two steps:

  • Local validation: The Chalk CLI uses chalkpy to check for errors in any Chalk resolvers (for example, resolvers that take incompatible inputs and outputs).
  • Remote validation: Once local validation passes, Chalk sends the features and resolvers to the server to validate against the deployment.
Flags

Only perform local validation. Do not call to Chalk for remote validation.

Only perform LSP validation; calls remote server.

$ chalk lint
✓ Found resolvers
Error[153]: SQL file resolver references an output feature
'user.address' which does not exist.
The closest options are 'user.denylisted', 'user.name',
'user.total_spend', 'user.domain_age_days', and 'user.count_transfers'.
--> src/users.chalk.sql:8:4
4 | id,
5 | email,
6 | dob,
7 | name,
8 | address
| ^^^^^^^ output feature not recognized

Use the delete command to remove features for specific primary keys.

This command is irreversible, and will drop all data for the given features and primary keys in the online and offline stores.

You can also drop data by tag, which can be helpful to meet GDPR requirements. For example, if you tag PII features with pii, you can run

$ chalk delete --tags pii --keys=user2342
Flags

The namespace of the features to be deleted. If not provided, features are expected to be namespaced (e.g. user.name).

The features to be deleted.

The tags to be deleted.

Primary keys to be deleted.

If true, skip confirmation before deleting features.

$ chalk delete --keys=1,2,3 --features user.name,email,age
Are you sure you want to delete these features? [y/n]
Successfully deleted features

Use the migrate-type command to migrate a feature to a new type.

This increments the internal feature version number of a feature. Data is not deleted, but it will no longer be queryable. This operation is not reversible.

Flags

The features to be migrated.

The namespace of the features to be migrated. If not provided, features are expected to be namespaced (e.g. user.name).

If true, skip confirmation before migrating features.

$ chalk migrate-type --features user.name,email,age
The following features will be migrated:
- user.name
- user.email
- user.age
Are you sure you want to migrate all instances of the features? [y/n]
Successfully migrated features

Use the drop-version command to drop all version data for a given set of features.

This command is irreversible, and will drop all version data for the given features. If you no longer want to have versions for a feature, you can drop all previous version data to simplify future deployments.

Flags

The namespace of the features whose version data will be dropped. If not provided, features are expected to be namespaced (e.g. user.name).

The features whose version data will be dropped.

$ chalk drop-version --features user.name,email,age
Feature version data will be dropped for the following 3 features:
- user.name
- user.email
- user.age
Are you sure you want to drop all version data for these features? [y/n]
✓ Successfully dropped version data

Batch Jobs

Use the trigger command to run resolvers from the CLI.

In addition to scheduling resolver executions, Chalk allows you to trigger resolver executions from the CLI. The trigger endpoint, also supported in Chalk's API clients, allows you to build custom integrations with other data orchestration tools like Airflow.

If you trigger a resolver that takes arguments, Chalk will sample the latest temporally-consistent values of its input features. Then, it will execute the resolver for each of those arguments.

Flags

The resolver to trigger.

The Chalk deployment id to trigger.

Persist data to the online store, if features are eligible.

Persist data to the offline store.

Ensure that only one job will be kicked off per uniquely provided key.

$ chalk trigger --resolver my.module.fn
ID: j-2qtwuxpskm2pbg
Status: Received
URL: https://chalk.ai/runs/j-2qtwuxpskm2pbg

Use the incremental status command to get the current progress state for an incremental resolver.

Specifically, this returns the timestamps used by the resolver to only process recent input data.

  • Max Ingested Timestamp: the latest timestamp found in the input data on the resolver's previous run.
  • Last Execution Timestamp: the most recent time at which this resolver was run. By default, incremental resolvers rely only on the max ingested timestamp.
Flags

Name of an incremental resolver.

Name of a scheduled query.

$ chalk incremental status --resolver my.module.fn
Resolver: my.module.fn
Environment: my_environment_id
Max Ingested Timestamp: 2023-01-01T09:30:00+00:00
Last Execution Timestamp: N/A

Use the incremental drop command to clear the current progress state for an incremental resolver.

Specifically, this erases the timestamps the resolver uses to only ingest recent data. The next time the resolver runs, it will ingest all historical data that is available.

Flags

Name of an incremental resolver.

Name of a scheduled query.

$ chalk incremental drop --resolver my.module.fn
Successfully cleared incremental progress state for resolver: my.module.fn

Use the incremental set command to set the current progress state for an incremental resolver.

Specifically, this configures the timestamps used by the resolver to only process recent input data.

  • max_ingested_ts represents the latest timestamp found in the input data on the resolver's previous run.
  • last_execution_ts represents the most recent time at which this resolver was run.

Both of these values must be given as ISO-8601 timestamps with a time zone specified.
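A time-zone-aware ISO-8601 timestamp of the required form can be produced in Python:

```python
from datetime import datetime, timezone

# Build an ISO-8601 timestamp with an explicit time zone,
# suitable for flags like --max_ingested_ts.
ts = datetime(2023, 1, 1, 9, 30, tzinfo=timezone.utc).isoformat()
print(ts)  # 2023-01-01T09:30:00+00:00
```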

Flags

Name of an incremental resolver.

Name of a scheduled query.

Latest timestamp found in ingested data (ISO 8601 Timestamp).

Timestamp of latest run of this resolver (ISO 8601 Timestamp).

$ chalk incremental set --resolver my.module.fn --max_ingested_ts "2023-01-01T09:30:00Z"
Successfully updated incremental progress state for resolver: my.module.fn

Use the aggregate backfill command to backfill a materialized window aggregation.

Flags

The name of the feature to backfill. Chalk may backfill many aggregations for a single feature.

An ISO8601 instant string to set the lower bound on the feature time above which to backfill.

An ISO8601 instant string to set the upper bound on the feature time below which to backfill.

The resolver to use for backfilling.

If set, the command will print the plan of the backfill without executing it. The output will show the expected number of tiles to be backfilled and anticipated storage needs.

If true, the backfill will execute the underlying SQL source to determine the exact number of rows that need to migrate.

Specify the resource group to run the query against.

$ chalk aggregate backfill --feature user.transaction_sum

Use the aggregate list command to list all materializations.

$ chalk aggregate list
Series Namespace Group Agg Bucket Retention Aggregation Dependent Features
────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
1 transaction user_id merchant_id amount 1d 30d sum user.txn_sum_by_merchant merchant.txn_sum_by_user
1 transaction user_id merchant_id amount 1d 30d count user.txn_count_by_merchant
2 transaction user_id amount 1d 30d sum user.txn_sum

Use the chalk jobs list command to list information about jobs in the queue for your environment.

Flags

Filter jobs by state (scheduled, running, completed, failed, canceled, not_ready)

Filter jobs by kind (async_offline_query)

Maximum number of jobs to return

Offset for pagination

$ chalk jobs list
$ chalk jobs list --state running
$ chalk jobs list --kind async_offline_query --limit 10

Tasks

Use the task run command to execute Python scripts or modules in your deployment.

'target' can be a file path (file.py), module name (my_module), or include a specific function (file.py::func, module::func) or class method (file.py::Class::method, module::Class::method).

Flags

Treat target as a module reference instead of a file path

Send the request to a given branch.

Use specific resource group

Launches script with ddprof profiling

Sets number of retry attempts. Defaults to 0.

--arg <arg>
string[]

Arguments to pass to the script (can be specified multiple times). Format: key=value for kwargs or plain values for positional args. Note: all arguments are passed as strings, so function parameters must have 'str' type hints.

Path to JSONL file where each line contains kwargs for a separate script task. Arguments preserve their JSON types (int, bool, etc.)

Skip the automatic chalk apply when running with --branch

Skip Python file validation checks

$ chalk task run my_script.py
$ chalk task run my_script.py::my_function
$ chalk task run my_module::MyClass::my_method
$ chalk task run my_script.py --branch feature-branch --watch
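Because --arg values always arrive as strings, a target function converts its own arguments. A hypothetical my_script.py::greet target might look like:

```python
# Hypothetical target for:
#   chalk task run my_script.py::greet --arg name=Ada --arg times=2
def greet(name: str, times: str = "1") -> str:
    # All kwargs arrive as strings, so convert explicitly where needed.
    return ", ".join(f"hello {name}" for _ in range(int(times)))

if __name__ == "__main__":
    print(greet("Ada", "2"))  # hello Ada, hello Ada
```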

Cancel a script task by its task ID.

$ chalk task cancel abc123

Retrieve information about a script task by its task ID.

$ chalk task get st-abc123-def456

Rerun a script task by its task ID. Creates a new task with the same configuration as the original.

$ chalk task rerun st-abc123-def456

Scheduled Resolvers

Datasets

Use the chalk dataset list command to view information about datasets that have been created in this environment.

Flags

Limit the number of datasets to retrieve

Search for datasets by name

Use the 'chalk dataset get' command to view detailed information about a specific dataset.

Use the 'chalk dataset rename' command to change the name of an existing dataset.

Use the 'chalk dataset download' command to download files for a specific dataset revision.

Flags

Directory to save downloaded files

Include givens files in download

Include trace files in download

Offline Queries

Dev Tools

Use the topic push command to push a message to one of your configured Kafka topics.

This command can be useful for testing streaming applications.

The value field accepts special template syntax to generate random values. The following template functions are available:

  • rand(): Generate a random float between 0 and 1.
  • rand(max): Generate a random float between 0 and max.
  • rand(min, max): Generate a random float between min and max, inclusive.
  • randint(): Generate a random integer between 0 and 1,000,000, inclusive.
  • randint(max): Generate a random integer between 0 and max, inclusive.
  • randint(min, max): Generate a random integer between min and max, inclusive.
  • randstr(length): Generate a random string of length length.
  • now(): Generate the current datetime in ISO 8601 format.

For example, we might push a message of the form:

--value '{"ip": "121.32.randint(255).randint(255)"}'

which would generate messages like {"ip": "121.32.43.232"}.
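A rough sketch of this expansion for the randint(max) form (illustrative only, not Chalk's implementation):

```python
import random
import re

def expand(template: str) -> str:
    # Replace each randint(max) call with a random integer in [0, max].
    return re.sub(
        r"randint\((\d+)\)",
        lambda m: str(random.randint(0, int(m.group(1)))),
        template,
    )

msg = expand('{"ip": "121.32.randint(255).randint(255)"}')
# msg is e.g. {"ip": "121.32.43.232"}
```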

Flags

Name of the topic to push to.

Value of the message to push to the topic.

Key of the message to push to the topic.

Name of the integration to use.

Format of the message. JSON or binary accepted.

Repeat sending the message. When --repeat is specified, the Chalk CLI will push your message every <duration> period.

$ chalk topic push --value '{"key": "value"}' --integration kafka --topic my-topic --key 1
✓ Asked Chalk to push message to topic

Use the chalk diff command to compare two branches or deployments in your Chalk environment. If no source is provided, the most recent main deployment will be the source. If a branch is provided, the most recent deployment will be used. This command will compare the different deployments using the diff command-line tool.

Arguments
<source>
string

The source branch or deployment ID to compare against. If not provided, the most recent main deployment will be used.

<target>
string

The target branch or deployment ID to compare with the source. If a branch is provided, the most recent deployment will be used.

Flags

Show unified diff (-u flag for diff/git diff)

$ chalk diff main new-branch
diff -r --color=always main/src/models.py new-branch/src/models.py
6c9
< old_feature: float
---
> new_feature: float
Only in new-branch/src: new_file.py

Use the chalk source command to download the source code of a Chalk deployment. This command will fetch the source code from the Chalk server and extract it to a temporary directory.

Arguments
<id>
string

The ID of the Chalk deployment to download the source code from. If not provided, the most recent main deployment will be used.

Flags

The directory to extract the source code to. If not provided, a temporary directory will be used.

$ chalk source

Use the chalk branch start command to start the branch server if it isn't already running. Both chalk apply --branch and chalk query --branch will automatically start the branch server if it isn't already running, so it isn't usually necessary to manually start the branch server.

$ chalk branch start
✓ Branch server is ready

Use the 'chalk shell' command to open an interactive shell session to a Kubernetes pod. This provides a debug terminal similar to 'kubectl exec -it'.

Flags

Kubernetes cluster name

Kubernetes namespace

Pod name

Container name (optional, uses first container if not specified)

Command to execute (default: ["/bin/sh"])

Log all streaming requests/responses to this file for debugging

$ chalk shell --cluster my-cluster --namespace default --pod my-pod-abc123
$ chalk shell -c my-cluster -n default -p my-pod-abc123 --container app
$ chalk shell -c my-cluster -n default -p my-pod-abc123 --command /bin/bash
Use the graph command to output feature and resolver metadata to a file.
Flags

Filepath to output features

The id of the deployment.

Output format for the graph

$ chalk graph --out fraud_features.bytes

Benchmark

Use the benchmark run command to kick off a run to benchmark queries. This command is in alpha.

Provide the desired input features, output values, QPS, and duration. The default QPS and warmup QPS are 1; the default duration and warmup duration are 1m.

Flags

Queries per second.

Duration to benchmark in seconds.

Known feature value. Examples: --in user.id=1232

Feature or feature namespace to compute. Examples: --out user.name, --out user, --out user.organization.name

The name of the named query to run with. Can provide instead of outputs.

The version of the named query to run with. Examples: --query-name my_named_query --query-name-version 1.1.0

Duration to warmup for benchmark in seconds.

Queries per second during the warmup for benchmarking.

Name of file containing inputs for benchmark

Whether to include percentiles over time in the result graph; the latency distribution and histogram are unaffected. Accepted values: 50, 95, 99, 999. If 999 is provided, p99.9 is included in the percentile calculations and in the graph. By default, the distribution is displayed across many percentiles (min through p99), but the graph only displays QPS over time. Example: chalk benchmark run --in feature_class.id --percentile 50 --percentile 99

Override the image of the benchmark machine in Kubernetes. Likely not useful unless advised by a Chalk engineer.

Memory request of the Kubernetes container running the benchmark. For valid inputs, see: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#resource-units-in-kubernetes

Memory limit of the Kubernetes container running the benchmark. For valid inputs, see: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#resource-units-in-kubernetes

CPU request of the Kubernetes container running the benchmark. For valid inputs, see: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#resource-units-in-kubernetes

CPU limit of the Kubernetes container running the benchmark. For valid inputs, see: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#resource-units-in-kubernetes

Memory request of the Kubernetes container running the benchmark warmup. For valid inputs, see: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#resource-units-in-kubernetes

Memory limit of the Kubernetes container running the benchmark warmup. For valid inputs, see: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#resource-units-in-kubernetes

CPU request of the Kubernetes container running the benchmark warmup. For valid inputs, see: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#resource-units-in-kubernetes

CPU limit of the Kubernetes container running the benchmark warmup. For valid inputs, see: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#resource-units-in-kubernetes

If true, skip confirmation before creating and running the benchmark.

$ chalk benchmark run --in user.id=1 --in user.id=2 --out user.email --qps 80 --duration 100s --warmup-qps 50 --warmup-duration 60s
$ chalk benchmark run --in-file query_abc_input_set_1.parquet --out user.email --percentile 99

Use the benchmark list-inputs command to view the available input file names for your run.

Provide a prefix to filter the input file names by.

Flags

The prefix of file(s) you would like to filter by.

$ chalk benchmark list-inputs
$ chalk benchmark list-inputs --prefix query_abc_inputs

Use the benchmark upload command to upload input files for use in benchmarking.

Provide the path to the file to upload. The file's name will be used as the input file in benchmark run. Include the file extension in the path. Currently, only the following file types are supported: parquet and json.

Flags

The local paths to the files to upload.

$ chalk benchmark upload --file-path path/to/input_file.parquet

Use the benchmark list-results command to view the available result file names from past benchmark runs.

Provide a prefix to filter the result file names by.

Flags

The prefix of file(s) you would like to filter by.

$ chalk benchmark list-results
$ chalk benchmark list-results --prefix 12345

Use the benchmark download command to download result files from benchmarking runs.

Provide the name of the file to download, including its file extension.

Flags

The name(s) of the file(s) to download.

$ chalk benchmark download --name 12345.tar.gz --name 67890.tar.gz

Traces

Retrieve a distributed trace from Chalk.

A trace represents the complete execution path of a request through the Chalk system, showing all operations and their timing information.

Flags

Operation ID to retrieve the trace for

Trace ID to retrieve the trace for

$ chalk trace get --operation-id abc123
$ chalk trace get --operation-id "my-operation-id"

List distributed traces from Chalk with optional filtering.

You can filter traces by time range, service name, and span name. By default, this command returns traces from the last hour. Use --all-pages to fetch all available results.

Flags

Start time for trace filtering (e.g., '2024-01-01T00:00:00Z' or relative like '24h')

End time for trace filtering (e.g., '2024-01-01T23:59:59Z')

Filter traces by service name

Filter traces by span name (operation name)

Maximum number of traces to return per page (default: 100, max: 1000)

Fetch all pages of results (by default only the first page is returned)

$ chalk trace list
$ chalk trace list --start-time 24h
$ chalk trace list --service-name my-service --span-name my-operation
$ chalk trace list --start-time "2024-01-01T00:00:00Z" --end-time "2024-01-01T23:59:59Z"
$ chalk trace list --limit 50 --all-pages

Commands for working with individual spans in traces

Queries

List recent query executions with optional filtering

Flags

Include latency information in the output

Filter queries with latency greater than specified milliseconds

Filter by query plan ID

Filter by meta query ID

Filter by meta query name

--id <id>
string

Filter by query ID (partial match)

Filter by branch name

Filter by agent ID

Filter by root namespace primary key

Time window to search (e.g., 1h, 24h, 7d)

Maximum number of results to return

Filter to only show queries with errors

Retrieve detailed information about a specific query run by its ID

Arguments

The ID of the query run to retrieve

Flags

Approximate timestamp for query lookup (RFC3339 format, e.g., 2024-01-15T10:30:00Z)

Codegen

Use the stubgen command to generate Python stubs using your feature types.

Flags

Watch the current files for changes and run stubgen when there are changes in the project.

The root directory of your project. By default, Chalk will find the root directory by looking for a chalk.yaml file.

$ chalk stubgen --watch
⡿ Waiting for changes...

This command generates features that can be used with the ChalkClient class in the chalkpy pip package.

Flags

Whether to include the dtype explicitly for each feature, e.g. feature(dtype=pa.int64())

Path of output file with the filename included. Creates the file if it does not exist.

$ chalk codegen python --out=output.py
Wrote features to file 'output.py'

This command generates Go structs that mirror the defined features and creates references to the available features. For details on using the resulting generated code, see the Go client.

Flags

Path of output file with the filename included. Creates the file if it does not exist.

Package name to use. If unspecified, we will guess the appropriate package based on neighboring files of the output file specified.

$ chalk codegen go --out=/codegen/features.go
✓ Wrote features to file '/codegen/features.go'

Java codegen generates classes that mirror the feature classes defined in Python and creates Feature objects for each available feature. If the package name cannot be inferred automatically, use the --package flag to specify it explicitly. For details on using the resulting generated code, see the Java client.

Flags

Output directory to dump generated Java class files.

Java package name that the generated code should use. If not specified, we will infer from existing files in the output directory. If that fails, we will concatenate folder names in the path after src/main/java.

Whether feature names are 'snake' or 'camel' case.

$ chalk codegen java --out=/java_project/src/main/java/codegen/
✓ Wrote features to folder '/java_project/src/main/java/codegen/'

TypeScript codegen generates TypeScript types that mirror the defined features. It also generates a type, FeaturesType, that can be used to parameterize the Chalk TypeScript client, providing automatic autocomplete and type checking for queries.

Flags

Path of output file with the filename included. Creates the file if it does not exist.

Maximum depth of the feature path to generate types for: e.g. depth 4 => x.y.z.feature_1

$ chalk codegen typescript --out=/ts_project/src/codegen/features.ts
✓ Wrote features to file '/ts_project/src/codegen/features.ts'

This command generates Pydantic models that describe streaming messages and structs, based on the proto files given with --in.

Flags

Output filepath to dump the pydantic models. The path should include the filename, and the file will be created if it does not exist.

--in <in>
string

Path to the input proto file or directory of proto files.

$ chalk codegen pydantic --in proto/ --out=output.py
✓ Wrote Pydantic models to file 'output.py'

Information

Show the resolvers you have defined and the features they resolve

Get access tokens and manage service tokens.

List all available permissions that can be assigned to service tokens.

Use the user command to print the currently logged-in user.

Useful for checking that your CLI can communicate with Chalk.

$ chalk user
User: cl0wpcey0770609l56lazvc52
Environment: 900nw89h0wz0613l57la8v958
Team: 131cpe52r000009l0662k3m11

Print the build, platform, and version information for the Chalk CLI tool.

Flags

Output only the tag, and no other information

Check the current version against the latest version available.

$ chalk version
Version: v0.9.5
Platform:
Hash: a9297a32e5d2e6507f27d2ea98b831fbcb775e21
Build Time: 2023-01-23T21:19:53+00:00
$ chalk version --tag-only
v0.9.5

Print out Chalk's public changelog in an interactive viewer

$ chalk changelog
# Changelog
Improvements to Chalk are published here!
See our [public roadmap](https://github.com/orgs/chalk-ai/projects/1) for upcoming changes.
---
## January 26, 2023
### SQL File Resolvers
SQL-integrated resolvers can be completely written in SQL files: no Python required!
...

List Chalk releases from GitHub with their versions and release notes

Flags

Component to filter releases for (e.g., 'all')

$ chalk releases list

Use the chalk branch source command to download the source code for a branch.

Flags

The branch to download the source code from.

Download the source code of the deployment with this ID. If not specified, you will be asked to pick one.

Output folder where code should be output. If not specified, will use the deployment ID as the containing folder.

Use the latest branch deployment instead of prompting for a deployment ID.

$ chalk branch source
✓ Fetched branches
Which branch would you like to download?: elliot-test
Which deployment id would you like to download?: clkhcspz1000201or1uw21rrz
✓ Fetched download link
✓ Downloaded source
✓ Extracted source to clkhcspz1000201or1uw21rrz

Use the chalk branch list command to view information about branches that have been deployed in this environment.

Branches can be created with chalk apply --branch.

$ chalk branch list
✓ Fetched branches
Name Deploys Last
─────────────────────────────────────
elliot-test 3 2h ago
testing 33 3w ago
gabs 54 3w ago
test-mc 6 1mo ago
test_credit_score 1 1mo ago

Use the healthcheck command to view information on the health of Chalk's API server and its services.

The healthcheck results depend on the active environment (determined by chalk environment) and the active project (determined by chalk project).

$ chalk healthcheck
Name Status
───────────────────────────────────────────────────────
Metadata DB HEALTH_CHECK_STATUS_OK
gRPC Client Query HEALTH_CHECK_STATUS_OK
Logging Client HEALTH_CHECK_STATUS_OK
Metrics DB HEALTH_CHECK_STATUS_OK
Push Registry HEALTH_CHECK_STATUS_OK
Source Bucket HEALTH_CHECK_STATUS_OK

Search through Chalk logs using powerful filtering capabilities.

Searchable Fields:

  • resolver: Search by exact resolver name
  • query_name: Filter by specific query names
  • operation_id: Search using internal query IDs
  • correlation_id: Find logs by user-provided query IDs
  • trace_id: Filter by trace IDs
  • pod_name: Search logs from specific pods
  • component: Filter by Chalk components (engine, branch, offline-query)
  • message: Search through log message content
  • resource_group: Filter by Chalk resource groups
  • deployment: Search by Chalk deployment ID
  • app: Filter by Kubernetes deployment or statefulset
  • all_filter: Search across multiple fields simultaneously
  • resolver_filter: Find logs by similar resolver names

If the value you're filtering by contains a space or a special character, such as a period, enclose it in double quotes.

Flags

Query to search logs

Use aggregated search to get time series data

Start time for filtering logs (e.g., '1h ago', '2024-01-01T00:00:00Z')

End time for filtering logs (e.g., 'now', '2024-01-01T01:00:00Z')

Window period for aggregation buckets (e.g., '5m', '1h')

Continuously stream new logs

$ chalk logs --query "resolver:user_features"
$ chalk logs --query "component:engine message:error"
$ chalk logs --query "correlation_id:abc-123"
$ chalk logs --query "all_filter:user deployment:prod"
$ chalk logs --aggregate --query "component:engine" --start-time "2h ago" --window-period "10m"
$ chalk logs --follow --query "component:engine"
$ chalk logs -f --query "resolver:user_features"

Output metadata for features from the active deployment to a file. If no output flag is specified, the metadata will be written to 'chalk_features_{environment_id}_{deployment_id}.json' in the current directory.

Flags

Filepath to output features

$ chalk features --out fraud_features.json

Resolve metadata for specific features by their fully qualified names (FQNs). This command fetches metadata only for the features specified by the --fqns flag.

Flags

Fully qualified names of features to resolve

Filepath to output features

$ chalk features resolve --fqns user.age,user.email --out selected_features.json

Use the chalk pods command to display status of Kubernetes pods that support your Chalk deployment.

$ chalk pods

List all charts for your Chalk project and print their details in JSON format.

$ chalk charts list
{
"charts": [
{
"id": "chart-1",
"name": "My First Chart",
"description": "A sample chart"
}
]
}
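The JSON output lends itself to scripting. Below is a minimal sketch of extracting chart IDs and names, assuming the output shape shown above; the inlined sample stands in for piping the output of chalk charts list directly.

```shell
# Sketch: pull chart IDs and names out of 'chalk charts list' JSON.
# In practice you would pipe the CLI output; here the sample JSON
# from above is inlined so the snippet is self-contained.
python3 - <<'PY'
import json

raw = '{"charts": [{"id": "chart-1", "name": "My First Chart", "description": "A sample chart"}]}'
for chart in json.loads(raw)["charts"]:
    # Print one tab-separated row per chart.
    print(chart["id"], chart["name"], sep="\t")
PY
```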

Use the metrics export command to push a message to one of your configured Kafka topics.

$ chalk metrics export

Audit

List all audit logs for your environment.

$ chalk audit list

Errors

List recent query errors with optional filtering by operation ID, feature, resolver, or query name

Flags

Filter by operation ID

Filter by feature FQN

Filter by resolver FQN

Filter by query name

Time window to search (e.g., 1h, 24h, 7d)

Number of results per page

Fetch all pages of results (by default only the first page is returned)

List query errors aggregated by similar error patterns, showing counts and time ranges

Flags

Filter by operation ID

Filter by feature FQN

Filter by resolver FQN

Filter by query name

Time window to search (e.g., 1h, 24h, 7d)

Number of results per page

Fetch all pages of results (by default only the first page is returned)

Catalog

List all available catalogs that can be queried via ChalkSQL.

$ chalk catalog list
$ chalk catalog list --environment <env-id>

List available schemas and tables that can be queried via ChalkSQL.

Flags

Filter by catalog name

Filter by schema name pattern

Filter by table name pattern

Show tables for each schema

$ chalk catalog schemas
$ chalk catalog schemas --schema user%
$ chalk catalog schemas --table %features%

Containers

Use the 'chalk container run' command to create and run a new container as a Kubernetes pod.

The container will run in your Chalk environment's Kubernetes cluster with the specified image and configuration.

Flags

Docker image to run

Unique name for the container

Entrypoint command and arguments (comma-separated)

User-defined tags as key=value pairs (comma-separated)

How long the container should run before termination (e.g., '1h', '30m', '0' for infinite)

Optional cluster name (defaults to environment's cluster)

CPU limit (e.g., '2' or '500m')

Memory limit (e.g., '4Gi' or '512Mi')

$ chalk container run --image=nginx:latest --name=my-nginx
$ chalk container run -i python:3.9 -n my-script -e "python,script.py" --lifetime=1h
$ chalk container run -i postgres:14 -n my-db --cpu=2 --memory=4Gi --tags="env=test,version=1.0"

Use the 'chalk container get' command to retrieve the status of a specific container.

This will show detailed information about the container including its phase, image, tags, and creation time.

Flags

Name of the container to get

Optional cluster name (defaults to environment's cluster)

$ chalk container get --name=my-nginx
$ chalk container get -n my-container --cluster=my-cluster

Use the 'chalk container list' command to list all containers in your Chalk environment.

This will show a table of all containers with their status, image, phase, and other details.

Flags

Optional cluster name (defaults to environment's cluster)

$ chalk container list
$ chalk container list --cluster=my-cluster

Clusters

Use the 'chalk clusters list' command to view clusters available in this team.

Clusters define the Kubernetes infrastructure for your Chalk deployments.

$ chalk clusters list

Use the 'chalk clusters describe' command to view detailed information about a specific cluster.

Provide the cluster ID as an argument to retrieve its full configuration and metadata.

Arguments

The ID of the cluster to describe

$ chalk clusters describe clstr_abc123

Use the 'chalk clusters test' command to verify that Chalk can connect to a cluster.

This command tests the connection to an existing cluster by ID and reports success or failure.

Arguments

The ID of the cluster to test

$ chalk clusters test clstr_abc123

Cloud Accounts

Cloud Storage

Integrations

Use the chalk integration list command to view the integrations that are available in this environment.

Integrations can be deleted with chalk integration delete.

Flags

Decrypt the integration secrets.

$ chalk integration list --decrypt
✓ Fetched integrations
Name Kind Secrets Updated
────────────────────────────────────────────────────────────────
pgdemo POSTGRESQL PGDATABASE, PGHOST, PGPASSWORD 1mo ago
bqprod BIGQUERY CREDENTIALS_JSON 2mo ago

Use the 'chalk integration get' command to retrieve a specific integration by name or ID.

You can either specify the integration name/ID with the '--name' flag or select from a list.

Flags

Name or ID of the integration to get.

Decrypt the integration secrets.

$ chalk integration get --name my-integration --decrypt
✓ Fetched integration
Name: my-integration
Kind: POSTGRESQL
Secrets:
PGHOST: localhost
PGPORT: 5432
Updated: 1mo ago

Use the chalk integration insert command to create a new integration.

This command allows you to create integrations for various data sources like PostgreSQL, Snowflake, BigQuery, etc. You'll be prompted to provide the necessary environment variables for the integration type you select.

Interactive Mode (recommended)

If no arguments are provided, chalk integration insert will enter interactive mode, which will prompt you for the integration name, type, and required environment variables.

chalk integration insert

With Flags

You can also provide the integration name and kind as flags:

chalk integration insert --name my-postgres --kind postgresql

Available integration kinds:

  • athena
  • aws
  • bigquery
  • clickhouse
  • cohere
  • databricks
  • dynamodb
  • gcp
  • kafka
  • kinesis
  • mysql
  • openai
  • postgresql
  • pubsub
  • redshift
  • snowflake
  • spanner
  • trino
Flags

Name of the integration.

Kind of integration (e.g., postgresql, snowflake, bigquery).

$ chalk integration insert --name my-postgres --kind postgresql
✓ Integration created

Use the chalk integration delete command to delete an integration.

You can either specify the integration name or ID with the --name flag, select an integration from a list, or specify the integration names/IDs as arguments.

Flags

Names or IDs of the integrations to delete.

$ chalk integration delete --name my-integration
✓ Deleted integration

Webhooks

Use the 'chalk webhook list' command to view all webhooks configured in the current environment.

Webhooks can be created with 'chalk webhook create', updated with 'chalk webhook update', and deleted with 'chalk webhook delete'.

$ chalk webhook list
✓ Fetched webhooks
Name URL Subscriptions Updated
──────────────────────────────────────────────────────────────────────────
query-monitor https://hooks.example.com query.run 1mo ago
resolver-hook https://hooks.internal.net resolver.run 2w ago

Use the 'chalk webhook create' command to create a new webhook that subscribes to Chalk events.

You must provide:

  • name: A descriptive name for the webhook
  • url: The endpoint URL where events will be sent
  • subscriptions: Event types to subscribe to (e.g., query.run, resolver.run)

Optional:

  • secret: A secret string for webhook signature verification
  • headers: Custom HTTP headers as a JSON object
Flags

Name of the webhook.

URL endpoint for the webhook.

Event types to subscribe to (e.g., query.run, resolver.run).

Secret for webhook signature verification.

Custom headers as JSON object (e.g., '{"Authorization": "Bearer token"}').

$ chalk webhook create --name query-monitor --url https://hooks.example.com --subscriptions query.run,resolver.run
✓ Created webhook
$ chalk webhook create --name auth-webhook --url https://internal.example.com/hooks --subscriptions query.run --secret my-secret --headers '{"Authorization": "Bearer token123"}'
✓ Created webhook
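Because the --headers value must be a valid JSON object, a shell quoting mistake will cause the create call to fail. One way to sanity-check the payload before passing it to the CLI (the header names and values here are illustrative):

```shell
# Validate a headers payload before passing it to --headers.
# json.load raises an error (non-zero exit) if the string is not valid JSON.
HEADERS='{"Authorization": "Bearer token123", "X-Request-Source": "ci"}'
echo "$HEADERS" | python3 -c 'import json, sys; json.load(sys.stdin); print("valid JSON")'
```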

Use the 'chalk webhook update' command to update an existing webhook.

You must provide the webhook ID, or select from a list if not provided.

All other fields are optional and will only be updated if provided:

  • name: New name for the webhook
  • url: New endpoint URL
  • subscriptions: New event types to subscribe to
  • secret: New secret for signature verification
  • headers: New custom HTTP headers as a JSON object
Flags
--id <id>
string

ID of the webhook.

Name of the webhook.

URL endpoint for the webhook.

Event types to subscribe to (e.g., query.run, resolver.run).

Secret for webhook signature verification.

Custom headers as JSON object (e.g., '{"Authorization": "Bearer token"}').

$ chalk webhook update --id abc123 --name updated-webhook --url https://new-url.example.com
✓ Updated webhook
$ chalk webhook update --id abc123 --subscriptions query.run,resolver.run,feature.computed
✓ Updated webhook

Use the 'chalk webhook delete' command to delete a webhook.

You can specify the webhook ID with the '--id' flag, or select from a list if not provided.

Flags
--id <id>
string

ID of the webhook.

$ chalk webhook delete --id abc123
✓ Deleted webhook
$ chalk webhook delete
# Presents an interactive list to select from

Secrets

Use the chalk secret list command to view the secrets that are available in this environment.

Secrets can be deleted with chalk secret delete.

Flags

Decrypt the secrets.

$ chalk secret list --decrypt
✓ Fetched secrets
Name Value Integration Updated
────────────────────────────────────────────────────
PGDATABASE abc pgdemo 1mo ago
PGHOST 44.444.444.444 pgdemo 1mo ago
PGPASSWORD abvdg$Qabw3-m!zP pgdemo 1mo ago
PGPORT 5432 pgdemo 2mo ago
PGUSER developer pgdemo 1mo ago
API_KEY PRZ2fyw3ynf.cvm! 1mo ago

Use the 'chalk secret get' command to retrieve a specific secret by name.

You can either specify the secret name with the '--name' flag or select from a list.

Flags

Name of the secret to get.

Decrypt the secrets.

$ chalk secret get --name my-secret --decrypt
✓ Fetched secret
Name: my-secret
Value: abc123
Updated: 1mo ago

Use the chalk secret set command to upsert secrets. This command can be used to set one or more secrets at once.

Interactive Mode (recommended)

If no arguments are provided, chalk secret set will enter interactive mode, which will prompt you for the secret name and value. You can also provide only the secret name as an argument, and the CLI will prompt you for the value.

chalk secret set

stdin (recommended)

Using stdin is helpful for setting secrets whose value is the contents of a file:

cat key.pem | chalk secrets set TLS_CERT

You can also use stdin to set the value of a secret to the output of a command or script:

base64 -i chalk.p12 | chalk secrets set PKCS12_CERT

Key-Value Pairs

You can also provide key-value pairs as arguments to chalk secret set.

chalk secrets set HOSTNAME=73.62.143.151
chalk secrets set API_KEY=0x9Xz4#3 PORT=1234

Note that using key-value pairs will cause the secret value to be stored in your shell history. To avoid this, use interactive mode, stdin, or the web dashboard. Or, follow the instructions below to exclude chalk secrets set commands from being stored in your shell history.

Exclude from Shell History

Bash and ZSH will ignore chalk secrets set commands in your shell history if you set the HISTIGNORE or HISTORY_IGNORE environment variables, respectively.

For Bash, add the following to your ~/.bashrc file:

export HISTIGNORE='*chalk secrets set*'

For ZSH, add the following to your ~/.zshrc file:

HISTORY_IGNORE="(chalk secrets set*)"
$ chalk secrets set HOSTNAME=73.62.143.151
✓ Secrets saved

Use the chalk secret delete command to delete a secret.

You can either specify the secret name with the --name flag, select a secret from a list, or specify the secret names as arguments.

Flags

Names of the secrets to delete.

$ chalk secret delete --name my-secret
✓ Deleted secret

Traffic

Use this command to get a list of tags and their traffic weights.

$ chalk traffic get
tags:
- deployment_id: cm9n46thy000bwe6y784y4j0l
tag: blue
weight: 50
mirror_weight: 10
- deployment_id: cm9n3vx7c0004we6yefnsw8kg
tag: green
weight: 50
mirror_weight: 0

Use this command to set the traffic weights and mirror weights for tags. Traffic weights must sum to 100.

Flags

Weight of tags by name, in the format blue=10,green=90.

Mirror weight of tags by name, in the format blue=10,green=5.

$ chalk traffic set --tags blue=10,green=90 --mirror blue=5
tags:
- deployment_id: cm9n46thy000bwe6y784y4j0l
tag: blue
weight: 10
mirror_weight: 5
- deployment_id: cm9n3vx7c0004we6yefnsw8kg
tag: green
weight: 90
mirror_weight: 0
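Since traffic weights must sum to 100, it can help to check an assignment before applying it. A minimal sketch using standard shell tools (the tag names and weights are illustrative):

```shell
# Verify that the weights in a --tags assignment sum to 100
# before running 'chalk traffic set'.
TAGS="blue=10,green=90"
# Split on commas, take the value after '=', and sum.
total=$(printf '%s' "$TAGS" | tr ',' '\n' | cut -d= -f2 | awk '{s += $1} END {print s}')
if [ "$total" -eq 100 ]; then
  echo "weights sum to 100"
else
  echo "weights sum to $total, expected 100" >&2
  exit 1
fi
```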

Use this command to promote the lowest weighted blue or green tag to 100% traffic serving.

$ chalk traffic promote
tags:
- deployment_id: cm9n46thy000bwe6y784y4j0l
tag: blue
weight: 100
- deployment_id: cm9n3vx7c0004we6yefnsw8kg
tag: green
weight: 0

Use this command to set mirror traffic weights for tags. This is useful for testing new features by mirroring a percentage of traffic without affecting main traffic routing.

Flags

Mirror weight of tags by name, in the format blue=10,green=5.

$ chalk traffic mirror --mirror blue=15,green=5
tags:
- deployment_id: cm9n46thy000bwe6y784y4j0l
tag: blue
mirror_weight: 15
- deployment_id: cm9n3vx7c0004we6yefnsw8kg
tag: green
mirror_weight: 5

Use this command to suspend traffic for a specific deployment tag. This sets the traffic weight to nil, which is different from setting it to 0.

Flags

The deployment tag to suspend traffic for.

$ chalk traffic suspend --deployment-tag blue
tags:
- deployment_id: cm9n46thy000bwe6y784y4j0l
tag: blue
weight: null
- deployment_id: cm9n3vx7c0004we6yefnsw8kg
tag: green
weight: 100

Support

Use the flare command to send your code to a partner at Chalk for assistance.

$ chalk flare
Code uploaded successfully

Use the flare list command to list your flares.

$ chalk flare list

Use the chalk flare download command to download a flare by its ID and extract it to a directory. If no ID is provided, an interactive table will be shown for selection.

Flags

The ID of the flare to download. If not provided, shows an interactive table to select from.

Output directory to extract the flare to

$ chalk flare download
$ chalk flare download --id abc123
$ chalk flare download --id def456 --output ./flares/
$ chalk flare download --id xyz789 -o /tmp/flare-download/

Submit feedback to the Chalk team.

$ chalk feedback
_ _ _
| | | | |
___| |__ __ _| | | __ _____
/ __| '_ \ / _` | | |/ / | |
| (__| | | | (_| | | < | |
\___|_| |_|\__,_|_|_|\_\ |_____|
We love hearing your feedback and suggestions!
Feel free to add issues to our public roadmap: https://github.com/chalk-ai/roadmap/

Use the usage export command to retrieve and export Chalk usage data based on specified criteria.

The usage export command allows you to fetch usage data from Chalk and save it as a CSV file for external analysis. You can group the data by 'cluster' or 'instance' and specify the reporting period as either 'daily' or 'monthly'. Additionally, you can set custom date ranges to filter the exported data based on start and end dates.

Flags

Grouping for the usage chart, either cluster or instance.

The period for the usage chart. Expects 'daily' or 'monthly'.

Start time for the usage chart in ISO 8601 format. By default, 180 days ago.

End time for the usage chart in ISO 8601 format. By default, now.

Optionally save the usage chart to a file. If not specified, this command will print the chart to the terminal.

$ chalk usage export --period monthly
Date Credits Label
─────────────────────────────────────────────────────────────────
2024-05-17 00:00:00 UTC 8925.98 sandbox-eks-cluster
2024-06-17 00:00:00 UTC 8283.77 sandbox-eks-cluster
2024-07-17 00:00:00 UTC 3769.53 sandbox-eks-cluster
Get instance usage data
Flags

Output format for the instances command. Either 'pretty' or 'csv'.

$ chalk usage instances
Get the billing utilization rates for different machine types
$ chalk usage rates
Cloud Machine vCPUs Memory (gb) Credits/hr
──────────────────────────────────────────────────────────
GCP n2d-standard-4 4 16 0.85
GCP n2d-standard-8 8 32 1.7
GCP n2d-standard-16 16 64 3.4
GCP n2d-standard-32 32 128 6.8
Show credit bundles for your team
$ chalk usage bundles
Bundle ID Purchase Date Credits Price Remaining Expires On
─────────────────────────────────────────────────────────────────────────────
bundle_1000 2024-01-15 1000 $100.00 750 2024-07-15
bundle_5000 2024-02-20 5000 $450.00 2300 2024-08-20

Additional Commands

Open the Chalk dashboard in your web browser.

Flags

If true, print the URL instead of opening the browser.

$ chalk dashboard
Opening https://chalk.ai/projects

The chalk doctor command removes all cached JWTs, forcing a fresh credential exchange on the next request. This can be helpful if your ~/.config/chalk.yml file has changed or become corrupted.

Update the Chalk CLI tool. You can always switch back to your old build, stored in ~/.chalk/bin/.

Flags

The version of the CLI to install. By default, the latest version is installed. Otherwise, the version should match a format like v1.12.4.

$ chalk update
Installing Chalk...
Downloading binary... Done!
Version: v1.12.4
Platform:
Hash: 01e05fcb93cfe81fcfb6a871e27f46299d536740
Build Time: 2023-02-08T07:28:55+00:00
Chalk was installed successfully to /Users/emarx/.chalk/bin/chalk-v1.12.4
Run 'chalk --help' to get started

Use the files command to see which of your files will be uploaded to Chalk.

Chalk respects .chalkignore and .gitignore files and only uploads files that don't match patterns in those files.

$ chalk files
/home/user/project_dir/.chalkignore
/home/user/project_dir/.gitignore
/home/user/project_dir/resolvers.py
/home/user/project_dir/features.py