# Convergence Extensions

Convergence Extensions are custom workflows that can be added to the convergence of any Service. Convergence Extensions can be defined globally, where they may be reused by many Services across many Applications, or inlined in Service configs.
## Configuring Convergence Extensions

### Configuring Inlined Convergence Extensions
Convergence Extensions can be defined inline in Service configs:

```yaml
service:
  name: my-service
  application: my-application
  convergenceExtensions:
    - inlined:
        name: my-extension
        kubernetesConfig:
          type: KUBERNETES
          local:
            path: path-to-job-yaml
      lifecycle: POST_APPROVAL
  ... # the rest of service configuration here, such as kubernetes/program configs
```

The Kubernetes Job definition referenced by `path-to-job-yaml`:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  generateName: my-prefix-
spec:
  ttlSecondsAfterFinished: 600
  template:
    spec:
      containers:
        - name: my-container
          image: my-image
          command: ["command", ...]
      restartPolicy: Never
  backoffLimit: 0
```
In the example above, in each Release Channel, before deploying the Service but after any requested approvals have been submitted, Prodvana will run a Kubernetes Job using the image `my-image`, with the command `command ...`, in the same runtime and namespace as the Release Channel. For Services using Runtime Extensions, the Release Channel must be configured with both a custom runtime and a Kubernetes runtime, and Prodvana will run the Convergence Extension inside the Kubernetes runtime.

Apply the config with `pvnctl configs apply`:

```shell
pvnctl configs apply my-service.pvn.yaml
```

For more information about configuring services, see Configuring Services.
### (Coming Soon) Configuring Global Convergence Extensions

## Valid Lifecycles
Convergence Extensions run in a lifecycle of the Service Instance being deployed in a Release Channel. When they fail, they prevent the Service Instance in that Release Channel from proceeding to the next lifecycle; whether and how a failed extension is retried is controlled by its retry policy (see Configuring Retry Policy). The following lifecycles are supported for Convergence Extensions:
- `PRE_APPROVAL` - runs after `releaseChannelStable` preconditions are satisfied.
- `POST_APPROVAL` - runs after approvals and `PRE_APPROVAL` extensions.
  - If there are no approvals configured on the Release Channel, `POST_APPROVAL` extensions will run immediately after `PRE_APPROVAL` extensions are satisfied.
- `POST_DEPLOYMENT` - runs after deployment is done, before marking the Service Instance as converged in the Release Channel.
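The ordering rules above can be sketched as a simple gate check (illustrative only, not Prodvana's implementation; `APPROVAL` and `DEPLOYMENT` stand in for the approval and deploy phases):

```python
# Illustrative model of when each Convergence Extension lifecycle may run.
# A sketch of the rules described above, not Prodvana's implementation.

def runnable(lifecycle: str, completed: set[str], has_approvals: bool = True) -> bool:
    """Return True once every phase that must precede `lifecycle` has completed."""
    approval = {"APPROVAL"} if has_approvals else set()
    required = {
        "PRE_APPROVAL": set(),  # gated only by releaseChannelStable preconditions
        "POST_APPROVAL": {"PRE_APPROVAL"} | approval,
        "POST_DEPLOYMENT": {"PRE_APPROVAL", "POST_APPROVAL", "DEPLOYMENT"} | approval,
    }
    return required[lifecycle] <= completed
```

With no approvals configured, `POST_APPROVAL` becomes runnable as soon as `PRE_APPROVAL` completes, matching the rule above.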
## Injected Environment Variables

The following variables are injected into Convergence Extension jobs and can be used in the logic of the job.

| Variable | What |
|---|---|
| `PVN_DESIRED_STATE_ID` | The desired state ID of the convergence; useful to determine, for example, what the starting and desired states for the convergence are. |
| `PVN_APISERVER_ADDR` | The address to connect to the Prodvana API. |
| `PVN_TOKEN` | Temporary token which can be used to interact with the Prodvana API. |
See our API documentation and examples for how to query Prodvana and achieve complex logic. The Python library will automatically connect to your instance of Prodvana with a temporary API token generated specifically for the Convergence Extension run.
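As a minimal sketch, a job entrypoint can read the injected variables like this (the helper name is illustrative and not part of the Prodvana library; use the Python library or API documentation for real client setup):

```python
import os

def prodvana_connection_info() -> dict:
    """Read the variables Prodvana injects into a Convergence Extension job.

    `prodvana_connection_info` is an illustrative helper, not a Prodvana API.
    """
    return {
        "desired_state_id": os.environ["PVN_DESIRED_STATE_ID"],
        "apiserver_addr": os.environ["PVN_APISERVER_ADDR"],
        "token": os.environ["PVN_TOKEN"],  # temporary, scoped to this run
    }
```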
## Common Examples

### Using Prodvana Backend-Agnostic Configurations

To avoid writing a Kubernetes config, you can use Prodvana's backend-agnostic configuration instead, which takes a Docker image as its interface.
```yaml
service:
  name: my-service
  application: my-application
  convergenceExtensions:
    - inlined:
        name: my-extension
        taskConfig:
          program:
            name: my-program-name
            imageTag: my-tag
            imageRegistryInfo:
              containerRegistry: my-registry-name
              imageRepository: my-repository
            cmd: ['run-command-pre-deployment']
      lifecycle: POST_APPROVAL
  ... # the rest of service configuration here, such as kubernetes/program configs
```
### Using the Same Image As the Service Itself

A common workflow is to run a setup command from the same commit/source as the code being deployed, before updating the Service. For example, this workflow can be used to run migrations.

To use the same image for a Convergence Extension that is being used for the Service itself, use parameters. Parameters are applied to the Convergence Extension at the same time, and in the same way, as any other part of the Service configuration.
```yaml
service:
  name: my-service
  application: my-application
  convergenceExtensions:
    - inlined:
        name: my-extension
        kubernetesConfig:
          type: KUBERNETES
          local:
            path: path-to-job-yaml
      lifecycle: POST_APPROVAL
  parameters:
    - name: image
      required: true
      dockerImage:
        imageRegistryInfo:
          containerRegistry: my-registry-name
          imageRepository: my-repository
  ... # the rest of service configuration here, such as kubernetes/program configs
```

The Job definition referenced by `path-to-job-yaml` uses the `image` parameter:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  generateName: my-prefix-
spec:
  ttlSecondsAfterFinished: 600
  template:
    spec:
      containers:
        - name: my-container
          image: '{{.Params.image}}'
          command: ["command", ...]
      restartPolicy: Never
  backoffLimit: 0
```
The same workflow with a backend-agnostic task config:

```yaml
service:
  name: my-service
  application: my-application
  convergenceExtensions:
    - inlined:
        name: my-extension
        taskConfig:
          program:
            name: my-program-name
            image: '{{.Params.image}}'
            cmd: ['run-command-pre-deployment']
      lifecycle: POST_APPROVAL
  parameters:
    - name: image
      required: true
      dockerImage:
        imageRegistryInfo:
          containerRegistry: my-registry-name
          imageRepository: my-repository
  ... # the rest of service configuration here, such as kubernetes/program configs
```
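Conceptually, the `{{.Params.image}}` placeholder is substituted at apply time, much like this simplified renderer (a sketch of Go-template-style substitution, not Prodvana's templating engine):

```python
import re

def render_params(template: str, params: dict) -> str:
    """Substitute Go-template-style {{.Params.name}} placeholders (sketch only)."""
    return re.sub(
        r"\{\{\s*\.Params\.(\w+)\s*\}\}",
        lambda m: params[m.group(1)],
        template,
    )
```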
### Defining Dependencies on Other Convergence Extensions

If you have two or more Convergence Extensions and one or more must be executed before the others, you can use the `dependencies` field to define this relationship:
```yaml
service:
  name: my-service
  application: my-application
  convergenceExtensions:
    - inlined:
        name: dependency
        taskConfig:
          program:
            name: my-program-name
            imageTag: my-tag
            imageRegistryInfo:
              containerRegistry: my-registry-name
              imageRepository: my-repository
            cmd: ['run-this-command-first']
      lifecycle: POST_APPROVAL
    - inlined:
        name: my-extension
        taskConfig:
          program:
            name: my-program-name
            imageTag: my-tag
            imageRegistryInfo:
              containerRegistry: my-registry-name
              imageRepository: my-repository
            cmd: ['run-this-command-second']
      dependencies:
        - name: dependency
      lifecycle: POST_APPROVAL
  ... # the rest of service configuration here, such as kubernetes/program configs
```
In the above example, the Convergence Extension named `dependency` with command `run-this-command-first` will execute first, and then the parent Convergence Extension named `my-extension` with command `run-this-command-second` will execute.
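Dependency resolution behaves like a topological sort; the example above can be sketched as follows (illustrative, not Prodvana's scheduler):

```python
def execution_order(extensions: dict[str, list[str]]) -> list[str]:
    """Order extensions so each runs only after its dependencies (Kahn-style sort).

    `extensions` maps extension name -> list of dependency names.
    """
    order: list[str] = []
    done: set[str] = set()
    while len(order) < len(extensions):
        ready = [
            name for name, deps in extensions.items()
            if name not in done and all(d in done for d in deps)
        ]
        if not ready:
            raise ValueError("dependency cycle detected")
        for name in ready:
            order.append(name)
            done.add(name)
    return order
```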
### Defining Convergence Extensions Per Release Channel

It is possible to define Convergence Extensions per Release Channel. The final Convergence Extensions list is the union of the Service-level list and the per-Release-Channel list.
```yaml
service:
  name: my-service
  application: my-application
  convergenceExtensions:
    - inlined:
        ... # config here
      lifecycle: POST_APPROVAL
  perReleaseChannel:
    - releaseChannel: staging
      convergenceExtensions:
        - inlined:
            ... # config here
          lifecycle: POST_APPROVAL
  ... # the rest of service configuration here, such as kubernetes/program configs
```
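The resulting per-channel list can be modeled as a simple union (a sketch; the names are illustrative):

```python
def effective_extensions(
    service_level: list[str],
    per_release_channel: dict[str, list[str]],
    channel: str,
) -> list[str]:
    """Final Convergence Extensions for a Release Channel: the Service-level
    list plus that channel's own list (sketch of the union rule above)."""
    return service_level + per_release_channel.get(channel, [])
```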
### Sharing and De-Duplicating Convergence Extensions

By default, when you define an inlined Convergence Extension like this:
```yaml
service:
  name: my-service
  application: my-application
  convergenceExtensions:
    - inlined:
        name: my-extension
        taskConfig:
          program:
            name: my-program-name
            imageTag: my-tag
            imageRegistryInfo:
              containerRegistry: my-registry-name
              imageRepository: my-repository
            cmd: ['migrate']
      lifecycle: POST_APPROVAL
```
Prodvana will create one instance of the Convergence Extension per Release Channel. So if your Application has three Release Channels, the `migrate` command will be run three distinct times.

If the Convergence Extension uses Release-Channel-specific Parameters or Constants, this may be the behavior you want. But if some Release Channels can share the same instance because they pass the same Parameters, you can use the `sharedInstanceKey` field to define a unique key that Prodvana can use to de-duplicate instances.

Let's look at this example Application with three Release Channels:
```yaml
application:
  name: my-application
  releaseChannels:
    - name: beta
      runtimes:
        - runtime: my-runtime
      constants:
        - name: "datasource"
          string:
            value: "dev"
    - name: staging
      runtimes:
        - runtime: my-runtime
      constants:
        - name: "datasource"
          string:
            value: "dev"
    - name: production
      runtimes:
        - runtime: my-runtime
      constants:
        - name: "datasource"
          string:
            value: "prod"
```
Each Release Channel defines a Constant named `datasource`. Note that the `beta` and `staging` Release Channels both have the same value for this Constant, but `production` is different. This Constant will be used as an input to the Convergence Extension defined below:
```yaml
service:
  name: my-service
  application: my-application
  convergenceExtensions:
    - inlined:
        name: my-extension
        sharedInstanceKey: '{{.Constants.datasource}}'
        taskConfig:
          program:
            name: my-program-name
            imageTag: my-tag
            imageRegistryInfo:
              containerRegistry: my-registry-name
              imageRepository: my-repository
            cmd: ['migrate', '--source', '{{.Constants.datasource}}']
      lifecycle: POST_APPROVAL
  ... # the rest of service configuration here, such as kubernetes/program configs
```
The Constant `datasource` is templated into the command for this Convergence Extension (you can imagine this runs a migration on that datasource). We also set `sharedInstanceKey` with the `datasource` value -- Prodvana will use this key to generate only one instance of the Convergence Extension per unique `sharedInstanceKey` value. In this case, that means two instances will be created: one that runs the command `['migrate', '--source', 'dev']` and one that runs the command `['migrate', '--source', 'prod']`.

The `sharedInstanceKey` must include all Parameters and Constants that are used in the definition of the Convergence Extension.

By default, the `sharedInstanceKey` is set to the Release Channel's name; the default behavior is thus to generate one instance per Release Channel.
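The de-duplication rule can be sketched as grouping Release Channels by their rendered key (illustrative, not Prodvana's implementation):

```python
import re

def extension_instances(
    channel_constants: dict[str, dict[str, str]],
    key_template: str,
) -> dict[str, list[str]]:
    """Group Release Channels by rendered sharedInstanceKey; one extension
    instance is created per unique key (sketch of the rule above)."""
    instances: dict[str, list[str]] = {}
    for channel, constants in channel_constants.items():
        key = re.sub(
            r"\{\{\s*\.Constants\.(\w+)\s*\}\}",
            lambda m: constants[m.group(1)],
            key_template,
        )
        instances.setdefault(key, []).append(channel)
    return instances
```

For the Application above this yields one instance keyed `dev` (shared by `beta` and `staging`) and one keyed `prod`.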
> **Missing Parameters in sharedInstanceKey**
>
> If a Convergence Extension uses Parameters or Constants that are NOT included in the `sharedInstanceKey`, then Prodvana will generate fewer instances than needed, and some variants will not be generated. Make sure to include all Parameters that make a Convergence Extension instance unique in the key.
## Configuring Retry Policy

By default, Convergence Extensions will not retry on failure. This is safe behavior for one-shot operations like database migrations, where retrying is unlikely to succeed.

For scenarios where retrying is desired, use the following config:
```yaml
service:
  name: my-service
  application: my-application
  convergenceExtensions:
    - inlined:
        kubernetesConfig:
          ... # config here
      retryPolicy:
        maxAttempts: -1 # Defaults to 0. -1 for unlimited retries (never fail), 0 for no retries, a positive integer to limit retries exactly.
        baseInterval: 60s # Required if maxAttempts != 0. Start the first retry 1 minute after failure, with exponential backoff on subsequent failures.
        maxInterval: 600s # Required if maxAttempts != 0. Largest backoff delay between attempts, here limited to a max of 10 minutes.
      lifecycle: POST_APPROVAL
```
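Assuming the backoff doubles the delay each retry (the doubling factor is an assumption; the config comments only state exponential backoff capped at `maxInterval`), the retry schedule looks like this:

```python
def retry_delays(max_attempts: int, base_interval: float, max_interval: float) -> list[float]:
    """Backoff delays (seconds) implied by the retryPolicy fields.

    Sketch only: assumes the delay doubles each retry, capped at max_interval.
    max_attempts of -1 means unlimited; we show just the first five delays.
    """
    delays: list[float] = []
    delay = base_interval
    attempt = 0
    while (max_attempts < 0 and attempt < 5) or attempt < max_attempts:
        delays.append(min(delay, max_interval))
        delay *= 2
        attempt += 1
    return delays
```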
## Deprecated: convergenceExtensionInstances

> **Deprecated**
>
> The `convergenceExtensionInstances` mechanism for sharing Convergence Extension instances is deprecated. Instead, use `sharedInstanceKey`, explained in the section above.
It is possible to define a single Convergence Extension instance across multiple Release Channels. A typical example is running a task (closing a ticket or running migrations) before rolling out to any production clusters.
```yaml
service:
  name: my-service
  application: my-application
  convergenceExtensionInstances:
    - name: migration
      inlined:
        kubernetesConfig:
          type: KUBERNETES
          local:
            path: path-to-job-yaml
      lifecycle: POST_APPROVAL
    - name: close-jira
      inlined:
        kubernetesConfig:
          type: KUBERNETES
          local:
            path: path-to-job-yaml
      lifecycle: POST_APPROVAL
  convergenceExtensions:
    - instance: migration # Shared by all release channels
      lifecycle: POST_APPROVAL
  perReleaseChannel:
    - releaseChannel: prod1
      convergenceExtensions:
        - instance: close-jira # Shared with prod2
          lifecycle: POST_APPROVAL
    - releaseChannel: prod2
      convergenceExtensions:
        - instance: close-jira
          lifecycle: POST_APPROVAL
  ... # the rest of service configuration here, such as kubernetes/program configs
```
The Job definition referenced by `path-to-job-yaml`:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  generateName: my-prefix-
spec:
  ttlSecondsAfterFinished: 600
  template:
    spec:
      containers:
        - name: my-container
          image: my-image
          command: ["command", ...]
      restartPolicy: Never
  backoffLimit: 0
```
## Convergence Extensions vs. Delivery Extensions

Convergence Extensions were known as Delivery Extensions before February 2024. In your configurations, you can use either `deliveryExtensions` or `convergenceExtensions`, and either `deliveryExtensionInstances` or `convergenceExtensionInstances`. They mean the exact same thing.