Deprecated Components
- 1: cm2kc (clustermap to kubeconfig)
- 2: Phaino
- 3: Plank
- 4: tackle
1 - cm2kc (clustermap to kubeconfig)
Description
cm2kc is a CLI tool used to convert a clustermap file to a kubeconfig file.
Usage
go run ./cmd/cm2kc <options>
The following is a list of supported options for cm2kc:
-i, --input string Input clustermap file. (default "/dev/stdin")
-o, --output string Output kubeconfig file. (default "/dev/stdout")
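Because the defaults are /dev/stdin and /dev/stdout, cm2kc also works as a filter; a minimal invocation might look like this (the file paths are illustrative):
go run ./cmd/cm2kc < /path/to/clustermap.yaml > /path/to/kubeconfig.yaml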
Examples
Add a kubeconfig file in a secret kubeconfig from a clustermap file in another secret build-cluster for context my-context.
The following command will:
- Get a clustermap formatted secret build-cluster in key cluster for context my-context.
- Base64 decode the secret.
- Convert the clustermap data to a kubeconfig format.
- Create a kubeconfig formatted secret kubeconfig in key config for context my-context from the converted data.
kubectl --context=my-context get secrets build-cluster -o jsonpath='{.data.cluster}' |
base64 -d |
go run ./cmd/cm2kc |
kubectl --context=my-context create secret generic kubeconfig --from-file=config=/dev/stdin
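To verify the new secret's contents, it can be decoded back out with standard kubectl (for inspection only, mirroring the jsonpath pattern above):
kubectl --context=my-context get secret kubeconfig -o jsonpath='{.data.config}' | base64 -d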
Lastly, to begin using this in Prow, update the volume mount and replace --build-cluster with --kubeconfig in the deployment of each relevant Prow component (e.g. crier, deck, plank, and sinker).
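As a sketch of that change, the relevant portion of a component deployment could look like the following; the container name, mount path, and file name are illustrative assumptions, not prescribed by cm2kc:
# deployment excerpt (illustrative names and paths)
spec:
  template:
    spec:
      containers:
      - name: deck
        args:
        - --kubeconfig=/etc/kubeconfig/config   # replaces --build-cluster
        volumeMounts:
        - name: kubeconfig
          mountPath: /etc/kubeconfig
          readOnly: true
      volumes:
      - name: kubeconfig
        secret:
          secretName: kubeconfig
Note that the file name under the mount path matches the key (config) used when creating the secret above.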
Create a kubeconfig file at path /path/to/kubeconfig.yaml from a clustermap file at path /path/to/clustermap.yaml
Ensure the clustermap file exists at the specified --input path:
# /path/to/clustermap.yaml
default:
  clientCertificate: fake-default-client-cert
  clientKey: fake-default-client-key
  clusterCaCertificate: fake-default-ca-cert
  endpoint: https://1.2.3.4
build:
  clientCertificate: fake-build-client-cert
  clientKey: fake-build-client-key
  clusterCaCertificate: fake-build-ca-cert
  endpoint: https://5.6.7.8
Execute cm2kc, specifying an --input path to the clustermap file and an --output path to the desired location of the generated kubeconfig file:
go run ./cmd/cm2kc --input=/path/to/clustermap.yaml --output=/path/to/kubeconfig.yaml
The following kubeconfig file will be created at the specified --output path:
# /path/to/kubeconfig.yaml
apiVersion: v1
clusters:
- name: default
  cluster:
    certificate-authority-data: fake-default-ca-cert
    server: https://1.2.3.4
- name: build
  cluster:
    certificate-authority-data: fake-build-ca-cert
    server: https://5.6.7.8
contexts:
- name: default
  context:
    cluster: default
    user: default
- name: build
  context:
    cluster: build
    user: build
current-context: default
kind: Config
preferences: {}
users:
- name: default
  user:
    client-certificate-data: fake-default-client-cert
    client-key-data: fake-default-client-key
- name: build
  user:
    client-certificate-data: fake-build-client-cert
    client-key-data: fake-build-client-key
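To confirm the generated file is well-formed, its contexts can be listed with standard kubectl (a quick sanity check, not part of cm2kc itself):
kubectl --kubeconfig=/path/to/kubeconfig.yaml config get-contexts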
2 - Phaino
Run prowjobs on your local workstation with phaino.
Plato believed that ideas and forms are the ultimate truth, whereas we only see the imperfect physical appearances of those ideas.
He likens this in his Allegory of the Cave to someone living in a cave who can only see the shadows projected on the wall from objects passing in front of a fire.
Phaino is the act of making those imperfect shadows appear.
Phaino shares a prefix with Pharos, meaning lighthouse, in particular the ancient one in Alexandria.
Usage
Usage:
# Use a job from deck
go run ./cmd/phaino $URL # or /path/to/prowjob.yaml
# Use mkpj to create the job
go run ./cmd/mkpj --config-path=/path/to/prow/config.yaml --job-config-path=/path/to/prow/job/configs --job=foo > /tmp/foo
go run ./cmd/phaino /tmp/foo
Phaino is an interactive utility; it will prompt you for a local copy of any secrets or volumes that the Prow Job may require.
Common options
- --grace=5m controls how long to wait for interrupted jobs before terminating
- --print the command that runs each job without running it
- --privileged jobs are allowed to run instead of rejected
- --timeout=10m controls how long to allow jobs to run before interrupting them
- --code-mount-path=/go changes the path where code is mounted in the container
- --skip-volume-mounts=volume1,volume2 skips the unwanted volume mounts that are defined in the job spec
- --extra-volume-mounts=/go/src/sigs.k8s.io/prow=/Users/xyz/k8s-test-infra includes the extra volume mounts needed for the container; the key is the mount path in the container and the value is the local path
- --skip-envs=env1,env2 skips the unwanted env vars that are defined in the job spec
- --extra-envs=env1=val1,env2=val2 includes the extra env vars needed for the container
- --use-local-gcloud-credentials controls whether to use the same gcloud credentials as the local machine
- --use-local-kubeconfig controls whether to use the same kubeconfig as the local machine
Common options usage scenarios
Phaino is smart about prompting for where the repo is located, volume mounts, etc. If you want to avoid the prompts, use the following tricks instead:
- If the repo needs to be cloned under GOPATH, use:
  --code-mount-path=/whatever/go/src # controls where source code is mounted in the container
  --extra-volume-mounts=/whatever/go/src/sigs.k8s.io/prow=/Users/xyz/k8s-test-infra
- If the job requires mounting a kubeconfig, assuming the mount is named kubeconfig, use:
  --use-local-kubeconfig --skip-volume-mounts=kubeconfig
- If the job requires mounting gcloud default credentials, assuming the mount is named service-account, use:
  --use-local-gcloud-credentials --skip-volume-mounts=service-account
- If the job requires mounting something else, e.g. name: foo; mountPath: /bar, use:
  --extra-volume-mounts=/bar=/Users/xyz/local/bar --skip-volume-mounts=foo
- If the job requires env vars, use:
  --extra-envs=env1=val1,env2=val2
See go run ./cmd/phaino --help for the full option list.
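These options can be combined in a single run; for example, a sketch that raises the timeout and reuses the local kubeconfig (the job file /tmp/foo comes from the mkpj example above):
go run ./cmd/phaino --timeout=30m --grace=1m --use-local-kubeconfig --skip-volume-mounts=kubeconfig /tmp/foo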
Usage examples
URL example
- Go to your deck deployment
- Pick a job and click the rerun icon on the left
- Copy the URL (something like https://prow.k8s.io/rerun?prowjob=d08f1ca5-5d63-11e9-ab62-0a580a6c1281)
- Paste it as a phaino arg:
  go run ./cmd/phaino https://prow.k8s.io/rerun?prowjob=d08f1ca5-5d63-11e9-ab62-0a580a6c1281
- Alternatively:
  go run ./cmd/phaino <(curl $URL)
Configuration example
- Use mkpj to create the job and pipe this to phaino
- For prow.k8s.io jobs use //config:mkpj:
  go run ./config:mkpj --job=pull-test-infra-bazel > /tmp/foo
  go run ./cmd/phaino /tmp/foo
- Other deployments will need to clone that rule and/or pass in extra flags:
  go run ./cmd/mkpj --config-path=/my/config.yaml --job=my-job > /tmp/foo
  go run ./cmd/phaino /tmp/foo
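The intermediate file can also be skipped by feeding mkpj's output straight to phaino with process substitution, the same pattern as the URL example above (a sketch, assuming your shell supports <(...)):
go run ./cmd/phaino <(go run ./cmd/mkpj --config-path=/my/config.yaml --job=my-job)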
3 - Plank
Plank is the controller that manages the job execution and lifecycle for jobs running in k8s.
Usage
go run ./cmd/prow-controller-manager --help
Configuration
GCS and S3 are supported for job log storage.
# config.yaml
plank:
  # used to link to job results for decorated jobs (with pod utilities)
  job_url_prefix_config:
    '*': https://<domain>/view
  # used to link to job results for non-decorated jobs (without pod utilities)
  job_url_template: 'https://<domain>/view/<bucket-name>/pr-logs/pull/{{.Spec.Refs.Repo}}/{{with index .Spec.Refs.Pulls 0}}{{.Number}}{{end}}/{{.Spec.Job}}/{{.Status.BuildID}}'
  report_template: '[Full PR test history](https://<domain>/pr-history?org={{.Spec.Refs.Org}}&repo={{.Spec.Refs.Repo}}&pr={{with index .Spec.Refs.Pulls 0}}{{.Number}}{{end}})'
  default_decoration_config_entries:
  # All entries that match a job are used; later entries override previous values.
  # Omission of 'repo' and 'cluster' fields makes this entry match all jobs.
  - config:
      timeout: 4h
      grace_period: 15s
      utility_images: # pull specs for container images used to construct job pods
        clonerefs: gcr.io/k8s-prow/clonerefs:v20190221-d14461a
        initupload: gcr.io/k8s-prow/initupload:v20190221-d14461a
        entrypoint: gcr.io/k8s-prow/entrypoint:v20190221-d14461a
        sidecar: gcr.io/k8s-prow/sidecar:v20190221-d14461a
      gcs_configuration: # configuration for uploading job results to GCS
        bucket: <bucket-name> or s3://<bucket-name>
        path_strategy: explicit # or `legacy`, `single`
        default_org: <github-org> # should not need this if `path_strategy` is set to `explicit`
        default_repo: <github-repo> # should not need this if `path_strategy` is set to `explicit`
      gcs_credentials_secret: <secret-name> # the name of the secret that stores cloud provider credentials
      ssh_key_secrets:
      - ssh-secret # name of the secret that stores the bot's ssh keys for GitHub; the key of the map doesn't matter, only the values are used
  - repo: "^org/" # some regexp to match against <org/repo>
    config:
      timeout: 2h
  - cluster: "-trusted$" # some regexp to match against the cluster name
    config:
      # example override to use k8s SA with GCP workload identity rather than
      # a GCP service account key file.
      gcs_credentials_secret: ""
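Before deploying a change like this, the configuration can be validated with Prow's checkconfig utility (a sketch; both paths are illustrative):
go run ./cmd/checkconfig --config-path=/path/to/config.yaml --plugin-config=/path/to/plugins.yaml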
4 - tackle
Prow’s tackle utility walks you through deploying a new instance of prow in a couple of minutes. Try it out!
Installing tackle
Tackle currently needs to be built from source. The following steps walk you through the process:
- Clone the test-infra repository:
  git clone git@github.com:kubernetes/test-infra.git
- Build tackle (this requires a working Go installation on your system):
  cd test-infra/prow/cmd/tackle && go build -o tackle
- Optionally move tackle onto your $PATH:
  sudo mv tackle /usr/sbin/tackle
Deploying prow
Note: Creating a cluster using the tackle utility assumes you have the gcloud application in your $PATH and are logged in. If you are doing this on another cloud, skip to the Manual deployment below.
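Both prerequisites can be checked up front with standard gcloud commands:
gcloud auth list
gcloud config get-value project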
Installing Prow using tackle will help you through the following steps:
- Choosing a kubectl context (or creating a cluster on GCP / getting its credentials if necessary)
- Deploying prow into that cluster
- Configuring GitHub to send prow webhooks for your repos. This is where you’ll provide the absolute /path/to/github/token
To install prow run the following and follow the on-screen instructions:
- Run tackle:
  tackle
- Once your cluster is created, you’ll get a prompt to apply a starter.yaml. Before you do that, open another terminal and apply the prow CRDs using:
  kubectl apply --server-side=true -f https://raw.githubusercontent.com/kubernetes/test-infra/master/config/prow/cluster/prowjob-crd/prowjob_customresourcedefinition.yaml
- After that, specify the starter.yaml you want to use (please make sure to replace the values mentioned here). Once that is done, some pods still won’t be in the Running state because we haven’t created the secret containing the credentials needed for our GCS bucket. To do that, follow the steps in Configure a GCS bucket.
- Once that is done, tackle should show you the URL where you can access the prow dashboard. To use it with your repositories, head over to the settings of the GitHub app you created and, under webhook secret, supply the HMAC token you specified in the starter.yaml.
- Once that is done, install the GitHub app on the repositories you want (this is only needed if you ran tackle with the --skip-github flag) and you should now be able to use Prow :)
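While waiting for the pods to reach the Running state, their progress can be watched with kubectl (the prow namespace here is an assumption; use whatever namespace your starter.yaml creates):
kubectl get pods --namespace=prow --watch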
See the Next Steps section after running this utility.