build-python3-image-no-reqs (success)

Duration: 47s
Queued: 0s
Stage: docker-image
Runner: linux-aws-1
Average duration: 37s (this job: 47s)
Failure rate: 1.5% (last 30 days)
Job Analysis: Passed
Full Job Log (295 lines)
22:37:49  Running with gitlab-runner 18.9.0 (07e534ba)
22:37:49  on gitlab-runner-linux-1-746bdd58fd-cqwdq wRxjPbsJX, system ID: r_BbsT8E7thlM4
22:37:49  feature flags: FF_USE_FASTZIP:true, FF_USE_NEW_BASH_EVAL_STRATEGY:true, FF_USE_DYNAMIC_TRACE_FORCE_SEND_INTERVAL:true, FF_SCRIPT_SECTIONS:true, FF_USE_ADVANCED_POD_SPEC_CONFIGURATION:true, FF_PRINT_POD_EVENTS:true, FF_USE_DUMB_INIT_WITH_KUBERNETES_EXECUTOR:true, FF_LOG_IMAGES_CONFIGURED_FOR_JOB:true, FF_CLEAN_UP_FAILED_CACHE_EXTRACT:true, FF_GIT_URLS_WITHOUT_TOKENS:true, FF_WAIT_FOR_POD_TO_BE_REACHABLE:true, FF_USE_FLEETING_ACQUIRE_HEARTBEATS:true, FF_USE_JOB_ROUTER:true
22:37:49  Resolving secrets
22:37:49  section_start:1778020669:prepare_executor
22:37:49  Preparing the "kubernetes" executor
22:37:49  "CPURequest" overwritten with "2"
22:37:49  "MemoryRequest" overwritten with "4G"
22:37:49  Using Kubernetes namespace: gitlab-runner
22:37:49  Using Kubernetes executor with image registry.scandit.com/dockerfiles/kaniko:v1.27.4-crane@sha256:fa662cefab90e8cde8767935540790733c85bd963f2c18b444d6595e3e91a0ff ...
22:37:49  Using attach strategy to execute scripts...
22:37:49  Using effective pull policy of [Always] for container build
22:37:49  Using effective pull policy of [Always] for container helper
22:37:49  Using effective pull policy of [Always] for container init-permissions
22:37:49  section_end:1778020669:prepare_executor
22:37:49  section_start:1778020669:prepare_script
22:37:49  Preparing environment
22:37:49  Using FF_USE_POD_ACTIVE_DEADLINE_SECONDS, the Pod activeDeadlineSeconds will be set to the job timeout: 1h0m0s...
22:37:49  WARNING: Advanced Pod Spec configuration enabled, merging the provided PodSpec to the generated one. This is a beta feature and is subject to change. Feedback is collected in this issue: https://gitlab.com/gitlab-org/gitlab-runner/-/issues/29659 ...
22:37:50  Subscribing to Kubernetes Pod events...
22:37:50  Type Reason Message
22:37:50  Warning FailedScheduling 0/30 nodes are available: 1 node(s) had untolerated taint {scandit.io/clickhouse: production}, 1 node(s) had untolerated taint {scandit.io/clickhouse: staging}, 1 node(s) had untolerated taint {scandit.io/sonarqube: dedicated}, 21 node(s) didn't match Pod's node affinity/selector, 6 Insufficient cpu. preemption: 0/30 nodes are available: 24 Preemption is not helpful for scheduling, 6 No preemption victims found for incoming pod.
22:37:57  Warning FailedScheduling 0/30 nodes are available: 1 node(s) had untolerated taint {scandit.io/clickhouse: production}, 1 node(s) had untolerated taint {scandit.io/clickhouse: staging}, 1 node(s) had untolerated taint {scandit.io/sonarqube: dedicated}, 21 node(s) didn't match Pod's node affinity/selector, 6 Insufficient cpu. preemption: 0/30 nodes are available: 24 Preemption is not helpful for scheduling, 6 No preemption victims found for incoming pod.
22:37:57  Warning FailedScheduling 0/30 nodes are available: 1 node(s) had untolerated taint {scandit.io/clickhouse: production}, 1 node(s) had untolerated taint {scandit.io/clickhouse: staging}, 1 node(s) had untolerated taint {scandit.io/sonarqube: dedicated}, 21 node(s) didn't match Pod's node affinity/selector, 6 Insufficient cpu. preemption: 0/30 nodes are available: 24 Preemption is not helpful for scheduling, 6 No preemption victims found for incoming pod.
22:37:58  Warning FailedScheduling 0/31 nodes are available: 1 node(s) had untolerated taint {node.cloudprovider.kubernetes.io/uninitialized: true}, 1 node(s) had untolerated taint {scandit.io/clickhouse: production}, 1 node(s) had untolerated taint {scandit.io/clickhouse: staging}, 1 node(s) had untolerated taint {scandit.io/sonarqube: dedicated}, 21 node(s) didn't match Pod's node affinity/selector, 6 Insufficient cpu. preemption: 0/31 nodes are available: 25 Preemption is not helpful for scheduling, 6 No preemption victims found for incoming pod.
22:37:58  Warning FailedScheduling 0/31 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }, 1 node(s) had untolerated taint {scandit.io/clickhouse: production}, 1 node(s) had untolerated taint {scandit.io/clickhouse: staging}, 1 node(s) had untolerated taint {scandit.io/sonarqube: dedicated}, 21 node(s) didn't match Pod's node affinity/selector, 6 Insufficient cpu. preemption: 0/31 nodes are available: 25 Preemption is not helpful for scheduling, 6 No preemption victims found for incoming pod.
22:37:59  Warning FailedScheduling 0/31 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }, 1 node(s) had untolerated taint {scandit.io/clickhouse: production}, 1 node(s) had untolerated taint {scandit.io/clickhouse: staging}, 1 node(s) had untolerated taint {scandit.io/sonarqube: dedicated}, 21 node(s) didn't match Pod's node affinity/selector, 6 Insufficient cpu. preemption: 0/31 nodes are available: 25 Preemption is not helpful for scheduling, 6 No preemption victims found for incoming pod.
22:38:07  Warning FailedScheduling 0/31 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }, 1 node(s) had untolerated taint {scandit.io/clickhouse: production}, 1 node(s) had untolerated taint {scandit.io/clickhouse: staging}, 1 node(s) had untolerated taint {scandit.io/sonarqube: dedicated}, 21 node(s) didn't match Pod's node affinity/selector, 6 Insufficient cpu. preemption: 0/31 nodes are available: 25 Preemption is not helpful for scheduling, 6 No preemption victims found for incoming pod.
22:38:07  Warning FailedScheduling 0/31 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }, 1 node(s) had untolerated taint {scandit.io/clickhouse: production}, 1 node(s) had untolerated taint {scandit.io/clickhouse: staging}, 1 node(s) had untolerated taint {scandit.io/sonarqube: dedicated}, 21 node(s) didn't match Pod's node affinity/selector, 6 Insufficient cpu. preemption: 0/31 nodes are available: 25 Preemption is not helpful for scheduling, 6 No preemption victims found for incoming pod.
22:38:09  Normal Scheduled Successfully assigned gitlab-runner/runner-wrxjpbsjx-project-621-concurrent-6-lvh1v06x to ip-10-0-27-85.eu-central-1.compute.internal
22:38:09  Normal TaintManagerEviction Cancelling deletion of Pod gitlab-runner/runner-wrxjpbsjx-project-621-concurrent-6-lvh1v06x
22:38:12  Normal Pulling Pulling image "gitlab/gitlab-runner-helper:x86_64-v18.8.0"
22:38:13  Normal Pulled Successfully pulled image "gitlab/gitlab-runner-helper:x86_64-v18.8.0" in 1.318s (1.318s including waiting). Image size: 39949060 bytes.
22:38:16  Normal Created Created container: init-permissions
22:38:16  Normal Started Started container init-permissions
22:38:17  Normal Pulling Pulling image "498954711405.dkr.ecr.eu-central-1.amazonaws.com/dockerfiles/kaniko@sha256:fa662cefab90e8cde8767935540790733c85bd963f2c18b444d6595e3e91a0ff"
22:38:25  Normal Pulled Successfully pulled image "498954711405.dkr.ecr.eu-central-1.amazonaws.com/dockerfiles/kaniko@sha256:fa662cefab90e8cde8767935540790733c85bd963f2c18b444d6595e3e91a0ff" in 8.026s (8.026s including waiting). Image size: 50126433 bytes.
22:38:28  Normal Created Created container: build
22:38:28  Normal Started Started container build
22:38:28  Normal Pulled Container image "gitlab/gitlab-runner-helper:x86_64-v18.8.0" already present on machine
22:38:28  Normal Created Created container: helper
22:38:28  Normal Started Started container helper
22:38:30  Running on runner-wrxjpbsjx-project-621-concurrent-6-lvh1v06x via gitlab-runner-linux-1-746bdd58fd-cqwdq...

22:38:30  section_end:1778020710:prepare_script
22:38:30  section_start:1778020710:get_sources
22:38:30  Getting source from Git repository
22:38:31  Gitaly correlation ID: 01KQX4MNVV2AG2P8QHNGE1NKMS
22:38:31  Fetching changes with git depth set to 1...
22:38:31  Initialized empty Git repository in /build/internal/gitlab-templates/.git/
22:38:31  Created fresh repository.
22:38:32  Checking out 22f5b5c3 as detached HEAD (ref is refs/merge-requests/639/merge)...

22:38:32  Skipping Git submodules setup

22:38:32  section_end:1778020712:get_sources
22:38:32  section_start:1778020712:step_script
22:38:32  Executing "step_script" stage of the job script
22:38:32  section_start:1778020712:section_pre_build_script_0[hide_duration=true,collapsed=true]
22:38:32  $ function cleanup {
22:38:32      rv=$?
22:38:32      if [ $rv -ne 0 ]; then
22:38:32        echo ""
22:38:32        echo " Failure Cause Analysis might help, please open this link:"
22:38:32        echo " https://scout.scandit.io/analysis/projects/${CI_PROJECT_ID}/jobs/${CI_JOB_ID}"
22:38:32        echo ""
22:38:32      fi
22:38:32      echo ""
22:38:32      echo "Scout Analysis: https://scout.scandit.io/analysis/projects/${CI_PROJECT_ID}/jobs/${CI_JOB_ID}"
22:38:32      echo ""
22:38:32      echo ""
22:38:32      echo "Grafana Pod-View: https://grafana.scandit.com/d/k8s_views_pods/kubernetes-views-pods?orgId=1&refresh=1m&var-datasource=${GRAFANA_DATASOURCE}&var-host=${SC_K8S_NODE_NAME}&var-namespace=${SC_K8S_NAMESPACE}&var-pod=${HOSTNAME}&var-resolution=15&from=${__start_time}000&to=${EPOCHSECONDS}000"
22:38:32      echo "Grafana Node-View: https://grafana.scandit.com/d/k8s_views_nodes/kubernetes-views-nodes?orgId=1&refresh=1m&var-datasource=${GRAFANA_DATASOURCE}&var-node=${SC_K8S_NODE_NAME}&var-resolution=15s&from=${__start_time}000&to=${EPOCHSECONDS}000"
22:38:32      echo "Loki Logs: https://grafana.scandit.com/a/grafana-lokiexplore-app/explore/log_group/gitlab-runner/logs?var-ds=${LOKI_DATASOURCE}&var-filters=log_group|=|gitlab-runner&var-filters=source|=|${LOKI_LOGSOURCE}&var-filters=namespace|=|${SC_K8S_NAMESPACE}&var-filters=CI_PROJECT_ID|=|${CI_PROJECT_ID}&var-filters=CI_PIPELINE_ID|=|${CI_PIPELINE_ID}&var-filters=CI_JOB_ID|=|${CI_JOB_ID}&sortOrder=Ascending&from=${__start_time}000&to=${EPOCHSECONDS}000"
22:38:32      echo "Lilibet Statistics: https://lilibet.scandit.io/dashboard/204-job-drill-down?date_range=$(date -d '-7 days' +%Y-%m-%d)~$(date -d '+7 days' +%Y-%m-%d)&job_name=${CI_JOB_NAME}&project=${CI_PROJECT_PATH}"
22:38:32      echo ""
22:38:32      exit $rv
22:38:32  }
22:38:32  trap cleanup EXIT
22:38:32  echo "INFO: This is the CI job pre_build_script"
22:38:32  echo "INFO: It's defined in the backend/infra/aws repo."
22:38:32  echo "INFO: These additional Scandit variables are available to you:"
22:38:32  echo " SC_K8S_NODE_NAME: $SC_K8S_NODE_NAME"
22:38:32  echo " SC_K8S_IMAGE_ID: $SC_K8S_IMAGE_ID"
22:38:32  echo " SC_K8S_KYVERNO_PATCHES: |"
22:38:32  echo "$SC_K8S_KYVERNO_PATCHES" | sed 's/^/ /'
22:38:32  echo "cpu (r/l): ${SC_K8S_REQUESTS_CPU}/${SC_K8S_LIMITS_CPU}"
22:38:32  if command -v numfmt >/dev/null 2>&1; then
22:38:32    echo "memory (r/l): $(numfmt --to=iec --suffix=B $SC_K8S_REQUESTS_MEMORY)/$(numfmt --to=iec --suffix=B $SC_K8S_LIMITS_MEMORY)"
22:38:32  else
22:38:32    echo "memory (r/l): ${SC_K8S_REQUESTS_MEMORY}/${SC_K8S_LIMITS_MEMORY}"
22:38:32  fi
22:38:32  __start_time=${EPOCHSECONDS}
22:38:32  echo ""
22:38:32  echo "Grafana Pod-View: https://grafana.scandit.com/d/k8s_views_pods/kubernetes-views-pods?orgId=1&refresh=1m&var-datasource=${GRAFANA_DATASOURCE}&var-host=${SC_K8S_NODE_NAME}&var-namespace=${SC_K8S_NAMESPACE}&var-pod=${HOSTNAME}&var-resolution=15&from=${__start_time}000&to=now"
22:38:32  echo "Grafana Node-View: https://grafana.scandit.com/d/k8s_views_nodes/kubernetes-views-nodes?orgId=1&refresh=1m&var-datasource=${GRAFANA_DATASOURCE}&var-node=${SC_K8S_NODE_NAME}&var-resolution=15s&from=${__start_time}000&to=now"
22:38:32  echo "Loki Logs: https://grafana.scandit.com/a/grafana-lokiexplore-app/explore/log_group/gitlab-runner/logs?var-ds=${LOKI_DATASOURCE}&var-filters=log_group|=|gitlab-runner&var-filters=source|=|${LOKI_LOGSOURCE}&var-filters=namespace|=|${SC_K8S_NAMESPACE}&var-filters=CI_PROJECT_ID|=|${CI_PROJECT_ID}&var-filters=CI_PIPELINE_ID|=|${CI_PIPELINE_ID}&var-filters=CI_JOB_ID|=|${CI_JOB_ID}&sortOrder=Ascending&from=${__start_time}000&to=now"
22:38:32  echo "Lilibet Statistics: https://lilibet.scandit.io/dashboard/204-job-drill-down?date_range=$(date -d '-7 days' +%Y-%m-%d)~$(date -d '+7 days' +%Y-%m-%d)&job_name=${CI_JOB_NAME}&project=${CI_PROJECT_PATH}"
22:38:32  echo ""
22:38:32  echo "Setting up credentials for Gitlab Python registries"
22:38:32  mkdir -p ~
22:38:32  echo "machine gitlab.scandit.com" > ~/.netrc
22:38:32  echo "login gitlab-ci-token" >> ~/.netrc
22:38:32  echo "password ${CI_JOB_TOKEN}" >> ~/.netrc
22:38:32  chmod 600 ~/.netrc
22:38:32  if command -v git &> /dev/null && [ "$(id -u)" -ne 0 ]; then
22:38:32    git config --global --add safe.directory $CI_PROJECT_DIR
22:38:32  fi
22:38:32  # Sonarqube server is running on the same cluster. Use internal address
22:38:32  export SONAR_HOST_URL="http://sonarqube.sonarqube.svc.cluster.local:9000"
22:38:32  section_end:1778020712:section_pre_build_script_0
22:38:32  INFO: This is the CI job pre_build_script
22:38:32  INFO: It's defined in the backend/infra/aws repo.
22:38:32  INFO: These additional Scandit variables are available to you:
22:38:32    SC_K8S_NODE_NAME: ip-10-0-27-85.eu-central-1.compute.internal
22:38:32    SC_K8S_IMAGE_ID:
22:38:32    SC_K8S_KYVERNO_PATCHES: |

22:38:32  cpu (r/l): 2/4
22:38:32  memory (r/l): 4000000000/17179869184

22:38:32  Grafana Pod-View: https://grafana.scandit.com/d/k8s_views_pods/kubernetes-views-pods?orgId=1&refresh=1m&var-datasource=lu1rmx27z&var-host=ip-10-0-27-85.eu-central-1.compute.internal&var-namespace=gitlab-runner&var-pod=runner-wrxjpbsjx-project-621-concurrent-6-lvh1v06x&var-resolution=15&from=1778020712000&to=now
22:38:32  Grafana Node-View: https://grafana.scandit.com/d/k8s_views_nodes/kubernetes-views-nodes?orgId=1&refresh=1m&var-datasource=lu1rmx27z&var-node=ip-10-0-27-85.eu-central-1.compute.internal&var-resolution=15s&from=1778020712000&to=now
22:38:32  Loki Logs: https://grafana.scandit.com/a/grafana-lokiexplore-app/explore/log_group/gitlab-runner/logs?var-ds=nVsAo7UVk&var-filters=log_group|=|gitlab-runner&var-filters=source|=|k8s-ci.aws.scandit.io&var-filters=namespace|=|gitlab-runner&var-filters=CI_PROJECT_ID|=|621&var-filters=CI_PIPELINE_ID|=|1580356&var-filters=CI_JOB_ID|=|54442871&sortOrder=Ascending&from=1778020712000&to=now
22:38:32  date: invalid date '-7 days'
22:38:32  date: invalid date '+7 days'
22:38:32  Lilibet Statistics: https://lilibet.scandit.io/dashboard/204-job-drill-down?date_range=~&job_name=build-python3-image-no-reqs&project=internal/gitlab-templates

22:38:32  Setting up credentials for Gitlab Python registries
22:38:32  $ echo $DOCKER_CONFIG_JSON > /kaniko/.docker/config.json
22:38:33  $ mv /root/.netrc /kaniko/.netrc
22:38:33  section_start:1778020712:section_script_step_2[hide_duration=true,collapsed=true]
22:38:33  $ function copy_files() {
22:38:33      local src="$1"
22:38:33      local trg="$2"
22:38:33      for f in $src; do
22:38:33        t="$trg/`dirname $f`"
22:38:33        mkdir -p $t || true
22:38:33        echo "Copy $f"
22:38:33        cp -pr $f $trg/$f
22:38:33      done
22:38:33  }
22:38:33  function recursive_hash() {
22:38:33      local dir="$1"
22:38:33      find "$dir" -exec stat -c '%F|%a|%u:%g|%n' {} + -type f -exec sha256sum {} + | sort | sha256sum | cut -d ' ' -f1
22:38:33  }
22:38:33  function remote_docker_digest() {
22:38:33      local images="$1"
22:38:33      echo $images | xargs -n 1 crane digest
22:38:33  }
22:38:33  function remote_image_exists() {
22:38:33      local image="$1"
22:38:33      crane manifest $image > /dev/null 2>&1
22:38:33  }
22:38:33  function remote_images_are_identical() {
22:38:33      local imageA="$1"
22:38:33      local imageB="$2"
22:38:33      if [[ $(remote_docker_digest "$imageA") == $(remote_docker_digest "$imageB") ]]; then
22:38:33        return 0
22:38:33      else
22:38:33        return 1
22:38:33      fi
22:38:33  }
22:38:33  function copy_image() {
22:38:33      local image="$1"
22:38:33      local remotes="$2"
22:38:33      local backup_ext="$3"
22:38:33      echo "$image"
22:38:33      local source_digest=$(remote_docker_digest $image)
22:38:33      local target_digest
22:38:33      for registry in $remotes; do
22:38:33        if target_digest=$(remote_docker_digest $registry); then
22:38:33          if [ "$target_digest" != "$source_digest" ]; then
22:38:33            echo "image outdated, overwriting with newest version"
22:38:33            crane copy $image $registry
22:38:33            crane copy $image ${registry}${backup_ext}
22:38:33          fi
22:38:33        else
22:38:33          echo "image does not exist, writing newest version"
22:38:33          crane copy $image $registry
22:38:33          crane copy $image ${registry}${backup_ext}
22:38:33        fi
22:38:33      done
22:38:33  }
22:38:33  section_end:1778020712:section_script_step_2
22:38:33  section_start:1778020712:section_script_step_3[hide_duration=true,collapsed=true]
22:38:33  $ if [ "$CONTAINER_SUBDIR" != "" ]; then
22:38:33    echo "Entering subpath $CONTAINER_SUBDIR"
22:38:33    cd $CONTAINER_SUBDIR
22:38:33  fi
22:38:33  section_end:1778020712:section_script_step_3
22:38:33  $ copy_files "$CONTAINER_IMPLICIT_REQUIREMENTS $CONTAINER_REQUIREMENTS" "$CONTAINER_CONTEXT_PATH"
22:38:33  Copy Dockerfile.python-3-no-requirements
22:38:33  $ echo "$CONTAINER_BUILD_ENVIRONMENT" > $CONTAINER_CONTEXT_PATH/.docker-build-env
22:38:33  $ docker_checksum=$(recursive_hash $CONTAINER_CONTEXT_PATH)
22:38:33  section_start:1778020712:section_script_step_7[hide_duration=true,collapsed=true]
22:38:33  $ if [ "$CONTAINER_IMAGE_NAME" == "" ]; then
22:38:33    final_image_name=${CONTAINER_IMAGE_URL}
22:38:33  else
22:38:33    final_image_name=${CONTAINER_IMAGE_URL}/${CONTAINER_IMAGE_NAME}
22:38:33  fi
22:38:33  section_end:1778020712:section_script_step_7
22:38:33  $ final_image_url=${final_image_name}:${docker_checksum}
22:38:33  section_start:1778020712:section_script_step_9[hide_duration=true,collapsed=true]
22:38:33  $ if [ "${PIPELINE_IMAGE_REFS}" == "1" ]; then
22:38:33    echo $CONTAINER_IMAGE_VARIABLE=${final_image_url}-P${CI_PROJECT_ID}-${CI_PIPELINE_ID} > $CI_PROJECT_DIR/docker_image_build.env
22:38:33  else
22:38:33    echo $CONTAINER_IMAGE_VARIABLE=$final_image_url > $CI_PROJECT_DIR/docker_image_build.env
22:38:33  fi
22:38:33  section_end:1778020712:section_script_step_9
22:38:33  $ echo ${CONTAINER_IMAGE_VARIABLE}_HASH=$docker_checksum >> $CI_PROJECT_DIR/docker_image_build.env
22:38:33  section_start:1778020712:section_script_step_11[hide_duration=true,collapsed=true]
22:38:33  $ if [ "${FORCE_BUILD}" != "true" ] || command -v crane &> /dev/null; then
22:38:33    echo $REGISTRY_PASSWORD | crane auth login $REGISTRY -u $REGISTRY_USER --password-stdin
22:38:33  fi
22:38:33  section_end:1778020712:section_script_step_11

22:38:33  WARNING! Your credentials are stored unencrypted in '/kaniko/.docker/config.json'.
22:38:33  Configure a credential helper to remove this warning. See
22:38:33  https://docs.docker.com/go/credential-store/

22:38:33  2026/05/05 22:38:32 logged in via /kaniko/.docker/config.json
22:38:33  section_start:1778020712:section_script_step_12[hide_duration=true,collapsed=true]
22:38:33  $ if [ "${FORCE_BUILD}" != "true" ] && remote_image_exists "$final_image_url"; then
22:38:33    echo "Image already exists, skip the build."
22:38:33    echo "$final_image_url"
22:38:33    if [[ "$CI_COMMIT_BRANCH" == "$CI_DEFAULT_BRANCH" ]]; then
22:38:33      _EXT=""
22:38:33      _BACKUP_EXT="-CI${CI_JOB_ID}-$(date '+%Y%m%d')"
22:38:33    elif [[ -n "$CI_MERGE_REQUEST_ID" ]]; then
22:38:33      _EXT="-MR${CI_MERGE_REQUEST_IID}"
22:38:33      _BACKUP_EXT=""
22:38:33    elif [[ "$CI_COMMIT_REF_PROTECTED" == "true" ]]; then
22:38:33      _EXT="-${CI_COMMIT_REF_SLUG}"
22:38:33      _BACKUP_EXT="-CI${CI_JOB_ID}-$(date '+%Y%m%d')"
22:38:33    fi
22:38:33    for _TAG in $CONTAINER_IMAGE_TAG; do
22:38:33      echo "Copying ${final_image_url} to ${final_image_name}:${_TAG}${_EXT}"
22:38:33      copy_image "${final_image_url}" "${final_image_name}:${_TAG}${_EXT}" "${_BACKUP_EXT}"
22:38:33    done
22:38:33    if [ "${PIPELINE_IMAGE_REFS}" == "1" ]; then
22:38:33      _EXT="-P${CI_PROJECT_ID}-${CI_PIPELINE_ID}"
22:38:33      echo "Copying ${final_image_url} to ${final_image_url}${_EXT}"
22:38:33      copy_image "${final_image_url}" "${final_image_url}${_EXT}"
22:38:33      for _TAG in $CONTAINER_IMAGE_TAG; do
22:38:33        echo "Copying ${final_image_url} to ${final_image_name}:${_TAG}${_EXT}"
22:38:33        copy_image "${final_image_url}" "${final_image_name}:${_TAG}${_EXT}"
22:38:33      done
22:38:33    fi
22:38:33    exit 0
22:38:33  fi
22:38:33  section_end:1778020712:section_script_step_12
22:38:33  Image already exists, skip the build.
22:38:33  registry.scandit.com/internal/gitlab-templates:790e1277fd4fdf685cc65a116ad249da7f59d1cd0e013016720d24eed74c8d58
22:38:33  Copying registry.scandit.com/internal/gitlab-templates:790e1277fd4fdf685cc65a116ad249da7f59d1cd0e013016720d24eed74c8d58 to registry.scandit.com/internal/gitlab-templates:latest-MR639
22:38:33  registry.scandit.com/internal/gitlab-templates:790e1277fd4fdf685cc65a116ad249da7f59d1cd0e013016720d24eed74c8d58
22:38:33  image outdated, overwriting with newest version
22:38:33  2026/05/05 22:38:33 Copying from registry.scandit.com/internal/gitlab-templates:790e1277fd4fdf685cc65a116ad249da7f59d1cd0e013016720d24eed74c8d58 to registry.scandit.com/internal/gitlab-templates:latest-MR639
22:38:34  2026/05/05 22:38:33 existing blob: sha256:2933ecab0f11302fd71f29aa83ad2904683246f7a8320ad0dc3a60b23f05fee9
22:38:34  2026/05/05 22:38:33 existing blob: sha256:8fcda2b4d7993820b00c5488d173051e76d01ba6b85620617ba77001b0f9e2fa
22:38:34  2026/05/05 22:38:33 existing blob: sha256:e5203b2bfeff92b72e816dc6cbb1f16856f0cd592e521e3c0cfa195a78fe180e
22:38:34  2026/05/05 22:38:33 existing blob: sha256:db53381ee51f9e43304e236099ba097ae1b33ae41a8007e0b6319992eb55fd00
22:38:34  2026/05/05 22:38:33 existing blob: sha256:dfc792c67fd1c4f6f03f68173f31ce330935554f04aab717b390b7398f83e6c8
22:38:34  2026/05/05 22:38:33 existing blob: sha256:7e2b65e636fe1d2e8e87a94742e1f5a0f1174af50bc1930967df2d8f9d6e311a
22:38:34  2026/05/05 22:38:33 existing blob: sha256:6ada59ee1c4457a2478c31272454fb7a47283d9c90904ea2e9479488c4948f68
22:38:34  2026/05/05 22:38:33 existing blob: sha256:89732bc7504122601f40269fc9ddfb70982e633ea9caf641ae45736f2846b004
22:38:34  2026/05/05 22:38:33 existing blob: sha256:b701a5ba06fc9915d80093bf64473365613bedeb90a3bdc1b8d7ad255624d853
22:38:34  2026/05/05 22:38:33 existing blob: sha256:61152fe11b1910fd88b6cb94ac3def47843931bbde139c5b5d6f4873be09b337
22:38:34  2026/05/05 22:38:33 existing blob: sha256:2f62b52729a6b51cb6eae80480e22c8b009b7a710bb439726e6150accab169fd
22:38:34  2026/05/05 22:38:33 existing blob: sha256:5663813363de8d48681cc5339ac095a599777fa9d3ab0cd34cafc25ff51eb9d6
22:38:34  2026/05/05 22:38:33 existing blob: sha256:676556b2f906405f5be6e1221038949827cb26ce269580f283648e0d2dc6ffa0
22:38:34  2026/05/05 22:38:33 existing blob: sha256:0117f8fbedf883e99a7ede71eb566072ffe003ef529a01c2ddfd1190bd53e083
22:38:34  2026/05/05 22:38:34 registry.scandit.com/internal/gitlab-templates:latest-MR639: digest: sha256:e1b2556a9ef7f0b4306ccf42e0576a567f09a5f4724874622e3b0858a0ac6523 size: 3698
22:38:34  2026/05/05 22:38:34 Copying from registry.scandit.com/internal/gitlab-templates:790e1277fd4fdf685cc65a116ad249da7f59d1cd0e013016720d24eed74c8d58 to registry.scandit.com/internal/gitlab-templates:latest-MR639
22:38:34  2026/05/05 22:38:34 existing manifest: latest-MR639@sha256:e1b2556a9ef7f0b4306ccf42e0576a567f09a5f4724874622e3b0858a0ac6523

22:38:34  Scout Analysis: https://scout.scandit.io/analysis/projects/621/jobs/54442871


22:38:34  Grafana Pod-View: https://grafana.scandit.com/d/k8s_views_pods/kubernetes-views-pods?orgId=1&refresh=1m&var-datasource=lu1rmx27z&var-host=ip-10-0-27-85.eu-central-1.compute.internal&var-namespace=gitlab-runner&var-pod=runner-wrxjpbsjx-project-621-concurrent-6-lvh1v06x&var-resolution=15&from=1778020712000&to=1778020714000
22:38:34  Grafana Node-View: https://grafana.scandit.com/d/k8s_views_nodes/kubernetes-views-nodes?orgId=1&refresh=1m&var-datasource=lu1rmx27z&var-node=ip-10-0-27-85.eu-central-1.compute.internal&var-resolution=15s&from=1778020712000&to=1778020714000
22:38:34  Loki Logs: https://grafana.scandit.com/a/grafana-lokiexplore-app/explore/log_group/gitlab-runner/logs?var-ds=nVsAo7UVk&var-filters=log_group|=|gitlab-runner&var-filters=source|=|k8s-ci.aws.scandit.io&var-filters=namespace|=|gitlab-runner&var-filters=CI_PROJECT_ID|=|621&var-filters=CI_PIPELINE_ID|=|1580356&var-filters=CI_JOB_ID|=|54442871&sortOrder=Ascending&from=1778020712000&to=1778020714000
22:38:34  date: invalid date '-7 days'
22:38:34  date: invalid date '+7 days'
22:38:34  Lilibet Statistics: https://lilibet.scandit.io/dashboard/204-job-drill-down?date_range=~&job_name=build-python3-image-no-reqs&project=internal/gitlab-templates


22:38:34  section_end:1778020714:step_script
22:38:34  section_start:1778020714:upload_artifacts_on_success
22:38:34  Uploading artifacts for successful job
22:38:35  Uploading artifacts...
22:38:35  docker_image_build.env: found 1 matching artifact files and directories
22:38:35  Uploading artifacts as "dotenv" to coordinator... 201 Created correlation_id=01KQX4P2HRV84T60QFBG3X1141 id=54442871 responseStatus=201 Created token=64_csky_Y

22:38:35  section_end:1778020715:upload_artifacts_on_success
22:38:35  section_start:1778020715:cleanup_file_variables
22:38:35  Cleaning up project directory and file based variables

22:38:36  section_end:1778020716:cleanup_file_variables

22:38:36  Job succeeded