build: success

Duration: 1m 38s (queued: 4s)
Stage: build
Runner: linux-aws-1
Average duration: 27s (this job: 1m 38s)
Failure rate (last 30 days): 16.8%
Job Analysis: Passed (job passed successfully)

Full Job Log (302 lines):
22:37:32 Running with gitlab-runner 18.9.0 (07e534ba)
22:37:32 on gitlab-runner-linux-1-746bdd58fd-qdj4w wRxjPbsJX, system ID: r_BzoYrcI9lIJE
22:37:32 feature flags: FF_USE_FASTZIP:true, FF_USE_NEW_BASH_EVAL_STRATEGY:true, FF_USE_DYNAMIC_TRACE_FORCE_SEND_INTERVAL:true, FF_SCRIPT_SECTIONS:true, FF_USE_ADVANCED_POD_SPEC_CONFIGURATION:true, FF_PRINT_POD_EVENTS:true, FF_USE_DUMB_INIT_WITH_KUBERNETES_EXECUTOR:true, FF_LOG_IMAGES_CONFIGURED_FOR_JOB:true, FF_CLEAN_UP_FAILED_CACHE_EXTRACT:true, FF_GIT_URLS_WITHOUT_TOKENS:true, FF_WAIT_FOR_POD_TO_BE_REACHABLE:true, FF_USE_FLEETING_ACQUIRE_HEARTBEATS:true, FF_USE_JOB_ROUTER:true
22:37:32 Resolving secrets
22:37:32 section_start:1778020652:prepare_executor
22:37:32 Preparing the "kubernetes" executor
22:37:32 "CPURequest" overwritten with "2"
22:37:32 "MemoryRequest" overwritten with "4G"
22:37:32 Using Kubernetes namespace: gitlab-runner
22:37:32 Using Kubernetes executor with image registry.scandit.com/dockerfiles/kaniko:v1.27.3-crane@sha256:72bdc063db14f38a45910d33ccf066ecb088d4833fb2437fef336e49b81fd4ac ...
22:37:32 Using attach strategy to execute scripts...
22:37:32 Using effective pull policy of [Always] for container build
22:37:32 Using effective pull policy of [Always] for container helper
22:37:32 Using effective pull policy of [Always] for container init-permissions
22:37:32 section_end:1778020652:prepare_executor
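The `section_start:<unix-ts>:<name>` / `section_end:<unix-ts>:<name>` markers throughout this log delimit the job's phases. As a minimal sketch (assuming plain markers without the bracketed `[options]` suffix some sections carry), phase durations can be recovered from the raw log with awk:

```shell
# Pair up section_start/section_end markers and print each phase's duration.
# Two marker lines from this log are inlined as sample input.
printf '%s\n' \
  'section_start:1778020652:prepare_script' \
  'section_end:1778020741:prepare_script' |
awk -F: '
  $1 == "section_start" { start[$3] = $2 }                      # remember start epoch per name
  $1 == "section_end"   { printf "%s: %ds\n", $3, $2 - start[$3] }  # end epoch minus start
'
```

On the sample markers this prints `prepare_script: 89s`, matching the gap between this job's `prepare_script` start and end timestamps.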
22:37:32 section_start:1778020652:prepare_script
22:37:32 Preparing environment
22:37:32 Using FF_USE_POD_ACTIVE_DEADLINE_SECONDS, the Pod activeDeadlineSeconds will be set to the job timeout: 1h0m0s...
22:37:32 WARNING: Advanced Pod Spec configuration enabled, merging the provided PodSpec to the generated one. This is a beta feature and is subject to change. Feedback is collected in this issue: https://gitlab.com/gitlab-org/gitlab-runner/-/issues/29659 ...
22:37:33 Subscribing to Kubernetes Pod events...
22:37:33 Type Reason Message
22:37:33 Warning FailedScheduling 0/30 nodes are available: 1 node(s) had untolerated taint {scandit.io/clickhouse: production}, 1 node(s) had untolerated taint {scandit.io/clickhouse: staging}, 1 node(s) had untolerated taint {scandit.io/sonarqube: dedicated}, 21 node(s) didn't match Pod's node affinity/selector, 6 Insufficient cpu. preemption: 0/30 nodes are available: 24 Preemption is not helpful for scheduling, 6 No preemption victims found for incoming pod.
22:37:34 Warning FailedScheduling 0/30 nodes are available: 1 node(s) had untolerated taint {scandit.io/clickhouse: production}, 1 node(s) had untolerated taint {scandit.io/clickhouse: staging}, 1 node(s) had untolerated taint {scandit.io/sonarqube: dedicated}, 21 node(s) didn't match Pod's node affinity/selector, 6 Insufficient cpu. preemption: 0/30 nodes are available: 24 Preemption is not helpful for scheduling, 6 No preemption victims found for incoming pod.
22:37:35 Warning FailedScheduling 0/30 nodes are available: 1 node(s) had untolerated taint {scandit.io/clickhouse: production}, 1 node(s) had untolerated taint {scandit.io/clickhouse: staging}, 1 node(s) had untolerated taint {scandit.io/sonarqube: dedicated}, 21 node(s) didn't match Pod's node affinity/selector, 6 Insufficient cpu. preemption: 0/30 nodes are available: 24 Preemption is not helpful for scheduling, 6 No preemption victims found for incoming pod.
22:37:35 Warning FailedScheduling 0/30 nodes are available: 1 node(s) had untolerated taint {scandit.io/clickhouse: production}, 1 node(s) had untolerated taint {scandit.io/clickhouse: staging}, 1 node(s) had untolerated taint {scandit.io/sonarqube: dedicated}, 21 node(s) didn't match Pod's node affinity/selector, 6 Insufficient cpu. preemption: 0/30 nodes are available: 24 Preemption is not helpful for scheduling, 6 No preemption victims found for incoming pod.
22:37:37 Normal TriggeredScaleUp pod triggered scale-up: [{eks-ondemand-ci-x86-m6-32C-128G-v2-1a-f4ce9782-beda-81ca-d810-bec0ae0347a1 1->2 (max: 30)}]
22:37:57 Warning FailedScheduling 0/30 nodes are available: 1 node(s) had untolerated taint {scandit.io/clickhouse: production}, 1 node(s) had untolerated taint {scandit.io/clickhouse: staging}, 1 node(s) had untolerated taint {scandit.io/sonarqube: dedicated}, 21 node(s) didn't match Pod's node affinity/selector, 6 Insufficient cpu. preemption: 0/30 nodes are available: 24 Preemption is not helpful for scheduling, 6 No preemption victims found for incoming pod.
22:37:57 Warning FailedScheduling 0/30 nodes are available: 1 node(s) had untolerated taint {scandit.io/clickhouse: production}, 1 node(s) had untolerated taint {scandit.io/clickhouse: staging}, 1 node(s) had untolerated taint {scandit.io/sonarqube: dedicated}, 21 node(s) didn't match Pod's node affinity/selector, 6 Insufficient cpu. preemption: 0/30 nodes are available: 24 Preemption is not helpful for scheduling, 6 No preemption victims found for incoming pod.
22:37:58 Warning FailedScheduling 0/31 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }, 1 node(s) had untolerated taint {scandit.io/clickhouse: production}, 1 node(s) had untolerated taint {scandit.io/clickhouse: staging}, 1 node(s) had untolerated taint {scandit.io/sonarqube: dedicated}, 21 node(s) didn't match Pod's node affinity/selector, 6 Insufficient cpu. preemption: 0/31 nodes are available: 25 Preemption is not helpful for scheduling, 6 No preemption victims found for incoming pod.
22:37:59 Warning FailedScheduling 0/31 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }, 1 node(s) had untolerated taint {scandit.io/clickhouse: production}, 1 node(s) had untolerated taint {scandit.io/clickhouse: staging}, 1 node(s) had untolerated taint {scandit.io/sonarqube: dedicated}, 21 node(s) didn't match Pod's node affinity/selector, 6 Insufficient cpu. preemption: 0/31 nodes are available: 25 Preemption is not helpful for scheduling, 6 No preemption victims found for incoming pod.
22:38:07 Warning FailedScheduling 0/31 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }, 1 node(s) had untolerated taint {scandit.io/clickhouse: production}, 1 node(s) had untolerated taint {scandit.io/clickhouse: staging}, 1 node(s) had untolerated taint {scandit.io/sonarqube: dedicated}, 21 node(s) didn't match Pod's node affinity/selector, 6 Insufficient cpu. preemption: 0/31 nodes are available: 25 Preemption is not helpful for scheduling, 6 No preemption victims found for incoming pod.
22:38:08 Warning FailedScheduling 0/31 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }, 1 node(s) had untolerated taint {scandit.io/clickhouse: production}, 1 node(s) had untolerated taint {scandit.io/clickhouse: staging}, 1 node(s) had untolerated taint {scandit.io/sonarqube: dedicated}, 21 node(s) didn't match Pod's node affinity/selector, 6 Insufficient cpu. preemption: 0/31 nodes are available: 25 Preemption is not helpful for scheduling, 6 No preemption victims found for incoming pod.
22:38:09 Normal Scheduled Successfully assigned gitlab-runner/runner-wrxjpbsjx-project-621-concurrent-3-8h1z0ro8 to ip-10-0-27-85.eu-central-1.compute.internal
22:38:09 Normal TaintManagerEviction Cancelling deletion of Pod gitlab-runner/runner-wrxjpbsjx-project-621-concurrent-3-8h1z0ro8
22:38:11 Normal Pulling Pulling image "gitlab/gitlab-runner-helper:x86_64-v18.8.0"
22:38:13 Normal Pulled Successfully pulled image "gitlab/gitlab-runner-helper:x86_64-v18.8.0" in 1.865s (1.865s including waiting). Image size: 39949060 bytes.
22:38:13 Normal Created Created container: init-permissions
22:38:15 Normal Started Started container init-permissions
22:38:17 Normal Pulling Pulling image "498954711405.dkr.ecr.eu-central-1.amazonaws.com/dockerfiles/kaniko@sha256:72bdc063db14f38a45910d33ccf066ecb088d4833fb2437fef336e49b81fd4ac"
22:38:55 Normal Pulled Successfully pulled image "498954711405.dkr.ecr.eu-central-1.amazonaws.com/dockerfiles/kaniko@sha256:72bdc063db14f38a45910d33ccf066ecb088d4833fb2437fef336e49b81fd4ac" in 37.74s (37.74s including waiting). Image size: 49989654 bytes.
22:38:56 Normal Created Created container: build
22:38:56 Normal Started Started container build
22:38:56 Normal Pulled Container image "gitlab/gitlab-runner-helper:x86_64-v18.8.0" already present on machine
22:38:57 Normal Created Created container: helper
22:38:58 Normal Started Started container helper
22:39:01 Running on runner-wrxjpbsjx-project-621-concurrent-3-8h1z0ro8 via gitlab-runner-linux-1-746bdd58fd-qdj4w...

22:39:01 section_end:1778020741:prepare_script
22:39:01 section_start:1778020741:get_sources
22:39:01 Getting source from Git repository
22:39:02 Gitaly correlation ID: 01KQX4M54M8GC0XDVG2KT59A2G
22:39:02 Fetching changes with git depth set to 1...
22:39:02 Initialized empty Git repository in /build/internal/gitlab-templates/.git/
22:39:02 Created fresh repository.
22:39:04 Checking out 13881c4b as detached HEAD (ref is refs/merge-requests/638/merge)...

22:39:04 Skipping Git submodules setup

22:39:04 section_end:1778020744:get_sources
22:39:04 section_start:1778020744:step_script
22:39:04 Executing "step_script" stage of the job script
22:39:04 section_start:1778020744:section_pre_build_script_0[hide_duration=true,collapsed=true]
$ function cleanup {
22:39:04 rv=$?
22:39:04 if [ $rv -ne 0 ]; then
22:39:04 echo ""
22:39:04 echo " Failure Cause Analysis might help, please open this link:"
22:39:04 echo " https://scout.scandit.io/analysis/projects/${CI_PROJECT_ID}/jobs/${CI_JOB_ID}"
22:39:04 echo ""
22:39:04 fi
22:39:04 echo ""
22:39:04 echo "Scout Analysis: https://scout.scandit.io/analysis/projects/${CI_PROJECT_ID}/jobs/${CI_JOB_ID}"
22:39:04 echo ""
22:39:04 echo ""
22:39:04 echo "Grafana Pod-View: https://grafana.scandit.com/d/k8s_views_pods/kubernetes-views-pods?orgId=1&refresh=1m&var-datasource=${GRAFANA_DATASOURCE}&var-host=${SC_K8S_NODE_NAME}&var-namespace=${SC_K8S_NAMESPACE}&var-pod=${HOSTNAME}&var-resolution=15&from=${__start_time}000&to=${EPOCHSECONDS}000"
22:39:04 echo "Grafana Node-View: https://grafana.scandit.com/d/k8s_views_nodes/kubernetes-views-nodes?orgId=1&refresh=1m&var-datasource=${GRAFANA_DATASOURCE}&var-node=${SC_K8S_NODE_NAME}&var-resolution=15s&from=${__start_time}000&to=${EPOCHSECONDS}000"
22:39:04 echo "Loki Logs: https://grafana.scandit.com/a/grafana-lokiexplore-app/explore/log_group/gitlab-runner/logs?var-ds=${LOKI_DATASOURCE}&var-filters=log_group|=|gitlab-runner&var-filters=source|=|${LOKI_LOGSOURCE}&var-filters=namespace|=|${SC_K8S_NAMESPACE}&var-filters=CI_PROJECT_ID|=|${CI_PROJECT_ID}&var-filters=CI_PIPELINE_ID|=|${CI_PIPELINE_ID}&var-filters=CI_JOB_ID|=|${CI_JOB_ID}&sortOrder=Ascending&from=${__start_time}000&to=${EPOCHSECONDS}000"
22:39:04 echo "Lilibet Statistics: https://lilibet.scandit.io/dashboard/204-job-drill-down?date_range=$(date -d '-7 days' +%Y-%m-%d)~$(date -d '+7 days' +%Y-%m-%d)&job_name=${CI_JOB_NAME}&project=${CI_PROJECT_PATH}"
22:39:04 echo ""
22:39:04 exit $rv
22:39:04 }
22:39:04 trap cleanup EXIT
22:39:04 echo "INFO: This is the CI job pre_build_script"
22:39:04 echo "INFO: It's defined in the backend/infra/aws repo."
22:39:04 echo "INFO: These additional Scandit variables are available to you:"
22:39:04 echo " SC_K8S_NODE_NAME: $SC_K8S_NODE_NAME"
22:39:04 echo " SC_K8S_IMAGE_ID: $SC_K8S_IMAGE_ID"
22:39:04 echo " SC_K8S_KYVERNO_PATCHES: |"
22:39:04 echo "$SC_K8S_KYVERNO_PATCHES" | sed 's/^/ /'
22:39:04 echo "cpu (r/l): ${SC_K8S_REQUESTS_CPU}/${SC_K8S_LIMITS_CPU}"
22:39:04 if command -v numfmt >/dev/null 2>&1; then
22:39:04 echo "memory (r/l): $(numfmt --to=iec --suffix=B $SC_K8S_REQUESTS_MEMORY)/$(numfmt --to=iec --suffix=B $SC_K8S_LIMITS_MEMORY)"
22:39:04 else
22:39:04 echo "memory (r/l): ${SC_K8S_REQUESTS_MEMORY}/${SC_K8S_LIMITS_MEMORY}"
22:39:04 fi
22:39:04 __start_time=${EPOCHSECONDS}
22:39:04 echo ""
22:39:04 echo "Grafana Pod-View: https://grafana.scandit.com/d/k8s_views_pods/kubernetes-views-pods?orgId=1&refresh=1m&var-datasource=${GRAFANA_DATASOURCE}&var-host=${SC_K8S_NODE_NAME}&var-namespace=${SC_K8S_NAMESPACE}&var-pod=${HOSTNAME}&var-resolution=15&from=${__start_time}000&to=now"
22:39:04 echo "Grafana Node-View: https://grafana.scandit.com/d/k8s_views_nodes/kubernetes-views-nodes?orgId=1&refresh=1m&var-datasource=${GRAFANA_DATASOURCE}&var-node=${SC_K8S_NODE_NAME}&var-resolution=15s&from=${__start_time}000&to=now"
22:39:04 echo "Loki Logs: https://grafana.scandit.com/a/grafana-lokiexplore-app/explore/log_group/gitlab-runner/logs?var-ds=${LOKI_DATASOURCE}&var-filters=log_group|=|gitlab-runner&var-filters=source|=|${LOKI_LOGSOURCE}&var-filters=namespace|=|${SC_K8S_NAMESPACE}&var-filters=CI_PROJECT_ID|=|${CI_PROJECT_ID}&var-filters=CI_PIPELINE_ID|=|${CI_PIPELINE_ID}&var-filters=CI_JOB_ID|=|${CI_JOB_ID}&sortOrder=Ascending&from=${__start_time}000&to=now"
22:39:04 echo "Lilibet Statistics: https://lilibet.scandit.io/dashboard/204-job-drill-down?date_range=$(date -d '-7 days' +%Y-%m-%d)~$(date -d '+7 days' +%Y-%m-%d)&job_name=${CI_JOB_NAME}&project=${CI_PROJECT_PATH}"
22:39:04 echo ""
22:39:04 echo "Setting up credentials for Gitlab Python registries"
22:39:04 mkdir -p ~
22:39:04 echo "machine gitlab.scandit.com" > ~/.netrc
22:39:04 echo "login gitlab-ci-token" >> ~/.netrc
22:39:04 echo "password ${CI_JOB_TOKEN}" >> ~/.netrc
22:39:04 chmod 600 ~/.netrc
22:39:04 if command -v git &> /dev/null && [ "$(id -u)" -ne 0 ]; then
22:39:04 git config --global --add safe.directory $CI_PROJECT_DIR
22:39:04 fi
22:39:04 # Sonarqube server is running on the same cluster. Use internal address
22:39:04 export SONAR_HOST_URL="http://sonarqube.sonarqube.svc.cluster.local:9000"
22:39:04 section_end:1778020744:section_pre_build_script_0
22:39:04 INFO: This is the CI job pre_build_script
22:39:04 INFO: It's defined in the backend/infra/aws repo.
22:39:04 INFO: These additional Scandit variables are available to you:
22:39:04 SC_K8S_NODE_NAME: ip-10-0-27-85.eu-central-1.compute.internal
22:39:04 SC_K8S_IMAGE_ID:
22:39:04 SC_K8S_KYVERNO_PATCHES: |

22:39:04 cpu (r/l): 2/4
22:39:04 memory (r/l): 4000000000/17179869184

22:39:04 Grafana Pod-View: https://grafana.scandit.com/d/k8s_views_pods/kubernetes-views-pods?orgId=1&refresh=1m&var-datasource=lu1rmx27z&var-host=ip-10-0-27-85.eu-central-1.compute.internal&var-namespace=gitlab-runner&var-pod=runner-wrxjpbsjx-project-621-concurrent-3-8h1z0ro8&var-resolution=15&from=1778020744000&to=now
22:39:04 Grafana Node-View: https://grafana.scandit.com/d/k8s_views_nodes/kubernetes-views-nodes?orgId=1&refresh=1m&var-datasource=lu1rmx27z&var-node=ip-10-0-27-85.eu-central-1.compute.internal&var-resolution=15s&from=1778020744000&to=now
22:39:04 Loki Logs: https://grafana.scandit.com/a/grafana-lokiexplore-app/explore/log_group/gitlab-runner/logs?var-ds=nVsAo7UVk&var-filters=log_group|=|gitlab-runner&var-filters=source|=|k8s-ci.aws.scandit.io&var-filters=namespace|=|gitlab-runner&var-filters=CI_PROJECT_ID|=|621&var-filters=CI_PIPELINE_ID|=|1580354&var-filters=CI_JOB_ID|=|54442861&sortOrder=Ascending&from=1778020744000&to=now
22:39:04 date: invalid date '-7 days'
22:39:04 date: invalid date '+7 days'
22:39:04 Lilibet Statistics: https://lilibet.scandit.io/dashboard/204-job-drill-down?date_range=~&job_name=build&project=internal/gitlab-templates
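The `date: invalid date '-7 days'` errors above come from a `date` implementation (the kaniko image ships BusyBox) that does not understand GNU's relative `-d '-7 days'` syntax, which is why the Lilibet URL ends up with an empty `date_range=~`. A sketch of a portable replacement, assuming the target `date` accepts either `-d @epoch` (BusyBox/GNU) or `-r epoch` (BSD), doing the ±7-day arithmetic in the shell instead:

```shell
# Portable 7-days-before/after range without GNU date's relative syntax.
# Only `date +%s` and an epoch-to-date conversion are required.
now=$(date +%s)
week=$((7 * 86400))
# Try BusyBox/GNU `-d @epoch` first, fall back to BSD `-r epoch`.
from=$(date -u -d "@$((now - week))" +%Y-%m-%d 2>/dev/null \
    || date -u -r "$((now - week))" +%Y-%m-%d)
to=$(date -u -d "@$((now + week))" +%Y-%m-%d 2>/dev/null \
    || date -u -r "$((now + week))" +%Y-%m-%d)
echo "date_range=${from}~${to}"
```

With something like this in the pre_build_script, the dashboard link would carry an actual two-week window instead of `date_range=~`.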

22:39:04 Setting up credentials for Gitlab Python registries
22:39:04 $ echo $DOCKER_CONFIG_JSON > /kaniko/.docker/config.json
22:39:05 $ mv /root/.netrc /kaniko/.netrc
22:39:05 section_start:1778020745:section_script_step_2[hide_duration=true,collapsed=true]
$ function copy_files() {
22:39:05 local src="$1"
22:39:05 local trg="$2"
22:39:05 for f in $src; do
22:39:05 t="$trg/`dirname $f`"
22:39:05 mkdir -p $t || true
22:39:05 echo "Copy $f"
22:39:05 cp -pr $f $trg/$f
22:39:05 done
22:39:05 }
22:39:05 function recursive_hash() {
22:39:05 local dir="$1"
22:39:05 find "$dir" -exec stat -c '%F|%a|%u:%g|%n' {} + -type f -exec sha256sum {} + | sort | sha256sum | cut -d ' ' -f1
22:39:05 }
22:39:05 function remote_docker_digest() {
22:39:05 local images="$1"
22:39:05 echo $images | xargs -n 1 crane digest
22:39:05 }
22:39:05 function remote_image_exists() {
22:39:05 local image="$1"
22:39:05 crane manifest $image > /dev/null 2>&1
22:39:05 }
22:39:05 function remote_images_are_identical() {
22:39:05 local imageA="$1"
22:39:05 local imageB="$2"
22:39:05 if [[ $(remote_docker_digest "$imageA") == $(remote_docker_digest "$imageB") ]]; then
22:39:05 return 0
22:39:05 else
22:39:05 return 1
22:39:05 fi
22:39:05 }
22:39:05 function copy_image() {
22:39:05 local image="$1"
22:39:05 local remotes="$2"
22:39:05 local backup_ext="$3"
22:39:05 echo "$image"
22:39:05 local source_digest=$(remote_docker_digest $image)
22:39:05 local target_digest
22:39:05 for registry in $remotes; do
22:39:05 if target_digest=$(remote_docker_digest $registry); then
22:39:05 if [ "$target_digest" != "$source_digest" ]; then
22:39:05 echo "image outdated, overwriting with newest version"
22:39:05 crane copy $image $registry
22:39:05 crane copy $image ${registry}${backup_ext}
22:39:05 fi
22:39:05 else
22:39:05 echo "image does not exist, writing newest version"
22:39:05 crane copy $image $registry
22:39:05 crane copy $image ${registry}${backup_ext}
22:39:05 fi
22:39:05 done
22:39:05 }
22:39:05 section_end:1778020745:section_script_step_2
22:39:05 section_start:1778020745:section_script_step_3[hide_duration=true,collapsed=true]
$ if [ "$CONTAINER_SUBDIR" != "" ]; then
22:39:05 echo "Entering subpath $CONTAINER_SUBDIR"
22:39:05 cd $CONTAINER_SUBDIR
22:39:05 fi
22:39:05 section_end:1778020745:section_script_step_3
22:39:05 Entering subpath sc-uv-example
22:39:05 $ copy_files "$CONTAINER_IMPLICIT_REQUIREMENTS $CONTAINER_REQUIREMENTS" "$CONTAINER_CONTEXT_PATH"
22:39:05 Copy /build/internal/gitlab-templates/Dockerfile.uv
22:39:05 Copy uv.lock
22:39:05 Copy pyproject.toml
22:39:05 $ echo "$CONTAINER_BUILD_ENVIRONMENT" > $CONTAINER_CONTEXT_PATH/.docker-build-env
22:39:05 $ docker_checksum=$(recursive_hash $CONTAINER_CONTEXT_PATH)
22:39:05 section_start:1778020745:section_script_step_7[hide_duration=true,collapsed=true]
$ if [ "$CONTAINER_IMAGE_NAME" == "" ]; then
22:39:05 final_image_name=${CONTAINER_IMAGE_URL}
22:39:05 else
22:39:05 final_image_name=${CONTAINER_IMAGE_URL}/${CONTAINER_IMAGE_NAME}
22:39:05 fi
22:39:05 section_end:1778020745:section_script_step_7
22:39:05 $ final_image_url=${final_image_name}:${docker_checksum}
22:39:05 section_start:1778020745:section_script_step_9[hide_duration=true,collapsed=true]
$ if [ "${PIPELINE_IMAGE_REFS}" == "1" ]; then
22:39:05 echo $CONTAINER_IMAGE_VARIABLE=${final_image_url}-P${CI_PROJECT_ID}-${CI_PIPELINE_ID} > $CI_PROJECT_DIR/docker_image_build.env
22:39:05 else
22:39:05 echo $CONTAINER_IMAGE_VARIABLE=$final_image_url > $CI_PROJECT_DIR/docker_image_build.env
22:39:05 fi
22:39:05 section_end:1778020745:section_script_step_9
22:39:05 $ echo ${CONTAINER_IMAGE_VARIABLE}_HASH=$docker_checksum >> $CI_PROJECT_DIR/docker_image_build.env
22:39:05 section_start:1778020745:section_script_step_11[hide_duration=true,collapsed=true]
$ if [ "${FORCE_BUILD}" != "true" ] || command -v crane &> /dev/null; then
22:39:05 echo $REGISTRY_PASSWORD | crane auth login $REGISTRY -u $REGISTRY_USER --password-stdin
22:39:05 fi
22:39:05 section_end:1778020745:section_script_step_11

22:39:05 WARNING! Your credentials are stored unencrypted in '/kaniko/.docker/config.json'.
22:39:05 Configure a credential helper to remove this warning. See
22:39:05 https://docs.docker.com/go/credential-store/

22:39:05 2026/05/05 22:39:05 logged in via /kaniko/.docker/config.json
22:39:05 section_start:1778020745:section_script_step_12[hide_duration=true,collapsed=true]
$ if [ "${FORCE_BUILD}" != "true" ] && remote_image_exists "$final_image_url"; then
22:39:05 echo "Image already exists, skip the build."
22:39:05 echo "$final_image_url"
22:39:05 if [[ "$CI_COMMIT_BRANCH" == "$CI_DEFAULT_BRANCH" ]]; then
22:39:05 _EXT=""
22:39:05 _BACKUP_EXT="-CI${CI_JOB_ID}-$(date '+%Y%m%d')"
22:39:05 elif [[ -n "$CI_MERGE_REQUEST_ID" ]]; then
22:39:05 _EXT="-MR${CI_MERGE_REQUEST_IID}"
22:39:05 _BACKUP_EXT=""
22:39:05 elif [[ "$CI_COMMIT_REF_PROTECTED" == "true" ]]; then
22:39:05 _EXT="-${CI_COMMIT_REF_SLUG}"
22:39:05 _BACKUP_EXT="-CI${CI_JOB_ID}-$(date '+%Y%m%d')"
22:39:05 fi
22:39:05 for _TAG in $CONTAINER_IMAGE_TAG; do
22:39:05 echo "Copying ${final_image_url} to ${final_image_name}:${_TAG}${_EXT}"
22:39:05 copy_image "${final_image_url}" "${final_image_name}:${_TAG}${_EXT}" "${_BACKUP_EXT}"
22:39:05 done
22:39:05 if [ "${PIPELINE_IMAGE_REFS}" == "1" ]; then
22:39:05 _EXT="-P${CI_PROJECT_ID}-${CI_PIPELINE_ID}"
22:39:05 echo "Copying ${final_image_url} to ${final_image_url}${_EXT}"
22:39:05 copy_image "${final_image_url}" "${final_image_url}${_EXT}"
22:39:05 for _TAG in $CONTAINER_IMAGE_TAG; do
22:39:05 echo "Copying ${final_image_url} to ${final_image_name}:${_TAG}${_EXT}"
22:39:05 copy_image "${final_image_url}" "${final_image_name}:${_TAG}${_EXT}"
22:39:05 done
22:39:05 fi
22:39:05 exit 0
22:39:05 fi
22:39:05 section_end:1778020745:section_script_step_12
22:39:06 Image already exists, skip the build.
22:39:06 registry.scandit.com/internal/gitlab-templates/sc-uv-example-dev:a0627644f5b19a5299950b9d06386a69ed2ca73d2843193725069ae656cd6167
22:39:06 Copying registry.scandit.com/internal/gitlab-templates/sc-uv-example-dev:a0627644f5b19a5299950b9d06386a69ed2ca73d2843193725069ae656cd6167 to registry.scandit.com/internal/gitlab-templates/sc-uv-example-dev:latest-MR638
22:39:06 registry.scandit.com/internal/gitlab-templates/sc-uv-example-dev:a0627644f5b19a5299950b9d06386a69ed2ca73d2843193725069ae656cd6167
22:39:06 2026/05/05 22:39:06 HEAD request failed, falling back on GET: HEAD https://registry.scandit.com/v2/internal/gitlab-templates/sc-uv-example-dev/manifests/latest-MR638: unexpected status code 404 Not Found (HEAD responses have no body, use GET for details)
22:39:07 Error: GET https://registry.scandit.com/v2/internal/gitlab-templates/sc-uv-example-dev/manifests/latest-MR638: MANIFEST_UNKNOWN: manifest unknown; map[Tag:latest-MR638]
22:39:07 image does not exist, writing newest version
22:39:07 2026/05/05 22:39:06 Copying from registry.scandit.com/internal/gitlab-templates/sc-uv-example-dev:a0627644f5b19a5299950b9d06386a69ed2ca73d2843193725069ae656cd6167 to registry.scandit.com/internal/gitlab-templates/sc-uv-example-dev:latest-MR638
22:39:07 2026/05/05 22:39:07 existing blob: sha256:0e188e6a6d38dcb0203c2a32ea25e2521594a7619ba8b508f5b695684b9878a7
22:39:07 2026/05/05 22:39:07 existing blob: sha256:b40150c1c2717d324cdb17278c8efdfa4dfcd2ffe083e976f0bcedf31115f081
22:39:07 2026/05/05 22:39:07 existing blob: sha256:5957ac4c1fdd681289811aec6fdcfdab5af82ebf72bb1d095726a3db3ae49601
22:39:07 2026/05/05 22:39:07 existing blob: sha256:55cab8de839fd4db3f8ccd4a0ec43ad27c5e112834e9e6ee2b1a68ebb49b64d0
22:39:07 2026/05/05 22:39:07 existing blob: sha256:25c95bc50f6df2aef8cc6217d03c85fdb351d25e009a73aad25e8a8b1f69dda7
22:39:07 2026/05/05 22:39:07 existing blob: sha256:03e82c1b170c1fe933744d07af0abf9f491c7c8ca6488768c31e0929aa5cdd0f
22:39:07 2026/05/05 22:39:07 existing blob: sha256:23d857050f81a653f157eae1b84232244f89a72838eac1f926d8dcdaa49e1a18
22:39:07 2026/05/05 22:39:07 existing blob: sha256:333d6461d4fea818d630d9d534403cde23af442fe22aee2568f203373413a61f
22:39:07 2026/05/05 22:39:07 existing blob: sha256:89732bc7504122601f40269fc9ddfb70982e633ea9caf641ae45736f2846b004
22:39:07 2026/05/05 22:39:07 existing blob: sha256:23692376f9d8e7bf19aa0d7ad97364e53b9dc7f852b54764581ceeb9c68fb8b3
22:39:07 2026/05/05 22:39:07 existing blob: sha256:b553736ae00de6e3abc63c1e57a3a70491cc286fc0c23728700f93bfd808c955
22:39:07 2026/05/05 22:39:07 existing blob: sha256:deca4280ae11ddb2615567c8e8cde06d7377084155c328300fdd2512d598735c
22:39:07 2026/05/05 22:39:07 existing blob: sha256:ff4a66cef1b3543840300341b88667bb3ffdf769593d6a81185338deef3aae3e
22:39:08 2026/05/05 22:39:07 registry.scandit.com/internal/gitlab-templates/sc-uv-example-dev:latest-MR638: digest: sha256:7226d2c1d441111125df9452fc5a3a1c77f959d69ea2f55dd3acb75fcf5923e0 size: 3972
22:39:08 2026/05/05 22:39:07 Copying from registry.scandit.com/internal/gitlab-templates/sc-uv-example-dev:a0627644f5b19a5299950b9d06386a69ed2ca73d2843193725069ae656cd6167 to registry.scandit.com/internal/gitlab-templates/sc-uv-example-dev:latest-MR638
22:39:08 2026/05/05 22:39:08 existing manifest: latest-MR638@sha256:7226d2c1d441111125df9452fc5a3a1c77f959d69ea2f55dd3acb75fcf5923e0

22:39:08 Scout Analysis: https://scout.scandit.io/analysis/projects/621/jobs/54442861

22:39:08 Grafana Pod-View: https://grafana.scandit.com/d/k8s_views_pods/kubernetes-views-pods?orgId=1&refresh=1m&var-datasource=lu1rmx27z&var-host=ip-10-0-27-85.eu-central-1.compute.internal&var-namespace=gitlab-runner&var-pod=runner-wrxjpbsjx-project-621-concurrent-3-8h1z0ro8&var-resolution=15&from=1778020744000&to=1778020748000
22:39:08 Grafana Node-View: https://grafana.scandit.com/d/k8s_views_nodes/kubernetes-views-nodes?orgId=1&refresh=1m&var-datasource=lu1rmx27z&var-node=ip-10-0-27-85.eu-central-1.compute.internal&var-resolution=15s&from=1778020744000&to=1778020748000
22:39:08 Loki Logs: https://grafana.scandit.com/a/grafana-lokiexplore-app/explore/log_group/gitlab-runner/logs?var-ds=nVsAo7UVk&var-filters=log_group|=|gitlab-runner&var-filters=source|=|k8s-ci.aws.scandit.io&var-filters=namespace|=|gitlab-runner&var-filters=CI_PROJECT_ID|=|621&var-filters=CI_PIPELINE_ID|=|1580354&var-filters=CI_JOB_ID|=|54442861&sortOrder=Ascending&from=1778020744000&to=1778020748000
22:39:08 date: invalid date '-7 days'
22:39:08 date: invalid date '+7 days'
22:39:08 Lilibet Statistics: https://lilibet.scandit.io/dashboard/204-job-drill-down?date_range=~&job_name=build&project=internal/gitlab-templates

22:39:08 section_end:1778020748:step_script
22:39:08 section_start:1778020748:upload_artifacts_on_success
22:39:08 Uploading artifacts for successful job
22:39:09 Uploading artifacts...
22:39:09 docker_image_build.env: found 1 matching artifact files and directories
22:39:09 Uploading artifacts as "dotenv" to coordinator... 201 Created correlation_id=01KQX4Q3QTV6ZSCTBZ6PHNNFFT id=54442861 responseStatus=201 Created token=64_q6p9kb

22:39:09 section_end:1778020749:upload_artifacts_on_success
22:39:09 section_start:1778020749:cleanup_file_variables
22:39:09 Cleaning up project directory and file based variables

22:39:10 section_end:1778020750:cleanup_file_variables
22:39:10 Job succeeded