I have an Argo Workflow that iterates over a JSON array. When the list gets too large, I get an error like this:
time="some-time" level=fatal msg="Pod \"some-pod-name\" is invalid: metadata.annotations: Too long: must have at most 262144 characters"
Or, on newer versions of Argo:
Output is larger than the maximum allowed size of 256 kB, only the last 256 kB were saved
How can I iterate over this large JSON array without hitting the size limit?
My workflow looks something like this, but with a much larger JSON array:
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: loops-sequence-
spec:
  entrypoint: loops-sequence
  templates:
    - name: loops-sequence
      steps:
        - - name: get-items
            template: get-items
        - - name: sequence-param
            template: echo
            arguments:
              parameters:
                - name: item
                  value: "{{item}}"
            withParam: "{{steps.get-items.outputs.parameters.items}}"
    - name: get-items
      container:
        image: alpine:latest
        command: ["/bin/sh", "-c"]
        args: ["echo '[\"a\", \"b\", \"c\"]' > /tmp/items"]
      outputs:
        parameters:
          - name: items
            valueFrom:
              path: /tmp/items
    - name: echo
      inputs:
        parameters:
          - name: item
      container:
        image: stedolan/jq:latest
        command: [echo, "{{inputs.parameters.item}}"]
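For reference, the error is reproducible by making the `get-items` step emit an array larger than the limit. A minimal sketch, assuming `python3` is available in the container image (the `item-N` names are made up for illustration):

```shell
# Generate a JSON array of ~30,000 made-up items and write it to the
# output-parameter path; the file ends up well over the 262144-character
# annotation limit, so loading it as an output parameter fails as above.
python3 -c 'import json; print(json.dumps(["item-%d" % i for i in range(30000)]))' > /tmp/items
wc -c < /tmp/items
```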