OK, now I understand: the problem is that the key name is the same in each iteration, so I made a "tricky" workaround: 1) I print the log file with cat -n so every line gets a unique number, 2) I use the key-value-data log filter with the regex (.*)\s:\s(.+), 3) I read the resulting data variables. Take a look:
The logfile:
URL : https://x.abc.com
NAMESPACE: test
URL : https://x.cde.com
Header : "content-type": "application/json"
OR
URL : https://test-x.cde.com
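To see why this produces unique keys, here is a plain-shell sketch (not Rundeck itself, just an emulation under simplifying assumptions) of what cat -n plus the (.*)\s:\s(.+) regex and the \s|\$|\{|\}|\\ invalidKeyPattern end up doing to the sample log above:

```shell
# Sketch only: emulate the key-value-data filter on the numbered log.
# Simplification: we split on " : " directly instead of applying the
# real regex engine, which is enough for the "URL" lines we keep.
cat > file.log <<'EOF'
URL : https://x.abc.com
NAMESPACE: test
URL : https://x.cde.com
Header : "content-type": "application/json"
OR
URL : https://test-x.cde.com
EOF

cat -n file.log | grep "URL" | while IFS= read -r line; do
  key="${line%% : *}"      # like capture group 1: everything before " : "
  value="${line#* : }"     # like capture group 2: everything after " : "
  # invalidKeyPattern replaces whitespace and $ { } \ with "_"
  safe_key="$(printf '%s' "$key" | sed -E 's/[[:space:]${}\\]/_/g')"
  printf 'data.%s = %s\n' "$safe_key" "$value"
done
# prints:
# data._____1_URL = https://x.abc.com
# data._____3_URL = https://x.cde.com
# data._____6_URL = https://test-x.cde.com
```

The line number that cat -n prepends (padded with spaces and followed by a tab) is what becomes the `_____1_`, `_____3_`, `_____6_` prefix once the invalid characters are replaced, so each "URL" key ends up unique.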
Job definition:
- defaultTab: nodes
  description: ''
  executionEnabled: true
  id: 57c94148-a120-4c50-96cb-cda506e4a7e7
  loglevel: INFO
  name: HelloWorld
  nodeFilterEditable: false
  notification:
    onstart:
      format: null
      httpMethod: null
      urls: http://google.com
  notifyAvgDurationThreshold: null
  plugins:
    ExecutionLifecycle: null
  scheduleEnabled: true
  sequence:
    commands:
    - fileExtension: .sh
      interpreterArgsQuoted: false
      plugins:
        LogFilter:
        - config:
            invalidKeyPattern: \s|\$|\{|\}|\\
            logData: 'true'
            regex: (.*)\s:\s(.+)
          type: key-value-data
      script: cat -n /Users/user/Downloads/file.log | grep "URL"
      scriptInterpreter: /bin/bash
    - exec: echo "first URL is ${data._____1_URL}"
    - exec: echo "second URL is ${data._____3_URL}"
    - exec: echo "third URL is ${data._____6_URL}"
    keepgoing: false
    strategy: node-first
  uuid: 57c94148-a120-4c50-96cb-cda506e4a7e7
Of course, this example can be improved a lot.
Hope it helps!
Hey Marky,
Basically, the idea is to add an ID to every data key that shares the same name ("URL" in the example). Let me post the job definition with the YAML indentation fixed (it was pasted pretty badly).
The log file (printed by the first step):
URL : https://x.abc.com
NAMESPACE: test
URL : https://x.cde.com
Header : "content-type": "application/json"
OR
URL : https://test-x.cde.com
The fixed job definition (in YAML format); you can import it and test it:
- defaultTab: nodes
  description: ''
  executionEnabled: true
  id: 8b753189-e851-4819-b54d-177d843de0e1
  loglevel: INFO
  name: HelloWorld
  nodeFilterEditable: false
  nodefilters:
    dispatch:
      excludePrecedence: true
      keepgoing: false
      rankOrder: ascending
      successOnEmptyNodeFilter: false
      threadcount: '1'
    filter: 'name: localhost '
  nodesSelectedByDefault: true
  plugins:
    ExecutionLifecycle: null
  scheduleEnabled: true
  sequence:
    commands:
    - description: 'just print the file content (the "logfile") and put an id on
        any "URL" entry'
      fileExtension: .sh
      interpreterArgsQuoted: false
      plugins:
        LogFilter:
        - config:
            invalidKeyPattern: \s|\$|\{|\}|\\
            logData: 'true'
            regex: (.*)\s:\s(.+)
          type: key-value-data
      script: cat -n /Users/variacode/Desktop/myfile.txt | grep "URL"
      scriptInterpreter: /bin/bash
    - description: print the values using data variables
      fileExtension: .sh
      interpreterArgsQuoted: false
      script: |-
        echo "first URL is @data._____1_URL@"
        echo "second URL is @data._____3_URL@"
        echo "third URL is @data._____6_URL@"
      scriptInterpreter: /bin/bash
    keepgoing: false
    strategy: node-first
  uuid: 8b753189-e851-4819-b54d-177d843de0e1
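Note that the second step is an inline script, so it uses the @data.key@ token notation rather than the ${data.key} shell-style notation used in exec steps. Roughly speaking, Rundeck substitutes those tokens in the script file before executing it; here is a plain-shell sketch of that substitution (the values are the ones from the sample log, filled in by hand, not by Rundeck):

```shell
# Sketch only (not Rundeck): emulate expanding @data.KEY@ tokens in an
# inline script before executing it, using values from the sample log.
cat > step.sh <<'EOF'
echo "first URL is @data._____1_URL@"
echo "second URL is @data._____3_URL@"
echo "third URL is @data._____6_URL@"
EOF

sed -e 's|@data\._____1_URL@|https://x.abc.com|' \
    -e 's|@data\._____3_URL@|https://x.cde.com|' \
    -e 's|@data\._____6_URL@|https://test-x.cde.com|' \
    step.sh > expanded.sh

bash expanded.sh
# prints:
# first URL is https://x.abc.com
# second URL is https://x.cde.com
# third URL is https://test-x.cde.com
```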
Regards.