Clarification on Nussknacker Job Deployment in Flink

Arjun Sivakumar

Sep 15, 2025, 1:40:30 PM
to Nussknacker

Hi Team,

I am currently reviewing how Nussknacker handles the deployment of job scenarios in Flink.

Based on my understanding, when a user selects Deploy, Nussknacker uploads the required JAR files to the Flink JobManager. Along with the JAR, the DAG of the scenario is also passed. Subsequently, the Flink REST API is used to submit the job, executing the JAR with the DAG as an argument.
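
For concreteness, I imagine the underlying Flink REST calls look roughly like the sketch below. This is only my own illustration, not Nussknacker's actual code; the JobManager address, JAR file name, entry class and program arguments are placeholders.

# A minimal sketch (not Nussknacker's actual code) of the Flink REST calls
# behind such a submission. The JobManager address, JAR file name, entry
# class and program arguments below are illustrative placeholders.
import requests

FLINK = "http://localhost:8081"  # JobManager REST endpoint (assumed)

# 1. Upload the executor JAR; Flink stores it and returns its id.
with open("scenario-executor.jar", "rb") as jar:
    upload = requests.post(
        f"{FLINK}/jars/upload",
        files={"jarfile": ("scenario-executor.jar", jar, "application/x-java-archive")},
    )
jar_id = upload.json()["filename"].split("/")[-1]

# 2. Run the uploaded JAR; the scenario (DAG) is passed via program arguments.
run = requests.post(
    f"{FLINK}/jars/{jar_id}/run",
    json={
        "entryClass": "com.example.ScenarioMain",            # placeholder
        "programArgsList": ["--scenario", "scenario.json"],  # placeholder
        "parallelism": 2,
    },
)
print("Started job:", run.json()["jobid"])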

Kindly confirm if my understanding is correct. If not, I would appreciate it if you could explain how these JAR files function internally and how they are transformed into a Flink pipeline.

Thank you for your support.

Arjun Sivakumar

Sep 16, 2025, 12:43:54 AM
to Nussknacker
Hi team,

Could anyone please assist with this? Your support would be greatly appreciated.

Thanks and Regards,
Arjun S

Arkadiusz Burdach

Sep 16, 2025, 4:53:04 AM
to Arjun Sivakumar, Nussknacker
Hi Arjun,

You are right. A few small nuances worth mentioning:
- The JAR is uploaded only once and is reused for subsequent deployments
- Besides the DAG, we also pass other things such as scenario metadata (version, labels, etc.) and node parameters provided by the user (for example, the starting offset for a Kafka source)
- When the user clicks Deploy (in the Cloud / new version, "Update") while the job is already running, a savepoint is taken, the old job is stopped, and the new job is started from that savepoint (see the sketch after this list)
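
For illustration, the stop-with-savepoint / restart sequence maps onto Flink's REST API roughly as in the sketch below. Nussknacker performs these steps internally; the job id, JAR id, savepoint directory and program arguments here are placeholders.

# A rough sketch of the stop-with-savepoint / restart flow over Flink's REST
# API. The job id, JAR id, savepoint directory and program arguments below
# are placeholders, not values Nussknacker actually uses.
import time
import requests

FLINK = "http://localhost:8081"   # JobManager REST endpoint (assumed)
OLD_JOB_ID = "<running-job-id>"   # placeholder
JAR_ID = "<uploaded-jar-id>"      # placeholder

# 1. Stop the currently running job, asking Flink to take a savepoint first.
trigger_id = requests.post(
    f"{FLINK}/jobs/{OLD_JOB_ID}/stop",
    json={"targetDirectory": "s3://savepoints/", "drain": False},
).json()["request-id"]

# 2. Poll the asynchronous operation until the savepoint is completed.
while True:
    result = requests.get(f"{FLINK}/jobs/{OLD_JOB_ID}/savepoints/{trigger_id}").json()
    if result["status"]["id"] == "COMPLETED":
        savepoint_path = result["operation"]["location"]
        break
    time.sleep(1)

# 3. Start the new scenario version from that savepoint.
requests.post(
    f"{FLINK}/jars/{JAR_ID}/run",
    json={
        "programArgsList": ["--scenario", "scenario-v2.json"],  # placeholder
        "savepointPath": savepoint_path,
    },
)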

Arek