How to Export Schedulix jobs from one physical machine to another


Bharani Reddy

Jan 29, 2016, 8:00:13 AM
to schedulix
Hello,

Any pointers on how to migrate jobs from one physical machine to another?

Thanks,
Bharani

Ronald Jeninga

Jan 29, 2016, 8:27:33 AM
to schedulix
Hi Bharani,

In schedulix there's no way around doing it manually yourself.

If you want to set up different environments for, say, development, test and production, the most common solution in schedulix is to set up 3 folder subtrees.
Using folder parameters and conditions in the Environment Resource Requirements, you can build a system that executes jobs from the production subtree on production jobservers, while the same job definition copied to the test subtree will be executed by a test jobserver.
Deployment in such a situation simply means moving or copying the batch from one folder to another.

If you want to move a scheduling server from one machine to another, you do the following:
1. shut down schedulix
2. make a database backup
3. restore the db backup on the target machine
4. move/copy the schedulix tree from the origin to the target
5. adjust the configuration
6. start the system
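Assuming a PostgreSQL repository and an installation under /opt/schedulix (both assumptions, not stated in the thread), the six steps above might be sketched as the following dry run, which only prints the commands so the plan can be reviewed before executing anything:

```shell
#!/bin/sh
# Dry-run sketch of the migration steps. 'run' only echoes each command;
# drop the helper once the plan looks right. Host names, paths, the
# database name and the PostgreSQL tooling are placeholders/assumptions.
run() { echo "+ $*"; }

OLD=oldhost.example.com
NEW=newhost.example.com

run ssh "$OLD" '$BICSUITEHOME/bin/server-stop'              # 1. shut down schedulix
run ssh "$OLD" 'pg_dump schedulixdb > /tmp/repo.sql'        # 2. database backup
run scp "$OLD:/tmp/repo.sql" "$NEW:/tmp/repo.sql"
run ssh "$NEW" 'psql schedulixdb < /tmp/repo.sql'           # 3. restore on the target
run rsync -a "$OLD:/opt/schedulix/" "$NEW:/opt/schedulix/"  # 4. move/copy the tree
run vi /opt/schedulix/etc/server.conf                       # 5. adjust the configuration
run ssh "$NEW" '$BICSUITEHOME/bin/server-start'             # 6. start the system
```

The dry-run helper is just a review aid; the actual backup/restore mechanics depend entirely on which DBMS backs your repository.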

If the two machines happen to have different architectures (32 bit vs. 64 bit Intel, Intel vs. PPC, ...), you'll have to make a "logical" backup of your database.
The schedulix part is easy: recompile and configure (basically install it; you know by now how that works).
If you want to port from one DBMS brand to another, you'll need something to port the database. In some cases a "logical" backup is a good starting point.

If you still insist on this feature, have a look at BICsuite PROFESSIONAL, which is closed source and costs money.
(we have to live in order to be able to maintain schedulix).


Regards,

Ronald

Bharani Reddy

Jan 29, 2016, 8:51:21 AM
to schedulix
Thank you Ronald. 
Where do job and batch definitions get stored in schedulix? Is it a flat file that I can export, or is it stored in the DB?

Thanks,
Bharani

Ronald Jeninga

Jan 29, 2016, 9:40:14 AM
to schedulix
Hi Bharani,

welcome :-)

Everything, except for the few configuration files you've edited already, is stored within the database you created during the installation.
Job definitions are stored in the scheduling_entity table, the dependencies in the dependency_definition table, hierarchies in the hierarchy_definition, and so on.
There are about 70 tables involved. The definitions of the tables can be found in the sql/*_gen directories. (There's a directory for each supported DBMS).

Although not perfect, the column names are quite understandable after a bit of studying and thinking. Most object types have an abbreviation, like
SME = SubMittedEntity
SE = Scheduling Entity
FP = FootPrint
NE = Named Environment
R = Resource
NR = Named Resource
ESD = Exit State Definition
ESP = Exit State Profile
and so on.

95% of the data model is in BCNF (Boyce-Codd Normal Form). But we have a few violations to gain performance (main reason), or to reduce the number of required tables (Object Pooling; e.g. folder/scope/job/resource parameters all look the same, except for the object they belong to).

In most cases, fields in the output structures (try some show and/or list commands in sdmsh) have names that resemble the column names. The fields in the GUI can also be matched with the data model.

Still, the data model is quite complex and it won't be easy to generate definition statements from the database.
Especially the huge number of relationships makes it somewhat difficult to comprehend.

Important Note: DON'T USE SQL TO CHANGE THE DATABASE, USE schedulix METHODS INSTEAD!
Though not impossible, it's dangerous. We don't do it ourselves, unless there's no way to avoid it.

Regards,

Ronald

Dieter Stubler

Jan 29, 2016, 12:03:42 PM
to schedulix
Hello Bharani,

as Ronald pointed out, deployment via export/import from one scheduling server to another is a feature of the PROFESSIONAL Edition of BICsuite.
Before spending a lot of time and money implementing this on your own, you should get in contact with us.

There may be a solution for your task which is not very handy, but it works for a small number of objects to move.

If you stop your zope application server and run it in the foreground using the command 'runzope' found in the bin directory of your bicsuite zope instance, redirecting the output into a file, that file will contain the commands to create or alter your objects.

You just have to 'save' all objects you want to move in the web GUI and then pick the statements from the logfile.
Afterwards you can execute those statements against another schedulix instance via sdmsh.
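As a sketch of the filtering step, assuming the redirected runzope output mixes the generated statements with ordinary log lines (the sample log content below is invented for illustration; inspect your real logfile first, and note that statements spanning multiple lines need more than a simple grep):

```shell
#!/bin/sh
# Pick CREATE/ALTER statements out of a zope logfile; the extracted
# statements could then be piped into sdmsh on the target instance.
extract_stmts() {
    grep -E '^(CREATE|ALTER) ' "$1"
}

# Invented sample log content, for illustration only:
cat > /tmp/zope.log <<'EOF'
2016-01-29 12:00:01 INFO some zope noise
ALTER JOB DEFINITION SYSTEM.EXAMPLES.JOB1 WITH PRIORITY = 50;
2016-01-29 12:00:02 INFO more noise
CREATE FOLDER SYSTEM.EXAMPLES.NEWFOLDER;
EOF

extract_stmts /tmp/zope.log
# Illustrative real use: extract_stmts zope.log | sdmsh
```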

Regards
Dieter

Bharani Reddy

Feb 2, 2016, 10:06:30 AM
to schedulix
Thank you Dieter. That is very helpful, I will give this a try.

Samadhan Gudekar

May 6, 2016, 6:49:08 AM
to schedulix
Hi Ronald,

I tried the above steps: I took a backup of one DB and restored it on another machine. After restarting, I am able to see the jobs from the old machine on my new machine. But now here is the interesting part, where I need clarification. My old machine was pointing to one jobserver which executed its jobs. How can I now communicate with the jobserver my old machine was using? I.e., when I try to execute/run a copied job on my new machine, it allows me to submit it, but how will the jobserver behave when responding back?

Can multiple schedulix servers point to a single jobserver? Say schedulix server S1 requests a specific task from jobserver J1, and another schedulix server S2 requests another task from jobserver J1. Is jobserver J1 able to respond with the response specific to schedulix servers S1 and S2?

In step 5 above, what adjustments do we need to make to the configuration?

Thanks.

Dieter Stubler

May 6, 2016, 7:04:20 AM
to schedulix
Hi Samadhan,

You have to think of it the other way around.
The scheduling server does not point to jobservers.
Jobservers connect to the scheduling server to get the next job to execute.
A jobserver connects to one scheduling server only.
So you either have to give the new machine the IP address of the old one, or you have to change the config files of your jobservers to tell them the IP address of your new scheduling server.
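For illustration: the jobserver's config file carries the address of its scheduling server, so this adjustment boils down to entries like the following. The property names RepoHost/RepoPort and the default port 2506 are to the best of my knowledge standard for schedulix jobserver configs, but check your existing config files for the exact keys; the host name is a placeholder:

```
# excerpt from a jobserver .conf file
RepoHost= newhost.example.com
RepoPort= 2506
```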

Regards
Dieter

Samadhan Gudekar

May 9, 2016, 1:05:33 AM
to schedulix
Thanks Dieter for clarifying this, I got it now. One more question related to migration from one environment to another, as I am very new to schedulix. I have looked at the database tables and noticed the table SCOPE_CONFIG. My question: after taking a backup of the current (old) DB I will get a dump of it (say, a list of insert statements). If I change the configuration values in the insert statements for the SCOPE_CONFIG table to match the new environment and restore this dump to the new environment, will it work by modifying that single table, or are there other tables involved?

Ronald Jeninga

May 10, 2016, 2:22:52 AM
to schedulix
Hi,

instead of patching the database directly, it's a better idea to use schedulix commands (in this case "alter scope").
Of course, it's your database and you can do whatever you like, but don't ask us for help if you've spoiled it. (Following that logic, maybe it would be even "better" to edit the database files directly and circumvent the SQL interface?)

In the past ten or more years there hasn't been a single situation which required direct writes to the database (disregarding schema upgrades, of course).

Bottom line: NEVER EVER.
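A sketch of what the command-based approach could look like in sdmsh; the scope path and the config key are placeholders, so check the command reference for the exact ALTER SCOPE syntax and your actual scope names before using anything like this:

```
ALTER SCOPE GLOBAL.'EXAMPLE_HOST'.'EXAMPLE_SERVER'
WITH CONFIG = ( 'REPOHOST' = 'newhost.example.com' );
```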

Regards

Ronald

Samadhan Gudekar

May 10, 2016, 6:12:31 AM
to schedulix
Thanks Ronald for your suggestion, I will certainly avoid updating the database directly. I agree with you: why would someone think of updating the DB when there is a ready UI? The reason I asked is that we are trying to configure more than 100 jobs (and this number will increase in our case). Say we do development in a local environment; later on we may need to move the code base to a test environment, then to pre-prod, then to prod, etc. Backup and restore will give us a new copy of the environment with the old settings, but manually re-configuring jobs in the UI may consume some time. So I was trying to find a way to automate this with a script that does it in one shot, and just wanted to check with you whether that is doable. It sounds like it is not a good choice, so we will prefer the command option; I was not aware of it, and I think it will help. Thanks again.

Dieter Stubler

May 10, 2016, 6:34:17 AM
to schedulix
Have a look at the content of the repository views sci_...
To make mass changes without a lot of clicking in the UI, a common way of doing this is to use SQL queries or a script to generate the schedulix commands necessary for the changes, and to run them with the sdmsh command line utility against the server.
This is a perfectly supported way of doing automated changes.
As Ron points out: never change any data of the repository directly; let the schedulix server do that.
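A minimal shell sketch of that pattern, turning "folder job" pairs (as they might be exported from the sci_... views) into sdmsh statements; the ALTER JOB DEFINITION syntax and the priority change are illustrative assumptions, not a tested recipe:

```shell
#!/bin/sh
# Generate one sdmsh ALTER statement per "folder job" input line.
# In real use the input would come from a SQL query against the
# sci_... repository views, and the output would be piped into sdmsh.
gen_alter_cmds() {
    while read -r folder job; do
        [ -n "$job" ] || continue
        printf 'ALTER JOB DEFINITION %s.%s WITH PRIORITY = 50;\n' "$folder" "$job"
    done
}

gen_alter_cmds <<'EOF'
SYSTEM.EXAMPLES JOB1
SYSTEM.EXAMPLES JOB2
EOF
# Illustrative real use: psql ... | gen_alter_cmds | sdmsh
```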

Regards
Dieter

Samadhan Gudekar

May 20, 2016, 1:25:19 AM
to schedulix
Sure. Thanks Dieter, this will help.

Vishal Kadam

Jun 17, 2016, 6:17:54 AM
to schedulix
Hi,

I wanted to create a template DB which contains only the batches, job definitions and their relationships, so that I can share them with the team. I tried to delete the sample jobs and folders which I had created for testing using the GUI, but I still see those entries in the database.

I queried the FOLDER and SCHEDULING_ENTITY tables in the DB and still see the folder names and job names which I deleted.

T.I.A
Regards,

Vishal Kadam

Dieter Stubler

Jun 17, 2016, 7:17:07 AM
to sche...@googlegroups.com
Hi,

that's because all definitions are versioned.
There might exist run time information for submitted/executed jobs referring to former versions of the scheduling entities you changed or deleted afterwards.
If you delete folder, batch/job (scheduling_entity), ... objects, they will be closed but not deleted.
Any references of submitted batches/jobs to definitions are always resolved to the versions which were current at the time the master submit was executed.
This also makes it possible to finish an already submitted batch using the definition data valid at submit time, even if you edit/delete any object of that batch after submit.
It also implies that after a job is submitted, changes to the job definition (for example a changed run program) will not affect the already submitted job.
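This versioning can be observed read-only in the repository (which is safe, unlike writing to it). VALID_TO appears elsewhere in this thread; the other column names below are assumptions about the schema, so verify them against the table definitions in the sql/*_gen directories first:

```sql
-- The current version of a definition carries VALID_TO = 9223372036854775807
-- (the maximum 64 bit integer); closed/older versions have a smaller VALID_TO.
SELECT NAME, VALID_FROM, VALID_TO
  FROM SCHEDULING_ENTITY
 WHERE NAME = 'JOB1'
 ORDER BY VALID_FROM;
```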

Hope that explains the situation for you

Regards
Dieter

Vishal Kadam

Jun 29, 2016, 3:13:25 AM
to schedulix
Hi,

Thanks Dieter, that information was very helpful.

What would be a good way to create a template DB which contains only the batches, job definitions and their relationships, so that I can share them with the team?

Regards,

Vishal Kadam

Dieter Stubler

Jun 29, 2016, 5:36:47 AM
to schedulix
Hi,

If you want to clean your database of any run time information and of definition versions that are no longer current, you can shut down your schedulix server and run the following statements on your repository database.
Only do that if you really know what you are doing!

-- 9223372036854775807 is the maximum 64 bit integer; it marks the current
-- version of a definition, so these statements delete all non-current versions.
DELETE FROM DEPENDENCY_DEFINITION WHERE VALID_TO < 9223372036854775807;
DELETE FROM DEPENDENCY_STATE WHERE VALID_TO < 9223372036854775807;
DELETE FROM ENVIRONMENT WHERE VALID_TO < 9223372036854775807;
DELETE FROM EXIT_STATE_DEFINITION WHERE VALID_TO < 9223372036854775807;
DELETE FROM EXIT_STATE_MAPPING_PROFILE WHERE VALID_TO < 9223372036854775807;
DELETE FROM EXIT_STATE_MAPPING WHERE VALID_TO < 9223372036854775807;
DELETE FROM EXIT_STATE_PROFILE WHERE VALID_TO < 9223372036854775807;
DELETE FROM EXIT_STATE WHERE VALID_TO < 9223372036854775807;
DELETE FROM EXIT_STATE_TRANS_PROFILE WHERE VALID_TO < 9223372036854775807;
DELETE FROM FOLDER WHERE VALID_TO < 9223372036854775807;
DELETE FROM IGNORED_DEPENDENCY WHERE VALID_TO < 9223372036854775807;
DELETE FROM NAMED_ENVIRONMENT WHERE VALID_TO < 9223372036854775807;
DELETE FROM OBJECT_COMMENT WHERE VALID_TO < 9223372036854775807;
DELETE FROM PARAMETER_DEFINITION WHERE VALID_TO < 9223372036854775807;
DELETE FROM RESOURCE_REQ_STATES WHERE VALID_TO < 9223372036854775807;
DELETE FROM RESOURCE_REQUIREMENT WHERE VALID_TO < 9223372036854775807;
DELETE FROM RESOURCE_STATE_MAPPING WHERE VALID_TO < 9223372036854775807;
DELETE FROM RESOURCE_STATE_MAP_PROF WHERE VALID_TO < 9223372036854775807;
DELETE FROM SCHEDULING_ENTITY WHERE VALID_TO < 9223372036854775807;
DELETE FROM SCHEDULING_HIERARCHY WHERE VALID_TO < 9223372036854775807;
DELETE FROM TEMPLATE_VARIABLE WHERE VALID_TO < 9223372036854775807;
DELETE FROM TRIGGER_DEFINITION WHERE VALID_TO < 9223372036854775807;
DELETE FROM TRIGGER_STATE WHERE VALID_TO < 9223372036854775807;
DELETE FROM ENTITY_VARIABLE;
DELETE FROM KILL_JOB;
DELETE FROM RESOURCE_ALLOCATION;
DELETE FROM RUNNABLE_QUEUE;
DELETE FROM SUBMITTED_ENTITY;
DELETE FROM TRIGGER_QUEUE;

After that you have a clean DB you can use as a template for your team.

For evaluation purposes this might be a temporary solution, but for production systems it is discouraged.

Updating a production system this way would only be possible if no batches or jobs are active.
You would also lose all past run time information of that system.
Changes made in the production system to email notification triggers, priorities, time scheduling, ... would also be lost.
Additionally, the target system's jobserver and resource setup would have to match the setup of your template DB.

For life cycle management and for deploying batches, folders and other definitions from/into development -> test -> production systems, you should take an upgrade to BICsuite PROFESSIONAL into account.

Regards
Dieter

   