Re: [IQUG] IQ data migration


Leonid Gvirtz

unread,
Aug 11, 2014, 5:07:54 AM8/11/14
to Asif Arif, Tomas Dalebjork, Jason L. Froebe, IQ Users Group, iq...@googlegroups.com, iq...@dssolutions.com
Hi Asif

In my experience, binary extract output is usually bigger than ASCII output, sometimes much bigger, because the binary extract stores the data in a fixed-width internal format behind the scenes. The size difference between a binary extract and an ASCII one depends on the data types of your character columns and on the average size of their actual content. The size of the extract files can affect the performance of both extract and load quite significantly; I remember cases where I had to switch from binary extracts to ASCII ones for exactly this reason.
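For what it's worth, the comparison can be made with IQ's standard extraction options (Temp_Extract_Name1 / Temp_Extract_Binary; the file paths here are illustrative):

```sql
-- Extract the same table twice, once ASCII and once binary, and compare file sizes.
SET TEMPORARY OPTION Temp_Extract_Binary = 'OFF';            -- ASCII (the default)
SET TEMPORARY OPTION Temp_Extract_Name1  = '/extracts/t_ascii.txt';
SELECT * FROM my_table;

SET TEMPORARY OPTION Temp_Extract_Binary = 'ON';             -- internal fixed-width binary
SET TEMPORARY OPTION Temp_Extract_Name1  = '/extracts/t_binary.bin';
SELECT * FROM my_table;

SET TEMPORARY OPTION Temp_Extract_Name1  = '';               -- turn extraction off again
```

Running this against a few representative tables should show quickly which format is cheaper for your data.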

Hope it helps
Leonid Gvirtz


From: Asif Arif <asif...@sybaseproducts.com>
To: 'Tomas Dalebjork' <tomas.d...@repostor.com>; 'Jason L. Froebe' <jason....@gmail.com>
Cc: 'IQ Users Group' <iq...@iqug.org>; iq...@googlegroups.com; iq...@dssolutions.com
Sent: Sunday, August 10, 2014 7:49 PM
Subject: Re: [IQUG] IQ data migration

Jason – thanks!
 
Binary export/import does exactly what I need with regard to preserving NULLs, string 'NULL' values and blanks.
 
We are not changing endian; it's Solaris IQ to Solaris IQ, so I am not concerned with endianness.
 
I will change my migration scripts and switch to binary extraction and load. Other than endianness, will this impact anything else? Why is ASCII the preferred choice over binary? Is there a performance penalty for using binary?
 
Binary seems to be the simplest option and the way to go by the looks of it.
 
Asif.
 


From: Tomas Dalebjork [mailto:tomas.d...@repostor.com]
Sent: Sunday, August 10, 2014 8:42 PM
To: Jason L. Froebe; Asif Arif
Cc: IQ Users Group; iq...@googlegroups.com; iq...@dssolutions.com
Subject: Re: [IQUG] IQ data migration
 
What Jason says is true.
Also check whether you are using BLOBs; it may not be possible to convert these between different endians at all.

Regards Tomas
On 10 augusti 2014 18:38:57 CEST, "Jason L. Froebe" <jason....@gmail.com> wrote:
The binary export/import would be just that: a literal export/import of the binary data for each column. The only translation performed is if you explicitly request a byte swap (which switches the endianness).
Switching character sets is always problematic regardless of software. I try to use Unicode when possible.
jason
 
On Sun, Aug 10, 2014 at 12:31 PM, Asif Arif <asif...@sybaseproducts.com> wrote:
We need to migrate data between IQ servers as we are changing the database collation to CP1256ARA.
 
Backup/restore will not work.
 
Thanks,
Asif.
 
From: Tomas Dalebjork [mailto:tomas.d...@repostor.com]
Sent: Sunday, August 10, 2014 8:30 PM
To: Jason L. Froebe; Asif Arif
Cc: IQ Users Group; iq...@googlegroups.com; iq...@dssolutions.com

Subject: Re: [IQUG] IQ data migration
 
Hi


If you want to speed up backups and recoveries, then this solution might be useful: Data Protector for Sybase IQ using IBM Tivoli Storage Manager
http://repostor.com

Regards Tomas
On 10 augusti 2014 18:26:28 CEST, "Jason L. Froebe" <jason....@gmail.com> wrote:
If it's the same version and platform, a backup/restore would be easiest. Otherwise, use the extract facility in IQ (binary mode), swapping the byte order if going to a different endian.
jason
 
On Sun, Aug 10, 2014 at 10:08 AM, Asif Arif <asif...@sybaseproducts.com> wrote:
Hi Guys,
 
I am facing the following challenge.
 
We must migrate data from IQ to IQ, and the data must match 100%, including preservation of all blanks, character NULLs, string 'NULL' values, 0s and numeric NULLs.
 
Consider the following as an example:
 
Create table test (a varchar(10) null, b varchar(10) null, c varchar(10) null, d int null, e int null);
Insert test values ('', 'NULL', NULL, 0, NULL);
Insert test values ('', 'NULL', NULL, 0, NULL);
Insert test values ('', 'NULL', NULL, 0, NULL);
 
How can I migrate the above data using a standardized method of extraction & load across all my tables and columns in the database?
 
I have experimented with temp_extract_null_as_empty and the column attribute NULL(BLANKS) in my LOAD TABLE statement, but this dumps both real NULLs and blanks ('') as empty in the dump file, so there is no way to tell them apart. Hence, when loading with NULL(BLANKS), the first column ('') is loaded as NULL on the target, which is incorrect.
 
The only way I have found is to customize the column attributes in each LOAD TABLE statement, but that would take forever: we have thousands of tables to migrate and cannot afford to customize every LOAD TABLE statement.
 
Please share your thoughts.
 
Best Regards,
Asif.

_______________________________________________
IQUG mailing list
IQ...@iqug.org
http://iqug.org/mailman/listinfo/iqug



Tomas Dalebjork
CIO : REPOSTOR AB
Forsta Langgatan 19
44321 Goteborg
Sweden






Leonid Gvirtz

unread,
Aug 11, 2014, 7:01:02 AM8/11/14
to Sengul Tasdemir, iq...@iqug.org, iq...@dssolutions.com, iq...@googlegroups.com
Hi Sengul

You may try using views, at least as a temporary solution; see the example below. In a view, you can define any order for the columns of the base table. So, after dropping the column and adding the new one with the different data type, you can rename your base table and create a view with the original column order.

1> create table test_tab_11 (fid int, fname varchar(30))
2> go
1> insert into test_tab_11 values (1, 'test_01')
2> go
(1 row affected)
1> create view test_view_11 as select fname, fid from test_tab_11
2> go
1> insert into test_view_11 values ('test_02', 2)
2> go
(1 row affected)
1> select * from test_tab_11
2> go
 fid         fname
 ----------- ------------------------------
           1 test_01
           2 test_02

(2 rows affected)

Hope it helps
Leonid Gvirtz


From: Sengul Tasdemir <sen...@sybase.ae>
To: iq...@iqug.org; iq...@dssolutions.com; iq...@googlegroups.com
Sent: Monday, August 11, 2014 12:33 PM
Subject: [IQUG] Application migration from Oracle to IQ

Hi,
 
We are facing a challenge in an application migration project:
The application has embedded INSERT statements that do not list column names; they just follow the order of the columns in the table.
As part of the development/implementation process, the application vendor sometimes needs to change the data type of a column. Since IQ doesn't support changing a column's data type in place, we need to add a new column with the new data type, move the data and rename the column. But this method changes the position of the column and makes the existing INSERT statements invalid.
 
Is there any way to change the data type of column in place or to change the order of the columns?
 
Best Regards,
Sengul
 
 
 

Ron Watkins

unread,
Aug 11, 2014, 10:11:49 AM8/11/14
to Soundy, Richard, Sengul Tasdemir, iq...@iqug.org, iq...@dssolutions.com, iq...@googlegroups.com

Two ways to preserve order:

1)      Create new columns for every column following the one that changed, move them all, then drop the old columns (annoying, but it works).

2)      Copy the entire table with a SELECT INTO, or pre-create replacement DDL and migrate the whole table. This is easier, but may require more time and space.
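Option 2 might be sketched like this (table and column names are illustrative; check the rename syntax against your IQ version):

```sql
-- Pre-create the replacement table with the new data type, in the original column order.
CREATE TABLE my_table_new (
    fid   INT,
    fname VARCHAR(60)      -- the column whose data type changed
);
INSERT INTO my_table_new SELECT fid, fname FROM my_table;
DROP TABLE my_table;
ALTER TABLE my_table_new RENAME my_table;
```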

 

Ron Watkins

DSSI

3350 N. Arizona Ave, Suite 2

Chandler, AZ 85225

Office: 480.558.2203

Cell: 602.743.5272

 

From: iqug-b...@iqug.org [mailto:iqug-b...@iqug.org] On Behalf Of Soundy, Richard
Sent: Monday, August 11, 2014 2:57 AM
To: Sengul Tasdemir; iq...@iqug.org; iq...@dssolutions.com; iq...@googlegroups.com
Subject: Re: [IQUG] Application migration from Oracle to IQ

 

I don't think so. The column order for the table is defined by the CREATE TABLE statement; the addition of a new column will just add it at the end.

 

There is one thing that might work (and I am sorry, but I do not have an IQ server to test this on): add a new column (ALTER TABLE … ADD COLUMN), copy the data from the old column to the new one (UPDATE), delete the old column (ALTER TABLE … DROP COLUMN), then immediately create a new column with the new data type (ALTER TABLE … ADD COLUMN). At this point, see whether the column number (not name) of the new column is the same as the original column's.

 

I have no idea if this will work; I do not know whether IQ (or rather SA) can and does re-use column numbers within a table, but if it does, it should solve the problem.
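As an untested sketch of the sequence described above (table and column names illustrative; whether IQ re-uses the dropped column's number is exactly the open question):

```sql
ALTER TABLE t ADD c1_tmp VARCHAR(20);   -- scratch copy of the old column's data
UPDATE t SET c1_tmp = c1;
ALTER TABLE t DROP c1;
ALTER TABLE t ADD c1 VARCHAR(20);       -- immediately re-create c1 with the new data type
UPDATE t SET c1 = c1_tmp;
ALTER TABLE t DROP c1_tmp;
-- then check whether c1 kept its original column number
```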

 

If anyone tries this, please let me know if it works.

 

 

Richard

 

Richard Soundy

 

EMEA Director Enterprise Systems Group

SAP Database and Technology Group

SAP UK | Sybase Court | Crown Lane | Maidenhead | SL6 8QZ | United Kingdom |

T +44 1628 597414 | F +44 1453 889122 |  M +44 7977 257414 |

http://www.sap.com

Ron Watkins

unread,
Aug 11, 2014, 10:13:18 AM8/11/14
to Jason L. Froebe, Leonid Gvirtz, iq...@googlegroups.com, iq...@dssolutions.com, IQ Users Group

Set the extract filename to a named pipe (FIFO) and stream the data through gzip with your preferred compression level.

This is part of my normal binary extract script.
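The plumbing looks roughly like this (paths are illustrative, and a printf stands in for the IQ extract, which would have Temp_Extract_Name1 pointed at the FIFO):

```shell
set -e
DIR=$(mktemp -d)
PIPE="$DIR/extract.pipe"
OUT="$DIR/my_table.bin.gz"

mkfifo "$PIPE"                       # the "extract file" is really a FIFO
gzip -6 < "$PIPE" > "$OUT" &         # compressor drains the FIFO in the background

# Stand-in for the IQ extract writing into the pipe:
printf 'row1\nrow2\n' > "$PIPE"

wait                                 # let gzip finish flushing
RESULT=$(gunzip -c "$OUT")           # decompress to verify the round trip
rm -r "$DIR"
```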

 

Ron Watkins

DSSI

3350 N. Arizona Ave, Suite 2

Chandler, AZ 85225

Office: 480.558.2203

Cell: 602.743.5272

 

From: iqug-b...@iqug.org [mailto:iqug-b...@iqug.org] On Behalf Of Jason L. Froebe
Sent: Monday, August 11, 2014 6:14 AM
To: Leonid Gvirtz
Cc: iq...@googlegroups.com; iq...@dssolutions.com; IQ Users Group
Subject: Re: [IQUG] IQ data migration

 

Agreed, the file sizes are larger, as would be expected. It is the only way to ensure the fields come across identical, though. I wish SAP would support a compression layer here.

Jason

Ron Watkins

unread,
Aug 11, 2014, 11:42:20 AM8/11/14
to Jason L. Froebe, IQ Users Group, iq...@dssolutions.com, iq...@googlegroups.com, Leonid Gvirtz

I have found through experience that many production systems have filesystems that can support multiple dumps in parallel.

I've done up to 10 extracts to named pipes in parallel, each going through gzip and writing to the filesystem, before the filesystem reached 100% utilization. Sometimes it only takes 4 or 5, but often more.

For a single large table, you can bracket the table into batches of row-ids, run each batch into a separate named pipe with gzip, and get the whole thing extracted much faster than with a single thread.
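The bracketing might look like this (the rowid() predicate and the batch boundaries are assumptions for illustration; each SELECT runs in its own session with Temp_Extract_Name1 pointing at its own FIFO/gzip pair):

```sql
-- Session 1
SELECT * FROM big_table WHERE rowid(big_table) BETWEEN        1 AND  50000000;
-- Session 2
SELECT * FROM big_table WHERE rowid(big_table) BETWEEN 50000001 AND 100000000;
-- ... one batch per named pipe
```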

 

Ron Watkins

DSSI

3350 N. Arizona Ave, Suite 2

Chandler, AZ 85225

Office: 480.558.2203

Cell: 602.743.5272

 

From: Jason L. Froebe [mailto:jason....@gmail.com]
Sent: Monday, August 11, 2014 8:20 AM
To: Ron Watkins
Cc: IQ Users Group; iq...@dssolutions.com; iq...@googlegroups.com; Leonid Gvirtz
Subject: RE: [IQUG] IQ data migration

 

Definitely second using a named pipe.  :)

An alternative to gzip would be parallel bzip2 (pbzip2), which will use multiple CPUs for the compression. Decompression is still single-threaded, but it uses far fewer CPU resources.

Jason

cjd

unread,
Aug 11, 2014, 12:08:21 PM8/11/14
to iqgoogle

Ron
It won't reuse the colid.
His best bet for distinguishing the string 'NULL' from a NULL attribute is binary extract and load. For ASCII to work, you would need to be able to cast within isnull() to another data type at extraction time, or have a new temp_extract option for the null representation.
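A binary round trip for the example table from the start of the thread might be sketched as follows (file path illustrative; check the binary LOAD TABLE clauses against your IQ version). Because the binary format carries a per-value null indicator, '' (blank), the string 'NULL' and a real NULL all survive unchanged:

```sql
-- Source side: binary extract of table test
SET TEMPORARY OPTION Temp_Extract_Binary = 'ON';
SET TEMPORARY OPTION Temp_Extract_Name1  = '/extracts/test.bin';
SELECT a, b, c, d, e FROM test;
SET TEMPORARY OPTION Temp_Extract_Name1  = '';   -- close the extract file

-- Target side: reload in binary format
LOAD TABLE test ( a, b, c, d, e )
FROM '/extracts/test.bin'
FORMAT BINARY
QUOTES OFF ESCAPES OFF;
```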

Cheers,
cjd


Sengul Tasdemir

unread,
Aug 15, 2014, 9:47:29 AM8/15/14
to iq...@googlegroups.com, Soundy, Richard, iq...@iqug.org, iq...@dssolutions.com

Thanks Ron,

 

We have already thought about these options, but we didn't want to use them because it would be a really long process given the data size.

 

Regards,

Sengul


Sengul Tasdemir

unread,
Aug 15, 2014, 9:47:29 AM8/15/14
to iq...@iqug.org, iq...@dssolutions.com, iq...@googlegroups.com

Hi All,

 

We are working on a BI application migration from Oracle to SAP IQ. As part of the BI application, the vendor provides an interface for business users to change the data in some DW tables: for example, they can define a new level in the ledger and group the existing ones under it, or change data in tables that affect financial reports. Although only a limited number of business users will have authority for such changes, it will require concurrent access to SAP IQ tables. For that reason we are using SAP IQ 16.x and are planning to configure the RLV store for concurrent access. Our concern is that there will be around 300 RLV-enabled tables, which is almost one third of the entire DW model.

 

Is anybody using the RLV store at large scale?

What would be the recommended configuration or usage for 300 RLV-enabled tables?

 

Any best practices or recommendations based on experience would be highly appreciated.

 

Best Regards,

Sengul

 

 

From: iqug-b...@iqug.org [mailto:iqug-b...@iqug.org] On Behalf Of Sengul Tasdemir
Sent: 11 August 2014 13:33
To: iq...@iqug.org; iq...@dssolutions.com; iq...@googlegroups.com
Subject: [IQUG] Application migration from Oracle to IQ

 

Hi,

Mumy, Mark

unread,
Aug 19, 2014, 9:18:37 AM8/19/14
to Sengul Tasdemir, iq...@googlegroups.com, iq...@iqug.org, iq...@dssolutions.com

Sengul,

 

The SA engine simply looks for syntax errors. I don't believe it checks function names. Even though there may be functions in the code like DECODE, TO_STRING, TO_DATE, etc., the SA parser isn't checking them against any function list to see whether they are valid; they could quite easily be user-defined functions in C or SQL.

 

The net is that the syntax checking is done when the procedure/function is created AND at runtime.

 

Mark

 

Mark Mumy

Director, Enterprise Architecture, SAP HANA GCoE

M +1 347-820-2136 | E mark...@sap.com

My Blogs: http://scn.sap.com/people/markmumy/blog

 

https://sap.na.pgiconnect.com/I825063

Conference tel: 18663127353,,7090196396#



 

From: iqug-b...@iqug.org [mailto:iqug-b...@iqug.org] On Behalf Of Sengul Tasdemir
Sent: Tuesday, August 19, 2014 07:08
To: iq...@googlegroups.com; iq...@iqug.org; iq...@dssolutions.com
Subject: [IQUG] IQ - No error at compilation

 

Dear All,

 

We are converting Oracle procedures to IQ, and IQ doesn't give any error at all at compile time even though the procedures are full of Oracle functions.

Such behavior makes the migration process more challenging, because it is easy to miss some conversions across thousands of lines of PL/SQL code.

Is there any way to increase the sensitivity of the IQ compiler?

 

Best Regards,

Sengul

 

Mumy, Mark

unread,
Aug 19, 2014, 9:24:04 AM8/19/14
to Sengul Tasdemir, iq...@googlegroups.com, Soundy, Richard, iq...@iqug.org, iq...@dssolutions.com

Personally, I would not try to convert that.  Oracle, and most OLTP engines, have to put in extensive partitioning schemes like this in order to drive performance.

 

I would likely just use HASH partitioning on the COUNTRY_CODE column.
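In IQ 16 syntax, that might look like the following (columns abbreviated; check the hash-partitioning clause against the IQ 16 documentation):

```sql
CREATE TABLE MULTI_LGM (
    COUNTRY_CODE  INT,
    INST_CODE     INT,
    SCENARIO_CODE INT
    -- ... remaining columns
)
PARTITION BY HASH ( COUNTRY_CODE );
```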

 

Mark

 

Mark Mumy

Director, Enterprise Architecture, SAP HANA GCoE

M +1 347-820-2136 | E mark...@sap.com

My Blogs: http://scn.sap.com/people/markmumy/blog

 

https://sap.na.pgiconnect.com/I825063

Conference tel: 18663127353,,7090196396#



 

From: iqug-b...@iqug.org [mailto:iqug-b...@iqug.org] On Behalf Of Sengul Tasdemir
Sent: Tuesday, August 19, 2014 03:54
To: iq...@googlegroups.com; Soundy, Richard; iq...@iqug.org; iq...@dssolutions.com
Subject: [IQUG] IQ - composite-partitioning-scheme

 

Dear All,

 

Any help converting the following Oracle partitioning scheme to IQ would be highly appreciated:

 

create table MULTI_LGM
(
   COUNTRY_CODE         NUMBER,
   INST_CODE            NUMBER,
   SCENARIO_CODE        NUMBER,
   …….
)
partition by list (COUNTRY_CODE)
subpartition by list (INST_CODE)
   (partition PAR_EUROPE values (2)      (subpartition SUB_EU_INST values (1)),
    partition PAR_OTHER_EU values (3)    (subpartition SUB_OTHER_EU_INST values (1)),
    partition PAR_SE_ASIA values (19)    (subpartition SUB_SE_ASIA_INST values (1)),
    partition PAR_BAHRAIN values (48)    (subpartition SUB_BAHRAIN_INST values (1)),
    partition PAR_CANADA values (124)    (subpartition SUB_CANADA_INST values (1)),
    partition PAR_CHINA values (156)     (subpartition SUB_CHINA_INST values (1)),
    partition PAR_CYPRUS values (196)    (subpartition SUB_CYPRUS_INST values (1)),
    partition PAR_PALESTINE values (275) (subpartition SUB_PALESTINE_INST values (1)),
    partition PAR_IRAQ values (368)      (subpartition SUB_IRAQ_INST values (1)),
    partition PAR_JAPAN values (392)     (subpartition SUB_JAPAN_INST values (1)),
    partition PAR_JORDAN values (400)    (subpartition SUB_JORDAN_INST values (1)),
    partition PAR_KUWAIT values (414)    (subpartition SUB_KUWAIT_INST values (1)),
    partition PAR_LEBANON values (422)   (subpartition SUB_LEBANON_INST values (1)),
    partition PAR_OMAN values (512)      (subpartition SUB_OMAN_INST values (1)),
    partition PAR_QATAR values (634)     (subpartition SUB_QATAR_INST values (1)),
    partition PAR_KSA values (682)       (subpartition SUB_KSA_INST values (1)),
    partition PAR_SYRIA values (760)     (subpartition SUB_SYRIA_INST values (1)),
    partition PAR_UAE values (784)       (subpartition SUB_UAE_INST values (1)),
    partition PAR_EGYPT values (818)     (subpartition SUB_EGYPT_INST values (1)),
    partition PAR_USA values (840)       (subpartition SUB_USA_INST values (1)),
    partition PAR_OTHERS values (21)     (subpartition SUB_OTHERS_INST values (1)),
    partition PAR_OTHER_CTRY values (8)  (subpartition SUB_OTHER_CTRY_INST values (1)))

 

Best Regards,

Sengul
