explicitly state a binary swap (switches endianness). There are two ways to preserve column order.
1) Create new columns for every column following the one that changed, move the data into them, then drop the old columns (annoying, but it works).
2) Copy the entire table with a SELECT…INTO, or pre-create a replacement DDL and migrate the entire table. This is easier, but may require more time and space.
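A minimal sketch of option 2, assuming a hypothetical table `t` whose column `c2` needs a new data type (names and types are illustrative only):

```sql
-- Option 2 (sketch): rebuild the table via SELECT…INTO so the new type
-- lands in the original column position.
SELECT c1, CAST(c2 AS BIGINT) AS c2, c3
  INTO t_new
  FROM t;

DROP TABLE t;
-- Either rename the copy back, or pre-create the replacement DDL and load into it.
ALTER TABLE t_new RENAME t;
```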
From: iqug-b...@iqug.org [mailto:iqug-b...@iqug.org] On Behalf Of Soundy, Richard
Sent: Monday, August 11, 2014 2:57 AM
To: Sengul Tasdemir; iq...@iqug.org; iq...@dssolutions.com; iq...@googlegroups.com
Subject: Re: [IQUG] Application migration from Oracle to IQ
I don’t think so. The column order for the table is defined by the CREATE TABLE statement. Adding a new column will just append it at the end.
There is one thing that might work (and I am sorry, but I do not have an IQ server to test this on): add a new column (ALTER TABLE … ADD COLUMN), copy the data from the old column to the new column (UPDATE), delete the old column (ALTER TABLE … DROP COLUMN), then immediately create a new column with the new data type (ALTER TABLE … ADD COLUMN). At this point, see if the column number (not name) of the new column is the same as the original column’s.
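The sequence described above, sketched with hypothetical table and column names (untested, as noted; whether IQ reuses the column number is exactly the open question):

```sql
-- 1. Add a scratch column and preserve the data.
ALTER TABLE t ADD c2_tmp VARCHAR(20);
UPDATE t SET c2_tmp = c2;

-- 2. Drop the old column, then immediately recreate it with the new type.
ALTER TABLE t DROP c2;
ALTER TABLE t ADD c2 BIGINT;

-- 3. Copy the data back and clean up.
UPDATE t SET c2 = CAST(c2_tmp AS BIGINT);
ALTER TABLE t DROP c2_tmp;

-- Check whether the recreated c2 got the original column number
-- (catalog view/column names may differ slightly by version):
SELECT cname, colno FROM sys.syscolumns WHERE tname = 't';
```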
I have no idea if this will work; I do not know whether IQ (or actually SA) can and does “re-use” column numbers within a table, but if it does, then it should solve the problem.
If anyone tries this, please let me know if it works.
Richard
Richard Soundy
EMEA Director Enterprise Systems Group
SAP Database and Technology Group
SAP UK | Sybase Court | Crown Lane | Maidenhead | SL6 8QZ | United Kingdom |
T +44 1628 597414 | F +44 1453 889122 | M +44 7977 257414 |
Set the extract filename to a named pipe (FIFO) and stream the data through gzip with your preferred compression level.
This is part of my normal binary extract script.
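A minimal sketch of that pattern (paths and the table name are hypothetical; the shell setup is shown as comments, since the extract itself is driven by the IQ temp-extract options):

```sql
-- In the shell, before running the extract:
--   mkfifo /extracts/t1.pipe
--   gzip -6 < /extracts/t1.pipe > /extracts/t1.gz &

-- Point the IQ extract at the FIFO and run it; the SELECT's result
-- set is written to the pipe instead of the client.
SET TEMPORARY OPTION Temp_Extract_Name1 = '/extracts/t1.pipe';
SET TEMPORARY OPTION Temp_Extract_Binary = 'ON';
SELECT * FROM t1;
SET TEMPORARY OPTION Temp_Extract_Name1 = '';
```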
From: iqug-b...@iqug.org [mailto:iqug-b...@iqug.org] On Behalf Of Jason L. Froebe
Sent: Monday, August 11, 2014 6:14 AM
To: Leonid Gvirtz
Cc: iq...@googlegroups.com; iq...@dssolutions.com; IQ Users Group
Subject: Re: [IQUG] IQ data migration
Agreed, the file sizes are larger, as would be expected. It is the only way to ensure the fields are the same, though. I wish SAP would support a compression layer here.
Jason
I have found through experience that many production systems have filesystems which can support multiple dumps in parallel.
I’ve done up to 10 extracts to named pipes in parallel, each feeding a gzip and writing to the filesystem, before the filesystem reached 100% utilization.
Sometimes it only takes 4 or 5, but often more.
For a single large table, you can bracket the table into batches of row IDs, run each batch into a separate named pipe with gzip, and get the whole thing extracted much faster than a single thread.
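Bracketing a large table by row ID might look like this (hypothetical names and ranges; each batch runs on its own connection, each pointed at its own FIFO):

```sql
-- Connection 1: first batch of rows to its own named pipe.
SET TEMPORARY OPTION Temp_Extract_Name1 = '/extracts/t1_batch1.pipe';
SET TEMPORARY OPTION Temp_Extract_Binary = 'ON';
SELECT * FROM t1 WHERE rowid(t1) BETWEEN 1 AND 50000000;

-- Connection 2: next batch, different pipe, running concurrently.
SET TEMPORARY OPTION Temp_Extract_Name1 = '/extracts/t1_batch2.pipe';
SET TEMPORARY OPTION Temp_Extract_Binary = 'ON';
SELECT * FROM t1 WHERE rowid(t1) BETWEEN 50000001 AND 100000000;
```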
From: Jason L. Froebe [mailto:jason....@gmail.com]
Sent: Monday, August 11, 2014 8:20 AM
To: Ron Watkins
Cc: IQ Users Group; iq...@dssolutions.com; iq...@googlegroups.com; Leonid Gvirtz
Subject: RE: [IQUG] IQ data migration
Definitely second using a named pipe. :)
An alternative to gzip would be parallel bzip2 (pbzip2). It will utilize multiple CPUs for compression. Decompression is still single-threaded, but it uses far fewer CPU resources.
Jason
Ron
It won't reuse the colid.
His best bet to distinguish the string 'NULL' from a NULL attribute is to extract and load binary! For ASCII to work, you would need to be able to cast within ISNULL to some other datatype at extraction, or a new temp extract option for NULL representation would have to be available.
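A sketch of the binary extract/load being suggested (table and column names are hypothetical; the options are the standard IQ temp-extract options, and the binary swap option mentioned earlier in the thread is shown for completeness):

```sql
-- Binary extract carries a true NULL, where an ASCII extract writes a
-- textual representation that can collide with the string 'NULL'.
SET TEMPORARY OPTION Temp_Extract_Name1 = '/extracts/t1.bin';
SET TEMPORARY OPTION Temp_Extract_Binary = 'ON';
SET TEMPORARY OPTION Temp_Extract_Swap = 'OFF';  -- 'ON' switches endianness
SELECT c1, c2 FROM t1;
SET TEMPORARY OPTION Temp_Extract_Name1 = '';

-- Reload, letting the null byte carry true NULLs:
LOAD TABLE t1 ( c1 BINARY WITH NULL BYTE, c2 BINARY WITH NULL BYTE )
FROM '/extracts/t1.bin'
QUOTES OFF ESCAPES OFF
FORMAT BINARY;
```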
Cheers,
cjd
Thanks Ron,
We have already thought about these options, but we didn’t want to use them because it would be a really long process given the data size.
Regards,
Sengul
Hi All,
We are working on a BI application migration from Oracle to SAP IQ. As part of the BI application, the vendor provides an interface for Business Users to change the data in some DW tables: for example, they can define a new level in the ledger and group the existing ones under it, or change data in some tables in a way that will affect financial reports. Although a limited number of Business Users will have authority for such changes, it will require concurrent access to SAP IQ tables. For that reason we are using SAP IQ 16.x and we are planning to configure the RLV store for concurrent access. Our concern is that there will be around 300 RLV-enabled tables, which is almost 1/3 of the entire DW model.
Is there anybody using the RLV store at a large scale?
What would be the recommended configuration or usage for 300 RLV-enabled tables?
Any best practice or recommendation based on experience will be highly appreciated.
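For reference, the per-table mechanics in question look roughly like this in IQ 16 (a sketch with a hypothetical table name; the server must also have an RLV dbspace configured, and with ~300 tables the ALTERs would normally be generated from the catalog):

```sql
-- Enable the row-level versioned store for one table:
ALTER TABLE ledger_level ENABLE RLV STORE;

-- Per-connection, use row-level snapshot versioning against such tables
-- (option name as documented for IQ 16):
SET TEMPORARY OPTION snapshot_versioning = 'row-level';
```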
Best Regards,
Sengul
From: iqug-b...@iqug.org [mailto:iqug-b...@iqug.org] On Behalf Of Sengul Tasdemir
Sent: 11 August 2014 13:33
To: iq...@iqug.org; iq...@dssolutions.com; iq...@googlegroups.com
Subject: [IQUG] Application migration from Oracle to IQ
Hi,
Sengul,
The SA engine simply looks for syntax errors. I don’t believe it checks function names. Even though there may be functions in the code like DECODE, TO_STRING, TO_DATE, etc., the SA parser isn’t checking any function list to see if they are valid. They could quite easily be user-defined functions in C or SQL.
The net is that the syntax checking is done when the procedure/function is created AND at runtime.
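A quick illustration of the behavior described above (hypothetical procedure name; DECODE is an Oracle function that IQ/SA will not resolve):

```sql
-- This compiles: the parser only checks syntax, not function names.
CREATE PROCEDURE p_demo()
BEGIN
    SELECT DECODE(1, 1, 'one', 'other');
END;

-- The failure surfaces only here, at runtime, when DECODE
-- cannot be resolved:
CALL p_demo();
```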
Mark
Mark Mumy
Director, Enterprise Architecture, SAP HANA GCoE
M +1 347-820-2136 | E mark...@sap.com
My Blogs: http://scn.sap.com/people/markmumy/blog
https://sap.na.pgiconnect.com/I825063
Conference tel: 18663127353,,7090196396#

From: iqug-b...@iqug.org [mailto:iqug-b...@iqug.org]
On Behalf Of Sengul Tasdemir
Sent: Tuesday, August 19, 2014 07:08
To: iq...@googlegroups.com; iq...@iqug.org; iq...@dssolutions.com
Subject: [IQUG] IQ - No error at compilation
Dear All,
We are converting Oracle procedures to IQ, and IQ doesn’t give any error at all at compilation time even though a procedure is full of Oracle functions.
Such behavior makes the migration process more challenging, because it is easy to miss some conversions in thousands of lines of PL/SQL code.
Is there any way to increase the sensitivity of the IQ compiler?
Best Regards,
Sengul
Personally, I would not try to convert that. Oracle, and most OLTP engines, have to put in extensive partitioning schemes like this in order to drive performance.
I would likely just use HASH partitioning on the COUNTRY_CODE column.
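That suggestion, sketched in IQ 16 syntax against the table in the quoted question (column list abbreviated as in the original, and NUMERIC(10) is an assumed mapping for Oracle's unscaled NUMBER):

```sql
-- One hash partition scheme on COUNTRY_CODE replaces the Oracle
-- list/list subpartitioning.
CREATE TABLE MULTI_LGM (
    COUNTRY_CODE  NUMERIC(10),
    INST_CODE     NUMERIC(10),
    SCENARIO_CODE NUMERIC(10)
    -- ...
)
PARTITION BY HASH ( COUNTRY_CODE );
```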
Mark

From: iqug-b...@iqug.org [mailto:iqug-b...@iqug.org]
On Behalf Of Sengul Tasdemir
Sent: Tuesday, August 19, 2014 03:54
To: iq...@googlegroups.com; Soundy, Richard; iq...@iqug.org; iq...@dssolutions.com
Subject: [IQUG] IQ - composite-partitioning-scheme
Dear All,
Any help converting the following Oracle partition scheme to IQ will be highly appreciated:
create table MULTI_LGM
(
COUNTRY_CODE NUMBER,
INST_CODE NUMBER,
SCENARIO_CODE NUMBER,
…….
)
partition by list (COUNTRY_CODE)
subpartition by list (INST_CODE)
(partition PAR_EUROPE values (2) (subpartition SUB_EU_INST values (1)),
partition PAR_OTHER_EU values (3) (subpartition SUB_OTHER_EU_INST values (1)),
partition PAR_SE_ASIA values (19) (subpartition SUB_SE_ASIA_INST values (1) ),
partition PAR_BAHRAIN values (48) (subpartition SUB_BAHRAIN_INST values (1) ),
partition PAR_CANADA values (124) (subpartition SUB_CANADA_INST values (1) ),
partition PAR_CHINA values (156) (subpartition SUB_CHINA_INST values (1) ),
partition PAR_CYPRUS values (196) (subpartition SUB_CYPRUS_INST values (1) ),
partition PAR_PALESTINE values (275) (subpartition SUB_PALESTINE_INST values (1)),
partition PAR_IRAQ values (368) (subpartition SUB_IRAQ_INST values (1)),
partition PAR_JAPAN values (392) (subpartition SUB_JAPAN_INST values (1)),
partition PAR_JORDAN values (400) (subpartition SUB_JORDAN_INST values (1)),
partition PAR_KUWAIT values (414) (subpartition SUB_KUWAIT_INST values (1)),
partition PAR_LEBANON values (422) (subpartition SUB_LEBANON_INST values (1)),
partition PAR_OMAN values (512) (subpartition SUB_OMAN_INST values (1)),
partition PAR_QATAR values (634) (subpartition SUB_QATAR_INST values (1)),
partition PAR_KSA values (682) (subpartition SUB_KSA_INST values (1)),
partition PAR_SYRIA values (760) (subpartition SUB_SYRIA_INST values (1)),
partition PAR_UAE values (784) (subpartition SUB_UAE_INST values (1)),
partition PAR_EGYPT values (818) (subpartition SUB_EGYPT_INST values (1)),
partition PAR_USA values (840) (subpartition SUB_USA_INST values (1)),
partition PAR_OTHERS values (21) (subpartition SUB_OTHERS_INST values (1)),
partition PAR_OTHER_CTRY values (8) (subpartition SUB_OTHER_CTRY_INST values (1)))
Best Regards,
Sengul