> You wrote that it's a "performance problem". You also wrote that you
> need "compact database at runtime". Are you *sure* that compacting the
> database at runtime would solve the performance problem? If yes, why
> do you think so?
>
Hi Thomas,
Thanks for your support. I believe that maintaining data locality and
keeping the data pages compact at runtime would preserve the database
performance. Here are some investigations that explain why.
== PART 1 =======================================================
This is a continued investigation of the slow H2 1.2.135 database dump
(2.6 GB) I posted earlier, which contains a huge table with 990401 rows.
I took a closer look at the .sql file generated by the H2 1.2.144
recover tool and found some interesting statistics about the data pages
(data leaf and data node), both for the slow database dump and for the
defragmented slow database dump.
In the tables below, each row shows the page statistics of one
particular table. The columns have the following meanings:
1. table_id: The ID of the table the pages belong to.
2. total_dist: The sum of the distances from parent page to child page,
where distance is defined as abs(parent page id - child page id).
3. total_entry: The total number of entries in all pages that belong to
the table.
4. total_pages: The number of pages used by the table.
5. ave_distance_p: The average parent-to-child distance per data page.
6. ave_entries_p: The average number of entries (rows) per page for the
table.
7. table_scan_dist: (total_entry / ave_entries_p) * ave_distance_p.
This number estimates the distance traveled when scanning through every
entry of every page (see the sketch below).
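To make the metrics concrete, here is a minimal sketch of how the
numbers in the tables below can be computed from the per-page header
lines in the recover output. The PageInfo class and how it gets parsed
are hypothetical; only the distance and table_scan_dist formulas come
from the definitions above. Note that table_scan_dist algebraically
reduces to total_dist, which is why those two columns always match.

import java.util.*;

// Hypothetical record for one page, taken from recover-tool comments such as
// "-- page 35592: data leaf (last) parent: 35384 table: 8 entries: 2".
class PageInfo {
    final int pageId, parentId, tableId, entries;
    PageInfo(int pageId, int parentId, int tableId, int entries) {
        this.pageId = pageId;
        this.parentId = parentId;
        this.tableId = tableId;
        this.entries = entries;
    }
}

public class PageStats {
    // Prints one row of the statistics tables for the given table id.
    static void printStats(int tableId, List<PageInfo> pages) {
        long totalDist = 0, totalEntry = 0, totalPages = 0;
        for (PageInfo p : pages) {
            if (p.tableId != tableId) {
                continue;
            }
            totalDist += Math.abs(p.parentId - p.pageId); // distance to parent
            totalEntry += p.entries;
            totalPages++;
        }
        double aveDistanceP = (double) totalDist / totalPages;
        double aveEntriesP = (double) totalEntry / totalPages;
        // (total_entry / ave_entries_p) * ave_distance_p
        // == total_pages * ave_distance_p == total_dist.
        double tableScanDist = (totalEntry / aveEntriesP) * aveDistanceP;
        System.out.println(tableId + "  " + totalDist + "  " + totalEntry
                + "  " + totalPages + "  " + aveDistanceP + "  "
                + aveEntriesP + "  " + tableScanDist);
    }
}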
Slow H2 database dump before defrag.
---Data leaf statistic------------------------------------------
table_id  total_dist   total_entry  total_pages  ave_distance_p  ave_entries_p  table_scan_dist  num_of_columns
23        2.66223E+09  1.53001E+06  341736       7790.31         4.48           2.66223E+09      7
26        1.04526E+08  13382        2433         42961.85        5.50           1.04526E+08      15
8         3.21532E+09  990401       432033       7442.30         2.29           3.21532E+09      12
29        1.44299E+08  382502       41811        3451.22         9.15           1.44299E+08      30
11        2.90799E+08  34781        6929         41968.40        5.02           2.90799E+08      15
14        2.28191E+08  990401       92931        2455.49         10.66          2.28191E+08      31
17        2.1182E+09   93965        6524         324678.16       14.40          2.1182E+09       15
32        6.90482E+08  31264        2221         310887.94       14.08          6.90482E+08      15
35        1.02124E+08  382502       19781        5162.71         19.34          1.02124E+08      13
-1        4            21           1            4.00            21.00          4                6
20        1.93429E+08  990401       65761        2941.39         15.06          1.93429E+08      15
0         4183         37           8            522.88          4.62           4183             4
---Data leaf statistic END--------------------------------------
---Data node statistic------------------------------------------
table_id  total_dist   total_entry  total_pages  ave_distance_p  ave_entries_p  table_scan_dist
23        2.44981E+08  341735       17558        13952.68        19.46          2.44981E+08
26        9.57232E+06  2432         19           503806.32       128.00         9.57232E+06
8         2.62246E+08  432032       18632        14075.04        23.19          2.62246E+08
29        1.6465E+08   41810        288          571701.21       145.17         1.6465E+08
11        7.42887E+07  6928         138          538323.67       50.20          7.42887E+07
14        1.18546E+08  92930        645          183792.47       144.08         1.18546E+08
17        4.39871E+07  6523         73           602562.47       89.36          4.39871E+07
32        9.39564E+06  2220         18           521979.89       123.33         9.39564E+06
35        7.82839E+07  19780        137          571415.65       144.38         7.82839E+07
20        1.36796E+08  65760        457          299335.48       143.89         1.36796E+08
0         5            7            1            5.00            7.00           5
---Data node statistic END--------------------------------------
Slow H2 database dump after defrag.
---Data leaf statistic------------------------------------------
table_id  total_dist   total_entry  total_pages  ave_distance_p  ave_entries_p  table_scan_dist  num_of_columns
23        1.10467E+07  1.53001E+06  341736       32.33           4.48           1.10467E+07      7
26        174937       13382        2433         71.90           5.50           174937           15
8         1.34723E+07  990401       432033       31.18           2.29           1.34723E+07      12
29        3.07153E+06  382502       41811        73.46           9.15           3.07153E+06      30
11        292073       34781        6929         42.15           5.02           292073           15
14        6.79585E+06  990401       92931        73.13           10.66          6.79585E+06      31
17        390990       93965        6524         59.93           14.40          390990           15
32        163685       31264        2221         73.70           14.08          163685           15
35        1.44913E+06  382502       19781        73.26           19.34          1.44913E+06      13
-1        4            21           1            4.00            21.00          4                6
0         9.1296E+06   37           8            1141200.50      4.62           9.1296E+06       4
20        4.80729E+06  990401       65761        73.10           15.06          4.80729E+06      15
---Data leaf statistic END--------------------------------------
---Data node statistic------------------------------------------
table_id  total_dist   total_entry  total_pages  ave_distance_p  ave_entries_p  table_scan_dist
23        4.65401E+07  341735       17558        2650.65         19.46          4.65401E+07
26        22544        2432         19           1186.53         128.00         22544
8         6.13696E+07  432032       18632        3293.77         23.19          6.13696E+07
29        7.08489E+06  41810        288          24600.32        145.17         7.08489E+06
11        1.62148E+06  6928         138          11749.88        50.20          1.62148E+06
14        7.77253E+06  92930        645          12050.43        144.08         7.77253E+06
17        759320       6523         73           10401.64        89.36          759320
32        15671        2220         18           870.61          123.33         15671
35        2.4549E+06   19780        137          17918.99        144.38         2.4549E+06
20        6.02871E+06  65760        457          13191.93        143.89         6.02871E+06
0         5            7            1            5.00            7.00           5
---Data node statistic END--------------------------------------
The figures above show the statistics for each table. Let's first focus
on table 8 (the table with 990401 rows) and compare the before-defrag
and after-defrag results. The ave_distance_p decreased dramatically,
from 7442.30 to 31.18 for the data leaf pages and from 14075.04 to
3293.77 for the data node pages. As a result, table_scan_dist decreased
from 3.21532E+09 to 1.34723E+07 for the data leaf pages, and from
2.62246E+08 to 6.13696E+07 for the data node pages. The ave_distance_p
decreased because SHUTDOWN DEFRAG reorders the data pages. It would be
great if we could reorder the pages online, so that we could increase
data locality and improve the table scan time.
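For reference, the defragmented copies were produced by closing the
database with SHUTDOWN DEFRAG. A minimal JDBC sketch of that step
follows; the class name and wiring are just illustration, and the URL
and credentials are the ones from my synthetic test.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class DefragDatabase {
    public static void main(String[] args) throws Exception {
        Class.forName("org.h2.Driver");
        // Open the database that should be defragmented.
        Connection conn = DriverManager.getConnection(
                "jdbc:h2:bulktest", "bulktest", "bulktest");
        Statement stat = conn.createStatement();
        // Closes the database and re-orders the pages while doing so;
        // this is what produced the "after defrag" statistics.
        stat.execute("SHUTDOWN DEFRAG");
        // The shutdown closes all connections, so nothing more to do here.
    }
}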
Also, I noticed that the average number of entries (rows) per page for
table 8 is only 2.29, so the pages are far from fully utilized. I am
wondering why that empty space is not being reused.
---Example of a table 8 data page--------------------------------------
-- page 35592: data leaf (last) parent: 35384 table: 8 entries: 2 columns: 12
-- empty: 1908
-- [0] storage: 8 key: 3749183 off: 1989
INSERT INTO O_8 VALUES('130678', TIMESTAMP '2010-10-11 22:00:00.0',
0.0038295, TIMESTAMP '2010-10-11 23:00:00.0', POWER(0, -1), 18,
0.4537950827631001, '39689', 1, 0.0, 0.0, 0.0);
-- [1] storage: 8 key: 3749189 off: 1931
INSERT INTO O_8 VALUES('132065', TIMESTAMP '2010-10-11 22:00:00.0',
0.0038364407540394546, TIMESTAMP '2010-10-11 23:00:00.0', POWER(0,
-1), 12, 0.3840540144425778, '40068', 1, 0.0, 0.0, 0.0);
---Example of a table 8 data page END----------------------------------
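As a rough utilization check on that page (assuming the default 2 KB,
i.e. 2048-byte, page size): the dump reports 1908 bytes empty, so
roughly 93% of the page is unused even though it holds only 2 rows,
which matches the low ave_entries_p of 2.29 for table 8 above.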
== PART 1 END ===================================================
== PART 2 =======================================================
Among those 10 tables, three are updated very frequently and at roughly
the same rate. Table 8 from the previous analysis is one of them: it
receives at least 166 updates per second on average in our production
system, and about 8000 rows are inserted into it per hour, which
implies each row in table 8 is updated about 74 times per hour:
166 updates/sec * 60 * 60 = 597600 updates per hour.
597600 updates per hour / 8000 rows inserted per hour ≈ 74 updates
per row.
Based on the above calculation, I came up with the following test case
and tried to reproduce the slow table scan issue.
Synthetic database setup query:
1. Create 10 tables (data_table_0 ~ data_table_9) with exactly the
same schema as table 8, where the ID and START_TIME columns form the
primary key.
2. The granularity of START_TIME is one hour. The query inserts around
8000 rows per table for every hour, so it inserts 80000 rows in total
into the H2 database for one hour's worth of data.
3. The query guarantees that every insert into data_table_2 and
data_table_9 is followed by 70 updates, so these two tables receive
many more updates than the other tables. An insert into any other
table is followed by 0 or 1 updates.
4. The query generates around 5 days' worth of data.
After creating the data, I ran the same analysis and found a data
signature similar to that of the slow dump. Let's focus on the
statistics for table 44 (this could be data_table_2 or data_table_9)
and compare the before-defrag and after-defrag results. The
ave_distance_p decreased dramatically, from 138742.19 to 51.80 for the
data leaf pages and from 136037.83 to 4129.40 for the data node pages.
As a result, table_scan_dist decreased from 1.50025E+10 to 5.60128E+06
for the data leaf pages and from 2.77653E+08 to 8.4281E+06 for the
data node pages.
I ran the same table scan query that was run against the slow database
dump. The query on data_table_9 (the table with a lot of updates) took
881 seconds, while the query on data_table_1 (the table with few
updates) took 266 seconds. Comparing the before-defrag and after-defrag
results, the time for the query on data_table_9 decreased from 881
seconds to 11 seconds, and the time for the query on data_table_1
decreased from 266 seconds to 10 seconds.
I am wondering why the table with a lot of update operations becomes
more fragmented than the tables with fewer updates. Is the update done
in place, or is it a two-step operation involving a delete and an
insert? I assume that if it is a two-step operation, there is some
overhead for maintaining the b-tree properties. In terms of database
compaction, should we expect the average entries per page of a heavily
updated table to be lower than that of a table with few updates?
Also, is there a way to partition an H2 table, so that we can optimize
the table scan query?
Thank you for reading through my test case. The queries and the
results for the part 2 analysis follow. I also attached the schema of
all the tables in production (with the columns and tables renamed :-).
Regards,
Sheng
Synthetic database before defrag
---Data leaf statistic------------------------------------------
table_id  total_dist   total_entry  total_pages  ave_distance_p  ave_entries_p  table_scan_dist  num_of_columns
40        1.39585E+09  941019       34574        40372.93        27.22          1.39585E+09      12
42        1.40683E+09  941018       34587        40674.98        27.21          1.40683E+09      12
44        1.50025E+10  941019       108132       138742.19       8.70           1.50025E+10      12
46        1.48978E+09  941018       34576        43087.04        27.22          1.48978E+09      12
10        515          1            1            515.00          1.00           515              1
48        1.46508E+09  941018       34579        42369.19        27.21          1.46508E+09      12
13        516          18           1            516.00          18.00          516              7
17        519          0            1            0.00            0.00           0                4
18        520          0            1            0.00            0.00           0                9
31        526          1            1            526.00          1.00           526              9
32        527          1            1            527.00          1.00           527              5
33        528          3            1            528.00          3.00           528              3
35        530          0            1            0.00            0.00           0                12
36        531          2            1            531.00          2.00           531              2
37        532          1            1            532.00          1.00           532              4
38        533          0            1            0.00            0.00           0                2
50        1.37756E+09  941019       34567        39851.83        27.22          1.37756E+09      12
52        1.42884E+09  941019       34580        41319.89        27.21          1.42884E+09      12
-1        4            33           1            4.00            33.00          4                6
54        1.46924E+09  941019       34577        42491.96        27.22          1.46924E+09      12
56        1.41919E+09  941019       34582        41038.34        27.21          1.41919E+09      12
0         401          59           18           22.28           3.28           401              4
58        1.52618E+10  941018       107744       141648.85       8.73           1.52618E+10      12
---Data leaf statistic END--------------------------------------
---Data node statistic------------------------------------------
table_id  total_dist   total_entry  total_pages  ave_distance_p  ave_entries_p  table_scan_dist
40        6.54102E+07  34573        241          271411.82       143.46         6.54102E+07
42        6.7081E+07   34586        242          277194.40       142.92         6.7081E+07
44        2.77653E+08  108131       2041         136037.83       52.98          2.77653E+08
46        6.56682E+07  34575        241          272482.01       143.46         6.56682E+07
48        6.6388E+07   34578        242          274330.45       142.88         6.6388E+07
50        6.59157E+07  34566        241          273509.11       143.43         6.59157E+07
52        6.66741E+07  34579        242          275512.88       142.89         6.66741E+07
54        6.65738E+07  34576        241          276239.71       143.47         6.65738E+07
56        6.75747E+07  34581        242          279234.11       142.90         6.75747E+07
0         514          18           1            514.00          18.00          514
58        2.67172E+08  107743       2028         131741.48       53.13          2.67172E+08
---Data node statistic END--------------------------------------
---Table scan query --------------------------------------------
SELECT * FROM DATA_TABLE_9 ORDER BY ID, START_TIME DESC LIMIT 1;
...
(1 row, 881347 ms)
SELECT * FROM DATA_TABLE_1 ORDER BY ID, START_TIME DESC LIMIT 1;
...
(1 row, 266979 ms)
---Table scan query END------------------------------------------
Synthetic database after defrag
---Data leaf statistic------------------------------------------
table_id  total_dist   total_entry  total_pages  ave_distance_p  ave_entries_p  table_scan_dist  num_of_columns
40        2.51672E+06  941019       34574        72.79           27.22          2.51672E+06      12
42        2.50245E+06  941018       34587        72.35           27.21          2.50245E+06      12
44        5.60128E+06  941019       108132       51.80           8.70           5.60128E+06      12
46        2.51696E+06  941018       34576        72.80           27.22          2.51696E+06      12
10        634513       1            1            634513.00       1.00           634513           1
48        2.50146E+06  941018       34579        72.34           27.21          2.50146E+06      12
13        634514       18           1            634514.00       18.00          634514           7
17        634517       0            1            0.00            0.00           0                4
18        634518       0            1            0.00            0.00           0                9
31        634522       1            1            634522.00       1.00           634522           9
32        634519       1            1            634519.00       1.00           634519           5
33        634521       3            1            634521.00       3.00           634521           3
35        634520       0            1            0.00            0.00           0                12
36        634516       2            1            634516.00       2.00           634516           2
37        634515       1            1            634515.00       1.00           634515           4
38        634523       0            1            0.00            0.00           0                2
50        2.51483E+06  941019       34567        72.75           27.22          2.51483E+06      12
52        2.50168E+06  941019       34580        72.34           27.21          2.50168E+06      12
-1        4            33           1            4.00            33.00          4                6
54        2.51741E+06  941019       34577        72.81           27.22          2.51741E+06      12
56        2.50196E+06  941019       34582        72.35           27.21          2.50196E+06      12
0         152          59           18           8.44            3.28           152              4
58        5.59454E+06  941018       107744       51.92           8.73           5.59454E+06      12
---Data leaf statistic END--------------------------------------
---Data node statistic------------------------------------------
table_id  total_dist   total_entry  total_pages  ave_distance_p  ave_entries_p  table_scan_dist
40        4.50559E+06  34573        241          18695.39        143.46         4.50559E+06
42        4.49204E+06  34586        242          18562.14        142.92         4.49204E+06
44        8.4281E+06   108131       2041         4129.40         52.98          8.4281E+06
46        4.18836E+06  34575        241          17379.10        143.46         4.18836E+06
48        4.44449E+06  34578        242          18365.68        142.88         4.44449E+06
50        4.36181E+06  34566        241          18098.81        143.43         4.36181E+06
52        4.58832E+06  34579        242          18959.99        142.89         4.58832E+06
54        4.60164E+06  34576        241          19093.96        143.47         4.60164E+06
56        4.68439E+06  34581        242          19356.99        142.90         4.68439E+06
0         634526       18           1            634526.00       18.00          634526
58        7.73303E+06  107743       2028         3813.13         53.13          7.73303E+06
---Data node statistic END--------------------------------------
---Table scan query --------------------------------------------
SELECT * FROM DATA_TABLE_9 ORDER BY ID, START_TIME DESC LIMIT 1;
...
(1 row, 11596 ms)
SELECT * FROM DATA_TABLE_1 ORDER BY ID, START_TIME DESC LIMIT 1;
...
(1 row, 10143 ms)
---Table scan query END-----------------------------------------
---The query for creating the synthetic database----------------
CREATE ALIAS CREATETABLES AS $$
import java.sql.*;
import java.util.*;
import org.h2.jdbcx.JdbcConnectionPool;
@CODE
void createTables() throws Exception {
    JdbcConnectionPool cp = JdbcConnectionPool.create(
            "jdbc:h2:bulktest", "bulktest", "bulktest");
    Connection conn = cp.getConnection();
    final int numOfTables = 10;
    for (int tid = 0; tid < numOfTables; tid++) {
        String createTableQuery = String.format((Locale) null,
                "CREATE CACHED TABLE PUBLIC.DATA_TABLE_%d(" +
                " ID VARCHAR(255) NOT NULL," +
                " START_TIME TIMESTAMP NOT NULL," +
                " C1 DOUBLE," +
                " C2 TIMESTAMP," +
                " C3 DOUBLE," +
                " C4 INTEGER," +
                " C5 DOUBLE," +
                " C6 VARCHAR(255)," +
                " C7 INTEGER," +
                " C8 DOUBLE," +
                " C9 DOUBLE," +
                " C10 DOUBLE" +
                ");",
                tid);
        String createIndexQuery = String.format((Locale) null,
                "CREATE PRIMARY KEY PUBLIC.PRIMARY_KEY_%d" +
                " ON PUBLIC.DATA_TABLE_%d(ID, START_TIME)",
                tid, tid);
        conn.createStatement().execute(createTableQuery);
        conn.createStatement().execute(createIndexQuery);
    }
    conn.close();
    cp.dispose();
}
$$;
SELECT CREATETABLES();
CREATE ALIAS BULKQUERY AS $$
import java.sql.*;
import java.util.*;
import org.h2.jdbcx.JdbcConnectionPool;
@CODE
void bulkQuery() throws Exception {
    JdbcConnectionPool cp = JdbcConnectionPool.create(
            "jdbc:h2:bulktest", "bulktest", "bulktest");
    final int rowNum = 8000;
    final int dayNum = 5;
    final int biasUpdate = 70;
    final double updatePercent = 0.2;
    final int hourNum = 24;
    final int numOfTables = 10;
    final int[] ranArray = new int[numOfTables];
    for (int i = 0; i < ranArray.length; i++) {
        ranArray[i] = i;
    }
    Random ran = new Random();
    for (int day = 1; day < dayNum + 1; day++) {
        for (int hour = 0; hour < hourNum; hour++) {
            int actualUpdateNum = 0;
            int actualInsertNum = 0;
            // Inserting rows for an hour.
            final long insertStartTime = System.currentTimeMillis();
            for (int row = 0; row < rowNum; row++) {
                // Randomize the table id array.
                for (int i = 0; i < ranArray.length; i++) {
                    int ranInt = ran.nextInt(ranArray.length);
                    int temp = ranArray[i];
                    ranArray[i] = ranArray[ranInt];
                    ranArray[ranInt] = temp;
                }
                for (int tid : ranArray) {
                    Connection conn = cp.getConnection();
                    // 1 insert.
                    String insertQuery = String.format((Locale) null,
                            "INSERT INTO PUBLIC.DATA_TABLE_%d VALUES('%s'," +
                            " TIMESTAMP '2010-10-%02d %02d:00:00.0', 0.0," +
                            " TIMESTAMP '2010-12-31 00:00:00.0', 0.0, 0, 0.0," +
                            " '41129', 1, 0.0, 0.0, 0.0)",
                            tid, row, day, hour);
                    actualInsertNum++;
                    conn.createStatement().execute(insertQuery);
                    // Random and biased updates.
                    double ranUpdate = ran.nextDouble();
                    int bias = ranUpdate <= updatePercent ? 1 : 0;
                    if (9 == tid || 2 == tid) {
                        bias = biasUpdate;
                    }
                    for (int update = 0; update < bias; update++) {
                        String updateQuery = String.format((Locale) null,
                                "UPDATE PUBLIC.DATA_TABLE_%d SET ID=ID," +
                                " START_TIME=START_TIME, C1=C1, C2=C2, C3=C3," +
                                " C4=C4+1, C5=C5, C6=C6, C7=C7, C8=C8, C9=C9," +
                                " C10=C10 WHERE ID = '%s' AND START_TIME =" +
                                " TIMESTAMP '2010-10-%02d %02d:00:00.0'",
                                tid, row, ran.nextInt(day) + 1,
                                ran.nextInt(hour + 1));
                        actualUpdateNum++;
                        conn.createStatement().execute(updateQuery);
                    }
                    conn.close();
                }
            }
            final long insertEndTime = System.currentTimeMillis();
            final long insertElapsed = insertEndTime - insertStartTime;
            System.out.println("Day:" + day + " hour:" + hour
                    + " --> Inserted " + actualInsertNum
                    + " Updated " + actualUpdateNum
                    + " rows in " + insertElapsed + " millis");
        }
    }
    cp.dispose();
}
$$;
SELECT BULKQUERY();
---The query for creating the synthetic database END------------
FYI: schema for all tables. Table 8 in part 1 and all of the tables in
part 2 use the schema of table T6.
---Schema for all tables----------------------------------------
CREATE CACHED TABLE PUBLIC.T1(
ID VARCHAR(255) NOT NULL,
C1 DOUBLE NOT NULL,
C2 DOUBLE,
C3 NOT NULL,
C4 DOUBLE,
C5 DOUBLE NOT NULL,
C6 TIMESTAMP,
C7 DOUBLE NOT NULL,
C8 DOUBLE NOT NULL,
C9 DOUBLE,
C10 NOT NULL,
C11 DOUBLE,
C12 NOT NULL,
C13 DOUBLE,
START_TIME TIMESTAMP
)
CREATE CACHED TABLE PUBLIC.T2(
ID VARCHAR(255) NOT NULL,
START_TIME TIMESTAMP NOT NULL,
C1 DOUBLE NOT NULL,
C2 DOUBLE,
C3 DOUBLE NOT NULL,
C4 DOUBLE,
C5 DOUBLE NOT NULL,
C6 TIMESTAMP,
C7 DOUBLE NOT NULL,
C8 DOUBLE NOT NULL,
C9 DOUBLE,
C10 DOUBLE NOT NULL,
C11 DOUBLE,
C12 NOT NULL,
C13 DOUBLE
)
CREATE CACHED TABLE PUBLIC.T3(
ID VARCHAR(255) NOT NULL,
START_TIME TIMESTAMP NOT NULL,
C1 DOUBLE,
C2 DOUBLE,
C3 DOUBLE,
C4 TIMESTAMP,
C5 BIT,
C6 BIT,
C7 DOUBLE,
C8 TIMESTAMP,
C9 DOUBLE,
C10 DOUBLE,
C11 DOUBLE,
C12 DOUBLE,
C13 DOUBLE,
C14 DOUBLE,
C15 DOUBLE,
C16 DOUBLE,
C17 DOUBLE,
C18 DOUBLE,
C19 DOUBLE,
C20 DOUBLE,
C21 DOUBLE,
C22 DOUBLE,
C23 DOUBLE,
C24 DOUBLE,
C25 DOUBLE,
C26 INTEGER,
C27 DOUBLE,
C28 DOUBLE,
C29 VARCHAR(255)
)
CREATE CACHED TABLE PUBLIC.T4(
ID VARCHAR(255) NOT NULL,
START_TIME TIMESTAMP NOT NULL,
C1 DOUBLE,
C2 TIMESTAMP,
C3 DOUBLE,
C4 INTEGER,
C5 DOUBLE
)
CREATE CACHED TABLE PUBLIC.T5(
ID VARCHAR(255) NOT NULL,
C1 DOUBLE NOT NULL,
C2 DOUBLE,
C3 DOUBLE NOT NULL,
C4 DOUBLE,
C5 DOUBLE NOT NULL,
C6 TIMESTAMP,
C7 DOUBLE NOT NULL,
C8 DOUBLE NOT NULL,
C9 DOUBLE,
C10 DOUBLE NOT NULL,
C11 DOUBLE,
C12 DOUBLE NOT NULL,
C13 DOUBLE,
START_TIME TIMESTAMP
)
CREATE CACHED TABLE PUBLIC.T6(
ID VARCHAR(255) NOT NULL,
START_TIME TIMESTAMP NOT NULL,
C1 DOUBLE,
C2 TIMESTAMP,
C3 DOUBLE,
C4 INTEGER,
C5 DOUBLE,
C6 VARCHAR(255),
C7 INTEGER,
C8 DOUBLE,
C9 DOUBLE,
C10 DOUBLE
)
CREATE CACHED TABLE PUBLIC.T7(
ID VARCHAR(255) NOT NULL,
START_TIME TIMESTAMP NOT NULL,
C1 DOUBLE,
C2 DOUBLE,
C3 DOUBLE,
C4 DOUBLE,
C5 TIMESTAMP,
C6 TIMESTAMP,
C7 DOUBLE,
C8 DOUBLE,
C9 DOUBLE,
C10 DOUBLE,
C11 VARCHAR(255),
C12 VARCHAR(255),
C13 VARCHAR(255)
)
CREATE CACHED TABLE PUBLIC.T8(
ID VARCHAR(255) NOT NULL,
START_TIME TIMESTAMP NOT NULL,
C1 DOUBLE,
C2 DOUBLE,
C3 DOUBLE,
C4 TIMESTAMP,
C5 BIT,
C6 BIT,
C7 DOUBLE,
C8 TIMESTAMP,
C9 DOUBLE,
C10 DOUBLE,
C11 DOUBLE,
C12 DOUBLE,
C13 DOUBLE,
C14 DOUBLE,
C15 DOUBLE,
C16 DOUBLE,
C17 DOUBLE,
C18 DOUBLE,
C19 DOUBLE,
C20 DOUBLE,
C21 DOUBLE,
C22 DOUBLE,
C23 DOUBLE,
C24 DOUBLE,
C25 DOUBLE,
C26 INTEGER,
C27 DOUBLE,
C28 DOUBLE
)
CREATE CACHED TABLE PUBLIC.T9(
ID VARCHAR(255) NOT NULL,
START_TIME TIMESTAMP NOT NULL,
C1 DOUBLE,
C2 DOUBLE,
C3 DOUBLE,
C4 DOUBLE,
C5 TIMESTAMP,
C6 TIMESTAMP,
C7 DOUBLE,
C8 DOUBLE,
C9 DOUBLE,
C10 DOUBLE,
C11 VARCHAR(255)
)
CREATE CACHED TABLE PUBLIC.T10(
ID VARCHAR(255) NOT NULL,
START_TIME TIMESTAMP NOT NULL,
C1 DOUBLE NOT NULL,
C2 DOUBLE,
C3 DOUBLE NOT NULL,
C4 DOUBLE,
C5 DOUBLE NOT NULL,
C6 TIMESTAMP,
C7 DOUBLE NOT NULL,
C8 DOUBLE NOT NULL,
C9 DOUBLE,
C10 DOUBLE NOT NULL,
C11 DOUBLE,
C12 DOUBLE NOT NULL,
C13 DOUBLE
)
CREATE PRIMARY KEY PUBLIC.PRIMARY_KEY_20 ON PUBLIC.T1(ID)
CREATE PRIMARY KEY PUBLIC.PRIMARY_KEY_E1 ON PUBLIC.T2(ID, START_TIME)
CREATE PRIMARY KEY PUBLIC.PRIMARY_KEY_C ON PUBLIC.T3(ID, START_TIME)
CREATE PRIMARY KEY PUBLIC.PRIMARY_KEY_D ON PUBLIC.T4(ID, START_TIME)
CREATE PRIMARY KEY PUBLIC.PRIMARY_KEY_E ON PUBLIC.T5(ID)
CREATE PRIMARY KEY PUBLIC.PRIMARY_KEY_1 ON PUBLIC.T6(ID, START_TIME)
CREATE PRIMARY KEY PUBLIC.PRIMARY_KEY_2 ON PUBLIC.T7(ID, START_TIME)
CREATE PRIMARY KEY PUBLIC.PRIMARY_KEY_1F ON PUBLIC.T8(ID, START_TIME)
CREATE PRIMARY KEY PUBLIC.PRIMARY_KEY_EE ON PUBLIC.T9(ID, START_TIME)
CREATE PRIMARY KEY PUBLIC.PRIMARY_KEY_7 ON PUBLIC.T10(ID, START_TIME)
---Schema for all tables END------------------------------------
== PART 2 END =======================================================