blr_sort, 0


Dimitry Sibiryakov

May 26, 2025, 12:47:15 PM
to firebir...@googlegroups.com
Hello All,

I've found the subject (blr_sort, 0) in some test queries.

AFAICS, blr_sort has as its first parameter the number of fields/values in the "by" clause. How is sorting by an empty list supposed to work?
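
For comparison, a populated sort clause carries one direction verb and one value expression per counted item:

   blr_sort, 1,          /* one item in the "by" list */
      blr_ascending,     /* direction of item 1 */
      blr_fid, 0, 5,0,   /* value: field 5 of stream 0 */

So "blr_sort, 0" is a sort verb followed by no items at all.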

--
WBR, SD.

Dmitry Yemanov

May 26, 2025, 12:50:51 PM
to firebir...@googlegroups.com
It's ignored, see PAR_sort().
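
A minimal sketch of that behaviour, with illustrative stand-in types and helpers -- this is not the actual PAR_sort() code from src/jrd/par.cpp:

#include <cstdint>
#include <memory>
#include <vector>

// Illustrative stand-ins -- not Firebird's real types or opcode values.
enum Direction { ASC, DESC };
struct SortItem { Direction dir; uint8_t stream; uint16_t fieldId; };
struct SortNode { std::vector<SortItem> items; };

// Sketch of parsing blr_sort; 'p' points just past the verb itself.
std::unique_ptr<SortNode> parseSort(const uint8_t*& p, uint8_t blrAscending)
{
    const uint8_t count = *p++;            // first operand: item count
    if (count == 0)
        return nullptr;                    // empty "by" list: no sort node is built

    auto sort = std::make_unique<SortNode>();
    for (uint8_t i = 0; i < count; ++i)
    {
        const uint8_t direction = *p++;    // blr_ascending / blr_descending
        ++p;                               // skip the value verb (assume blr_fid)
        const uint8_t stream = *p++;       // context/stream number
        const uint16_t fieldId = p[0] | (p[1] << 8);  // little-endian field id
        p += 2;
        sort->items.push_back({ direction == blrAscending ? ASC : DESC,
                                stream, fieldId });
    }
    return sort;
}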


Dmitry

Dimitry Sibiryakov

May 26, 2025, 1:17:01 PM
to firebir...@googlegroups.com
Dmitry Yemanov wrote 26.05.2025 18:50:
> It's ignored, see PAR_sort().

Then what exactly is the second sort in the following query's plan, and why does it sort a key (BIGINT + CHAR(63) CHARACTER SET UTF8)?

select rc.rdb$relation_name, rc.rdb$index_name,
       dense_rank() over (partition by rc.rdb$relation_name
                          order by rc.rdb$index_name) dr1,
       count(*) over (partition by rc.rdb$relation_name) fld_cnt
from rdb$relation_constraints rc;

Select Expression
-> Window
-> Window Partition
-> Record Buffer (record length: 1331)
-> Refetch
-> Sort (record length: 1084, key length: 256)
-> Window Partition
-> Record Buffer (record length: 1050)
-> Sort (record length: 1060, key length: 512)
-> Window Partition
-> Window Buffer
-> Record Buffer (record length: 529)
-> Table
"RDB$RELATION_CONSTRAINTS" as "RC" Full Scan

Execution path (BLR):

0 blr_version5,
1 blr_begin,
2 blr_message, 1, 9,0,
6 blr_short, 0,
8 blr_short, 0,
10 blr_int64, 0,
12 blr_short, 0,
14 blr_int64, 0,
16 blr_short, 0,
18 blr_varying2, 4,0, 252,0,
23 blr_short, 0,
25 blr_varying2, 4,0, 252,0,
30 blr_for,
31 blr_stall,
32 blr_rse, 1,
34 blr_window,
35 blr_rse, 1,
37 blr_rid2, 22,0, 2, 'R','C', 0,
44 blr_end,
45 3,
46 blr_partition_by, 1,
48 0,
49 blr_sort, 0,
51 blr_map, 2,0,
54 0,0,
56 blr_fid, 0, 2,0,
60 1,0,
62 blr_fid, 0, 5,0,
66 blr_partition_by, 2,
68 1,
69 blr_fid, 0, 2,0,
73 blr_fid, 2, 1,0,
77 blr_sort, 1,
79 blr_ascending,
80 blr_fid, 0, 5,0,
84 blr_map, 2,0,
87 0,0,
89 blr_agg_function, 10,
'D','E','N','S','E','_','R','A','N','K',0,
102 1,0,
104 blr_fid, 0, 2,0,
108 blr_partition_by, 3,
110 1,
111 blr_fid, 0, 2,0,
115 blr_fid, 3, 1,0,
119 blr_sort, 0,
121 blr_map, 2,0,
124 0,0,
126 blr_agg_count,
127 1,0,
129 blr_fid, 0, 2,0,
133 blr_end,
134 blr_send, 1,
136 blr_begin,
137 blr_assignment,
138 blr_literal, blr_short, 0, 1,0,
143 blr_parameter, 1, 0,0,
147 blr_assignment,
148 blr_fid, 3, 0,0,
152 blr_parameter2, 1, 2,0, 1,0,
158 blr_assignment,
159 blr_fid, 2, 0,0,
163 blr_parameter2, 1, 4,0, 3,0,
169 blr_assignment,
170 blr_fid, 1, 1,0,
174 blr_parameter2, 1, 6,0, 5,0,
180 blr_assignment,
181 blr_fid, 1, 0,0,
185 blr_parameter2, 1, 8,0, 7,0,
191 blr_end,
192 blr_send, 1,
194 blr_assignment,
195 blr_literal, blr_short, 0, 0,0,
200 blr_parameter, 1, 0,0,
204 blr_end,
205 blr_eoc
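
Reading the window clauses back from this dump, the only populated sort belongs to the DENSE_RANK partition; the other two partitions carry the empty "by" list from the subject:

 46 blr_partition_by, 1  ...   49 blr_sort, 0     (no ORDER BY)
 66 blr_partition_by, 2  ...   77 blr_sort, 1,
                                  blr_ascending,
                                  blr_fid, 0, 5,0  (order by rdb$index_name)
108 blr_partition_by, 3  ...  119 blr_sort, 0     (no ORDER BY; the COUNT window)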



--
WBR, SD.

Dimitry Sibiryakov

May 27, 2025, 7:49:51 AM
to firebir...@googlegroups.com
Dimitry Sibiryakov wrote 26.05.2025 19:16:
>
> select rc.rdb$relation_name, rc.rdb$index_name,
>        dense_rank() over (partition by rc.rdb$relation_name
>                           order by rc.rdb$index_name) dr1,
>        count(*) over (partition by rc.rdb$relation_name) fld_cnt
> from rdb$relation_constraints rc;

BTW, I noticed another strange thing with this query: when I dumped the list of fields in the record buffers, there were many more of them than necessary.

> ==== 00000000064594E0 RecordBuffer 3: fields: 5, mappings 5, next 0000000006459000 ====
> Mapping field to stream 0, field 2
> Mapping field to stream 0, field 5
> Mapping tid to stream 0, field 0
> Mapping dbkey to stream 0, field 0
> Mapping dbkey valid to stream 0, field 0
> Field 0: type char, length 252, offset 4
> Field 1: type char, length 252, offset 256
> Field 2: type bigint, length 8, offset 512
> Field 3: type bigint, length 8, offset 520
> Field 4: type char, length 1, offset 528

This is the first record buffer, with the two necessary char fields: rdb$relation_name and rdb$index_name.
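
Its 529-byte record length (the innermost Record Buffer in the plan) is fully accounted for by the offsets above:

  4 (presumably null flags) + 2 x 252 (char fields)  ->  508, padded to 512
  + 2 x 8 (tid, dbkey)                               ->  528
  + 1 (dbkey valid flag)                             ->  529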

> ==== 00000000064599B0 WindowStream 5 (1): agg sources 0, agg targets 0, pass sources 0, pass targets 0, next 0000000006459AD0 ====
> ==== 000000000645A090 SortedStream 6: Keys 4, items 10, length 1060, key length 512, flags 0, next 00000000064599B0 ====
> Node item: stream 0, field 2, type char, length 252, offset 4, null offset 0
> Node item: stream 0, field 5, type char, length 252, offset 260, null offset 256
> Field item: stream 1, field 0, type char, length 252, offset 514, null offset 512
> Field item: stream 1, field 1, type char, length 252, offset 766, null offset 513
> Field item: stream 1, field -1, type bigint, length 8, offset 1024, null offset 0
> Field item: stream 1, field -3, type bigint, length 8, offset 1032, null offset 0
> Field item: stream 0, field -1, type bigint, length 8, offset 1040, null offset 0
> Field item: stream 0, field -3, type bigint, length 8, offset 1048, null offset 0
> Field item: stream 1, field -2, type char, length 1, offset 1056, null offset 0
> Field item: stream 0, field -2, type char, length 1, offset 1057, null offset 0
This is the first sort: two keys, two data fields. The overhead is still minimal...
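
The 1060-byte record and 512-byte key follow directly from this layout (padding as implied by the offsets):

  key part:   2 x (4-byte null word + CHAR(252))      ->  512
  data part:  2 null bytes (offsets 512, 513)
              + 2 x CHAR(252)                         -> 1018, padded to 1024
              + 4 x BIGINT (tid/dbkey of two streams) -> 1056
              + 2 x CHAR(1) (dbkey valid flags)       -> 1058, padded to 1060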

> ==== 000000000645A210 RecordBuffer 7: fields: 10, mappings 10, next 000000000645A090 ====
> Mapping field to stream 1, field 0
> Mapping field to stream 1, field 1
> Mapping field to stream 0, field 2
> Mapping field to stream 0, field 5
> Mapping tid to stream 1, field 0
> Mapping dbkey to stream 1, field 0
> Mapping tid to stream 0, field 0
> Mapping dbkey to stream 0, field 0
> Mapping dbkey valid to stream 1, field 0
> Mapping dbkey valid to stream 0, field 0
> Field 0: type char, length 252, offset 4
> Field 1: type char, length 252, offset 256
> Field 2: type char, length 252, offset 508
> Field 3: type char, length 252, offset 760
> Field 4: type bigint, length 8, offset 1016
> Field 5: type bigint, length 8, offset 1024
> Field 6: type bigint, length 8, offset 1032
> Field 7: type bigint, length 8, offset 1040
> Field 8: type char, length 1, offset 1048
> Field 9: type char, length 1, offset 1049

...but the result of the sort in the next record buffer includes these fields twice.

> ==== 000000000645A0F0 WindowStream 8 (2): agg sources 1, agg targets 1, pass sources 0, pass targets 0, next 000000000645A210 ====
> ==== 000000000645ADB0 SortedStream 9: Keys 2, items 14, length 1084, key length 256, flags 8, next 000000000645A0F0 ====
> Node item: stream 0, field 2, type char, length 252, offset 4, null offset 0
> Field item: stream 2, field 0, type bigint, length 8, offset 264, null offset 256
> Field item: stream 2, field 1, type char, length 252, offset 272, null offset 257
> Field item: stream 1, field 0, type char, length 252, offset 524, null offset 258
> Field item: stream 1, field 1, type char, length 252, offset 776, null offset 259
> Field item: stream 2, field -1, type bigint, length 8, offset 1032, null offset 0
> Field item: stream 2, field -3, type bigint, length 8, offset 1040, null offset 0
> Field item: stream 1, field -1, type bigint, length 8, offset 1048, null offset 0
> Field item: stream 1, field -3, type bigint, length 8, offset 1056, null offset 0
> Field item: stream 0, field -1, type bigint, length 8, offset 1064, null offset 0
> Field item: stream 0, field -3, type bigint, length 8, offset 1072, null offset 0
> Field item: stream 2, field -2, type char, length 1, offset 1080, null offset 0
> Field item: stream 1, field -2, type char, length 1, offset 1081, null offset 0
> Field item: stream 0, field -2, type char, length 1, offset 1082, null offset 0

And here is the second sort, which sorts a record that is 252 bytes bigger than necessary, because it includes not only the aggregate value from DENSE_RANK but also the partition value, which duplicates rdb$relation_name. Because of this, the following record buffer includes five string fields, of which only two are necessary (a byte-by-byte breakdown follows the dump below):

> ==== 000000000645AF30 RecordBuffer 10: fields: 15, mappings 15, next 000000000645ADB0 ====
> Mapping field to stream 2, field 0
> Mapping field to stream 2, field 1
> Mapping field to stream 1, field 0
> Mapping field to stream 1, field 1
> Mapping field to stream 0, field 2
> Mapping field to stream 0, field 5
> Mapping tid to stream 2, field 0
> Mapping dbkey to stream 2, field 0
> Mapping tid to stream 1, field 0
> Mapping dbkey to stream 1, field 0
> Mapping tid to stream 0, field 0
> Mapping dbkey to stream 0, field 0
> Mapping dbkey valid to stream 2, field 0
> Mapping dbkey valid to stream 1, field 0
> Mapping dbkey valid to stream 0, field 0
> Field 0: type bigint, length 8, offset 8
> Field 1: type char, length 252, offset 16
> Field 2: type char, length 252, offset 268
> Field 3: type char, length 252, offset 520
> Field 4: type char, length 252, offset 772
> Field 5: type char, length 252, offset 1024
> Field 6: type bigint, length 8, offset 1280
> Field 7: type bigint, length 8, offset 1288
> Field 8: type bigint, length 8, offset 1296
> Field 9: type bigint, length 8, offset 1304
> Field 10: type bigint, length 8, offset 1312
> Field 11: type bigint, length 8, offset 1320
> Field 12: type char, length 1, offset 1328
> Field 13: type char, length 1, offset 1329
> Field 14: type char, length 1, offset 1330
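
The same accounting for SortedStream 9 shows exactly where the avoidable 252 bytes sit (stream 2, field 1 -- the duplicated partition value):

  key part:   4-byte null word + CHAR(252) rdb$relation_name  ->  256
  data part:  4 null bytes (offsets 256..259)
              + BIGINT (DENSE_RANK value), 8-aligned at 264   ->  272
              + CHAR(252) duplicated partition value          ->  524
              + 2 x CHAR(252) pass-through fields             -> 1028, padded to 1032
              + 6 x BIGINT (tid/dbkey of three streams)       -> 1080
              + 3 x CHAR(1) (dbkey valid flags)               -> 1083, padded to 1084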

Is there a reason for that that I don't see?

--
WBR, SD.