strange behavior


Usama Dar

Apr 22, 2013, 11:53:35 AM
to supersonic-...@googlegroups.com
We have a table with 100M records and we are seeing a large difference in execution time between the approaches below. Are we doing something wrong? The question is why the last approach is so much faster. We expected a nested filter + aggregate to be as fast as, or faster than, doing a filter first and then an aggregate.


The following code takes 700 milliseconds to execute. Here we create an operation with a filter and an aggregation:

    scoped_ptr<Operation> named_columns(Project(ProjectRename(util::gtl::Container("F1", "F2"),
                                                        ProjectAllAttributes()), table));
    scoped_ptr<Operation> filter1(Filter(Less(AttributeAt(0), ConstInt32(200000)), ProjectAllAttributes(), named_columns.release()));
    scoped_ptr<Operation> filter2(Filter(Greater(AttributeAt(0), ConstInt32(0)), ProjectAllAttributes(), filter1.release()));

    Operation* op = AggregateClusters(new CompoundSingleSourceProjector(),
                            (new AggregationSpecification)->AddAggregation(MAX, "F1", "f1_max"),
                            filter2.release());

   /* Iterate through cursor to copy the result records to a block */
  FailureOrOwned<Cursor> cur = op->CreateCursor();
  if(cur.is_success()){
    ResultView  rv = cur->Next(-1);
    *result_space = new Block(cur->schema(), allocator);

    ViewCopier copier(cur->schema(), true);
    offset = 0;

    while (!rv.is_done()) {
       const View& view = rv.view();
       rowcount_t view_row_count = view.row_count();
       (*result_space)->Reallocate(offset + view_row_count);
       rowcount_t rows_copied = copier.Copy(view_row_count, view, offset, (*result_space));
       offset += rows_copied;
       rv = cur->Next(-1);
    }
  }


The code below takes 500 ms to execute. Here we are just doing an aggregation:


   
    Operation* op = AggregateClusters(new CompoundSingleSourceProjector(),
                            (new AggregationSpecification)->AddAggregation(MAX, "F1", "f1_max"),
                            table);

  FailureOrOwned<Cursor> cur = op->CreateCursor();
  if(cur.is_success()){
    ResultView  rv = cur->Next(-1);
    *result_space = new Block(cur->schema(), allocator);

    ViewCopier copier(cur->schema(), true);
    offset = 0;

    while (!rv.is_done()) {
       const View& view = rv.view();
       rowcount_t view_row_count = view.row_count();
       (*result_space)->Reallocate(offset + view_row_count);
       rowcount_t rows_copied = copier.Copy(view_row_count, view, offset, (*result_space));
       offset += rows_copied;
       rv = cur->Next(-1);
    }
  }

The code below takes 200 ms to execute. Here we run the filters first, materialize the result into a block, and pass that block to the aggregate operation:


    scoped_ptr<Operation> filter1(Filter(Less(AttributeAt(0), ConstInt32(200000)), ProjectAllAttributes(), table));
    Operation* op = Filter(Greater(AttributeAt(0), ConstInt32(0)), ProjectAllAttributes(), filter1.release());
   
  FailureOrOwned<Cursor> cur = op->CreateCursor();
  if(cur.is_success()){
    ResultView  rv = cur->Next(-1);
    *result_space = new Block(cur->schema(), allocator);

    ViewCopier copier(cur->schema(), true);
    offset = 0;

    while (!rv.is_done()) {
       const View& view = rv.view();
       rowcount_t view_row_count = view.row_count();
       (*result_space)->Reallocate(offset + view_row_count);
       rowcount_t rows_copied = copier.Copy(view_row_count, view, offset, (*result_space));
       offset += rows_copied;
       rv = cur->Next(-1);
    }
  }

/* pass the block just created from filter operation to aggregate operation */
 Operation* op1 = AggregateClusters(new CompoundSingleSourceProjector(),
                            (new AggregationSpecification)->AddAggregation(MAX, "F1", "f1_max"),
                            new Table(*result_space));

FailureOrOwned<Cursor> cur1 = op1->CreateCursor();
  if(cur1.is_success()){
    ResultView  rv = cur1->Next(-1);
    final_result_space = new Block(cur1->schema(), allocator);

    ViewCopier copier(cur1->schema(), true);
    offset = 0;

    while (!rv.is_done()) {
       const View& view = rv.view();
       rowcount_t view_row_count = view.row_count();
       final_result_space->Reallocate(offset + view_row_count);
       rowcount_t rows_copied = copier.Copy(view_row_count, view, offset, final_result_space);
       offset += rows_copied;
       rv = cur1->Next(-1);
    }
  }

Piotr Tabor

Apr 23, 2013, 7:46:37 AM
to Usama Dar, supersonic-...@googlegroups.com
Interesting case...

Still, I don't understand why you are using AggregateClusters with an empty set of clustered columns. Isn't that
semantically equivalent to ScalarAggregate?
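
Something like the sketch below is what I have in mind (just a sketch; I'm assuming the ScalarAggregate factory that takes an aggregation specification and the child operation, so please check the header for the exact signature):

    // Sketch: scalar aggregation over the filtered input, no clustering columns.
    Operation* op = ScalarAggregate(
        (new AggregationSpecification)->AddAggregation(MAX, "F1", "f1_max"),
        filter2.release());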

In the first example: you first filter by the first condition, then the matching data are compacted into blocks of 1024 rows, then
you filter by the second condition and the results are compacted again. Whether this strategy is better than using And(expression1, expression2) in a single filter depends
on the selectivity of your filters (do we copy a lot or not).
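
For reference, a single combined filter could look roughly like the sketch below (it reuses the names from your first example and assumes the standard And expression over the two comparisons):

    // Sketch: one Filter with both conditions combined, instead of two chained Filters.
    scoped_ptr<Operation> combined_filter(
        Filter(And(Greater(AttributeAt(0), ConstInt32(0)),
                   Less(AttributeAt(0), ConstInt32(200000))),
               ProjectAllAttributes(),
               named_columns.release()));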

In the second case there are no filters and no compaction, so it might be faster (especially if the selectivity of the filter is low).

The 3rd case is more interesting: here, after filtering, you are not creating blocks of 1024 rows but a single block that contains all the rows.
That might be more convenient for AggregateClusters, which needs to remember the last values from the previous block at each block border.

You can also try using the benchmark tool to visualize the data flow and statistics (in particular block sizes and selectivity).

Piotr


