Hi Team,
We recently trained a model named addcart_prob_model_web using Instant BQML to predict the probability of a product being added to the cart on our web platform. The entire pipeline runs successfully through CRMint.
As we analyze the output, we have several questions about the model evaluation metrics and the prediction-related tables that were generated automatically by the Instant BQML process. The evaluation metrics reported for the model are:
- Mean Absolute Error: 0.1222
- Mean Squared Error: 0.0488
- Mean Squared Log Error: 0.0246
- Median Absolute Error: 0.0536
- R² Score: 0.2212
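
For reference, these values come directly from ML.EVALUATE; a minimal sketch of the query we ran, with the project and dataset names as placeholders:

```sql
-- Retrieve the stored evaluation metrics for the trained model.
-- `your_project.your_dataset` is a placeholder for our actual dataset.
SELECT *
FROM ML.EVALUATE(MODEL `your_project.your_dataset.addcart_prob_model_web`);
```

We note that these are the metrics ML.EVALUATE reports for a regression model; a classifier would instead report precision, recall, log_loss, and roc_auc, which is part of what prompts the questions below.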
- Are these metrics within acceptable ranges for binary classification problems in e-commerce contexts (e.g., add-to-cart prediction)?
- How should we interpret the relatively low R² score of 0.22, which suggests the model explains only about 22% of the variance in the label? Is this expected for behavioral models trained via Instant BQML?
- Do you have any internal benchmarking data or best practices for interpreting these specific metrics when using Instant BQML?
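
Since add-to-cart is ultimately a binary outcome, we have also tried estimating ROC AUC ourselves from the scored output via the Mann-Whitney rank-sum identity. This is only a sketch: the `prob` and `label` column names are assumptions about the predictions schema, and rank ties are not averaged.

```sql
-- Approximate ROC AUC: AUC = (R_pos - n_pos*(n_pos+1)/2) / (n_pos*n_neg),
-- where R_pos is the rank sum of the positives when ordered by score.
WITH ranked AS (
  SELECT
    label,                            -- assumed 0/1 outcome column
    RANK() OVER (ORDER BY prob) AS r  -- assumed predicted-probability column
  FROM `your_project.your_dataset.addcart_prob_predictions_web`
)
SELECT
  (SUM(IF(label = 1, r, 0))
     - COUNTIF(label = 1) * (COUNTIF(label = 1) + 1) / 2)
    / (COUNTIF(label = 1) * COUNTIF(label = 0)) AS approx_roc_auc
FROM ranked;
```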
After running the pipeline, the following tables were created:
- addcart_prob_predictions_web
- addcart_prob_scored_users_log_web
- addcart_prob_uservaluemap_web
- addcart_prob_performance_insights
- addcart_prob_audience_boundaries_web
- addcart_prob_calculated_fields
- addcart_prob_calculated_session_id_timestamps
- addcart_prob_calculated_visitors
- addcart_prob_conversionvalues_web
- addcart_prob_measurement_protocol_formatted_session_attribution_web
- addcart_prob_measurement_protocol_formatted_web
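
While we wait for guidance, we have been mapping the tables ourselves by inspecting their schemas; a minimal sketch, again with placeholder project and dataset names:

```sql
-- List every column of the generated addcart_prob_* tables so we can
-- relate each table to its likely role in the pipeline.
SELECT table_name, column_name, data_type
FROM `your_project.your_dataset.INFORMATION_SCHEMA.COLUMNS`
WHERE STARTS_WITH(table_name, 'addcart_prob')
ORDER BY table_name, ordinal_position;
```

That said, schema inspection only gets us so far, so: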
- Could you give a brief explanation of the purpose of each table and how it should be used in practice (e.g., final scores, diagnostics, attribution, input references)?
- Could other auxiliary or optional tables also be created, depending on the configuration?
- And most importantly: is there official Instant BQML documentation or a guide that explains these table structures and their intended usage?
Best Regards,