How to Improve Model Results

Tom Butkevich

Oct 14, 2024, 11:56:10 AM
to Instant BQML and Vertex Users

Hi Google ML Team,

I have three models running on one website: one producing category propensity scores and two producing brand propensity scores.

However, I am seeing significant differences in the evaluation metrics between these models: the category model (Row 3) has a negative R² score, unlike the brand models, even though its absolute error metrics look fine. Below are the evaluation metrics for each:

  • Row 1 (Brand Model 1):

    • Mean Absolute Error: 0.0914
    • Mean Squared Error: 0.0361
    • Mean Squared Log Error: 0.0194
    • Median Absolute Error: 0.0297
    • R² Score: 0.0101
    • Explained Variance: 0.0392
  • Row 2 (Brand Model 2):

    • Mean Absolute Error: 0.1536
    • Mean Squared Error: 0.0647
    • Mean Squared Log Error: 0.0331
    • Median Absolute Error: 0.0431
    • R² Score: 0.5184
    • Explained Variance: 0.5257
  • Row 3 (Category Model):

    • Mean Absolute Error: 0.0290
    • Mean Squared Error: 0.0093
    • Mean Squared Log Error: 0.0049
    • Median Absolute Error: 0.0140
    • R² Score: -0.1376
    • Explained Variance: -0.1156
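For anyone puzzling over the negative R² above: R² compares the model's squared error against always predicting the mean of the target, so a negative value means the model fits worse than that trivial baseline. A minimal illustration (the numbers below are made up, not from these models):

```python
def r2_score(y_true, y_pred):
    """R^2 = 1 - SS_res / SS_tot (residual vs. mean-baseline squared error)."""
    mean_y = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean_y) ** 2 for t in y_true)
    return 1 - ss_res / ss_tot

y_true = [0.1, 0.2, 0.3, 0.4]
good   = [0.12, 0.18, 0.31, 0.39]  # tracks the target closely -> R^2 near 1
bad    = [0.4, 0.1, 0.4, 0.1]      # anti-correlated with target -> negative R^2

print(round(r2_score(y_true, good), 3))  # → 0.98
print(round(r2_score(y_true, bad), 3))   # → -3.0
```

So the category model's R² of -0.1376 says its predictions explain less variance than a constant prediction of the mean would.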

To expand the events the models consider, I modified the SQL scripts for all models: in the event_cnts portion, I replaced event_name = 'scroll' with event_name = 'search', and added event_name = 'view_item'.
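In case it helps to see it concretely, the change was along these lines (table and column names are simplified placeholders here, not the actual Instant BQML script):

```sql
-- event_cnts portion, after the change:
-- scroll counts replaced with search, and view_item added.
SELECT
  user_pseudo_id,
  SUM(IF(event_name = 'search', 1, 0))    AS search_cnt,     -- was 'scroll'
  SUM(IF(event_name = 'view_item', 1, 0)) AS view_item_cnt   -- newly added
FROM `project.dataset.events_*`  -- placeholder GA4 export table
GROUP BY user_pseudo_id
```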

Despite these adjustments, the category model's performance remains poor, and Brand Model 1's R² is also close to zero. I am looking for guidance on how to improve these models. Could you provide recommendations on tuning, feature engineering, or model selection that might help?

Thank you!

Tom
