Hey Glen,
If you know some Python, you could write a small script to delete the duplicates.
For example, you could use the library "pandas" to load the CSV and remove the duplicates by addressing the column "start_time":
import pandas as pd
# Load the CSV file into a DataFrame
df = pd.read_csv('data.csv')
# Remove duplicate rows based on the 'start_time' column
# (by default, the first occurrence of each value is kept)
df = df.drop_duplicates(subset='start_time')
# Save the cleaned DataFrame back to a CSV file
df.to_csv('cleaned_data.csv', index=False)
print("Duplicates removed and cleaned data saved to 'cleaned_data.csv'.")
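If you would rather keep the last row for each start_time instead of the first, drop_duplicates accepts a keep parameter. A minimal sketch (the column names and values here are just made-up illustration data, not from your file):

```python
import pandas as pd

# Illustration data with two rows sharing the same start_time
df = pd.DataFrame({
    'start_time': ['09:00', '09:00', '10:00'],
    'value': [1, 2, 3],
})

# keep='last' retains the final occurrence of each duplicated start_time;
# keep=False would drop every row that has a duplicate
deduped = df.drop_duplicates(subset='start_time', keep='last')
print(deduped)
```

If several columns together define a duplicate, you can also pass a list to subset, e.g. subset=['start_time', 'end_time'].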
Does this help?
Kind regards,
Nico