Mike Dwell <mike.dwel...@gmail.com> wrote:
>
> Let's say we want to assess the data quality of Company A's big data.
> Owing to security, privacy, and workload concerns, it is impossible to
> view or access A's whole data repository (data lake or data ocean).
>
> We can only request a sample of Company A's big data, in the hope that we can then apply some quality-assessment toolkit to it.
>
> My question is: how should we draw such a sample, and what requirements should we place on it?
>
> Moreover, Company A may "optimize" or "decorate" the sample it hands out. What scheme or mechanism design
> would let us guard against such "optimization" or "decoration"?
Hi. I don't think there is a generic quality-assessment approach, as
this depends so much on the nature of the work the company does, aside
from obvious things like range checks on dates (which they usually do
on the fly anyway) or batch-to-batch comparisons; a small sketch of
such checks is below. Can you, for example, validate data points
against external data sources?
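By way of illustration only, a minimal sketch of that sort of
screening in Python, assuming the sample arrives as a CSV with
hypothetical columns "event_date" and "amount" (both names invented
for the example):

    import pandas as pd

    # Hypothetical file and column names; adjust to the real schema.
    df = pd.read_csv("sample.csv", parse_dates=["event_date"])

    # Range check on dates: flag anything outside a plausible window.
    bad_dates = df[(df["event_date"] < "2000-01-01") |
                   (df["event_date"] > pd.Timestamp.today())]
    print(len(bad_dates), "rows with out-of-range dates")

    # Basic validity checks: missingness and impossible values.
    print(df.isna().mean())          # fraction missing, per column
    print((df["amount"] < 0).sum())  # e.g. negative amounts

    # Batch-to-batch comparison: same summaries on an earlier delivery.
    prev = pd.read_csv("previous_sample.csv", parse_dates=["event_date"])
    print(df["amount"].describe())
    print(prev["amount"].describe())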
If it is "big data", then they can still easily provide a far bigger
sample than you will ever need for statistical power purposes.
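On how to request that sample, one hypothetical safeguard (not an
established mechanism) against the "decoration" the original post
worries about is to have the requester, not the company, fix the
random row indices:

    import random

    N_ROWS = 10_000_000_000  # assumed size of the full repository
    SAMPLE = 100_000         # ample for most power calculations

    # Requester publishes the seed; the company must return exactly
    # these rows, and a fresh seed can be issued for spot re-checks.
    rng = random.Random(20250101)
    requested_ids = sorted(rng.sample(range(N_ROWS), SAMPLE))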
"eyeball" examination of a relatively small contiguous data series
just to get a feel of what goes on.
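Again assuming the same hypothetical CSV, the point is simply to read
one contiguous run of records rather than a scattered sample:

    import pandas as pd

    # Read a contiguous block of 500 records from an arbitrary offset
    # (header row is kept; data rows 1..9999 are skipped).
    chunk = pd.read_csv("sample.csv", skiprows=range(1, 10_000), nrows=500)
    print(chunk.to_string())  # inspect by eye, column by column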
Just 2c, David Duffy.