Hi Nikki -- glad you are sorting it out. I had a look at the gl.filter.allna script and have improved its efficiency, so next time you should not have to wait an hour for it to run. This will become available in the next release.
I see that you initially took the approach of constructing a distance matrix and then passing this to gl.pcoa. In other words, you did a PCoA. The script I gave you did a PCA. When you use Euclidean distance for the PCoA and (implicitly) the covariance matrix for the PCA, the outcomes are essentially equivalent. The only difference will be in the % variance explained on the axes, for reasons explained in the preprint I recommended you read. So all good on that front.
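To see that equivalence concretely, here is a small numpy sketch (illustrative only, not dartR code): classical PCoA, i.e. Gower double-centring of the squared Euclidean distance matrix followed by an eigendecomposition, reproduces the covariance-based PCA scores axis by axis, up to an arbitrary sign flip on each axis.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(8, 5))          # toy data: 8 individuals x 5 loci

# --- PCA on the column-centred data (covariance-based) ---
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
pca_scores = U * S                   # individual scores on the PC axes

# --- PCoA on the squared Euclidean distance matrix ---
D2 = np.square(np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2))
n = D2.shape[0]
J = np.eye(n) - np.ones((n, n)) / n  # centring matrix
B = -0.5 * J @ D2 @ J                # Gower's double centring: B = Xc @ Xc.T
evals, evecs = np.linalg.eigh(B)
order = np.argsort(evals)[::-1]      # sort eigenvalues in descending order
evals, evecs = evals[order], evecs[:, order]
pcoa_scores = evecs[:, :5] * np.sqrt(np.maximum(evals[:5], 0.0))

# pca_scores and pcoa_scores now agree axis by axis, up to sign
```

Only the eigenvalues (and hence the % explained) are scaled differently between the two framings, which is the point about the axis labels above.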
The script I recommended had a couple of elements in it. The first was to filter fairly heavily on call rate to increase the data density. This reduces the level of imputation required to generate the fully populated data matrix that classical PCA requires. I recommend using nearest neighbour for the imputation (the default).
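For intuition, here is a toy numpy sketch of the nearest-neighbour idea (the function name impute_nn is mine, and dartR's gl.impute will differ in detail): each individual's missing genotypes are filled from the most similar individual, judged on the loci both have scored.

```python
import numpy as np

def impute_nn(G):
    """Toy nearest-neighbour imputation: fill each NA from the most
    similar individual, measured on the loci scored in both."""
    G = G.astype(float).copy()
    n = G.shape[0]
    for i in range(n):
        miss = np.isnan(G[i])
        if not miss.any():
            continue
        best, best_d = None, np.inf
        for j in range(n):
            if j == i:
                continue
            shared = ~np.isnan(G[i]) & ~np.isnan(G[j])
            if not shared.any():
                continue
            d = np.mean((G[i, shared] - G[j, shared]) ** 2)
            if d < best_d:
                best, best_d = j, d
        if best is None:
            continue
        fill = miss & ~np.isnan(G[best])
        G[i, fill] = G[best, fill]   # copy the neighbour's genotypes
    return G

# Individual 3 is missing locus 3; its nearest neighbour supplies the value
G = np.array([[0, 1, 2, 0],
              [0, 1, 2, 1],
              [2, 0, np.nan, 1]], dtype=float)
Gi = impute_nn(G)
```

The point of filtering on call rate first is simply that the fewer NAs survive to this step, the less the result depends on any imputation scheme.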
PCA requires the complete absence of missing values, hence the need for imputation of some sort. adegenet::glPca forms the kernel of gl.pcoa, but it imputes by substituting the global mean at each locus, and so can cause distortions: individuals with large numbers of missing values will be drawn toward the origin, which can be very misleading.
PCoA is more tolerant of missing values because the distances in the input distance matrix are calculated pairwise. But missing values are still problematic for a number of reasons (e.g. an NA is dropped from the distance calculation, so that locus contributes zero to the distance, deflating it), so it is best to impute for PCoA as well if you plan to go down that route.
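A toy sketch of that pairwise behaviour (the helper name is mine): dropping an NA locus from the sum of squares deflates the distance relative to the fully scored case, so individuals with many NAs look artificially close to everyone.

```python
import numpy as np

def euclid_pairwise_na(x, y):
    """Euclidean distance over loci scored in BOTH individuals;
    an NA at a locus simply drops that locus (contributes zero)."""
    ok = ~np.isnan(x) & ~np.isnan(y)
    return float(np.sqrt(np.sum((x[ok] - y[ok]) ** 2)))

a = np.array([0.0, 2.0, 2.0, 0.0])
b_na = np.array([2.0, 0.0, np.nan, 2.0])   # locus 3 unscored
b_ok = np.array([2.0, 0.0, 0.0, 2.0])      # same individual, fully scored

d_na = euclid_pairwise_na(a, b_na)   # the NA locus contributes zero
d_ok = euclid_pairwise_na(a, b_ok)   # all four loci contribute
```

Some implementations rescale by the number of shared loci to compensate, but imputing first sidesteps the issue entirely.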
Let us know if you have any further issues.
Arthur