--
You received this message because you are subscribed to the Google Groups "iNaturalist" group.
To unsubscribe from this group and stop receiving emails from it, send an email to inaturalist+unsubscribe@googlegroups.com.
To post to this group, send email to inatu...@googlegroups.com.
Visit this group at https://groups.google.com/group/inaturalist.
For more options, visit https://groups.google.com/d/optout.
Hi JoAnne and Paloma,

We've been doing a lot of analyses of the proportion of incorrectly ID'd Research Grade obs. From the experiments we've done, it's actually pretty low, around 2.5% for most groups we've looked at. You could argue that this is too high (i.e. we're being too liberal with the 'Research Grade' threshold) or too low (i.e. we're being too conservative), and we've had different asks to move the threshold one way or the other, so I imagine changing it would be kind of a zero-sum game.

One thing we have noticed from our experiments, though, is that our current Research Grade system (which is quite simplistic) could do a better job of sorting high-risk (i.e. potentially incorrectly ID'd) and low-risk (i.e. likely correctly ID'd) obs into Research Grade and Needs ID categories. As you can see from the figures on the left below, there's some overlap between high risk and Research Grade, and between low risk and Needs ID. We've been exploring more sophisticated systems that do a better job of discriminating these (figures on the right).

We (by which I mean Grant Van Horn, who was also heavily involved in our Computer Vision model) actually just presented one approach at this conference a few weeks ago: http://cvpr2018.thecvf.com/ It's kind of an 'earned reputation' approach where we simultaneously estimate the 'skill' of identifiers and the risk of observations. You can read the paper, 'Lean Multiclass Crowdsourcing', here:
http://openaccess.thecvf.com/content_cvpr_2018/papers/Van_Horn_Lean_Multiclass_Crowdsourcing_CVPR_2018_paper.pdf

There's still more work to be done, but it's appealing to us that a more sophisticated approach like this could improve the sorting of high-risk and low-risk obs into Needs ID and Research Grade categories, rather than just moving the threshold in a more or less conservative direction without really improving things.

Scott
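[Editor's note: to give a rough flavor of the "simultaneously estimate identifier skill and observation risk" idea, here is a toy sketch in the spirit of Dawid-Skene-style crowdsourcing models. It is not the algorithm from the Van Horn paper; the function names, the uniform skill prior, and the smoothing are my own simplifications.]

```python
# Toy alternating estimation of identifier skill and consensus labels.
# Each vote is weighted by the voter's current skill; skill is then
# re-estimated as each voter's agreement rate with the consensus.
from collections import defaultdict

def estimate(id_triples, n_iters=10):
    """id_triples: list of (identifier, observation, label) triples."""
    skill = defaultdict(lambda: 0.75)  # prior: identifiers mostly right
    consensus = {}
    for _ in range(n_iters):
        # Infer labels: skill-weighted vote per observation
        votes = defaultdict(lambda: defaultdict(float))
        for who, obs, label in id_triples:
            votes[obs][label] += skill[who]
        consensus = {obs: max(v, key=v.get) for obs, v in votes.items()}
        # Re-estimate skill: fraction of votes agreeing with consensus,
        # with Laplace smoothing so one vote doesn't pin skill at 0 or 1
        agree, total = defaultdict(float), defaultdict(float)
        for who, obs, label in id_triples:
            total[who] += 1
            if consensus[obs] == label:
                agree[who] += 1
        for who in total:
            skill[who] = (agree[who] + 1) / (total[who] + 2)
    return consensus, dict(skill)
```

An identifier who habitually disagrees with the weighted consensus ends up with low skill, so their future votes count for less; the "risk" of an observation can then be read off how decisively the weighted votes favor one label.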
On Wed, Jul 18, 2018 at 4:17 PM, JoAnne Russo <jo.a....@gmail.com> wrote:
Thanks! I was trying to search too, figuring this topic must have been broached before, but couldn't find anything.
On Wednesday, July 18, 2018 at 5:13:42 PM UTC-4, JoAnne Russo wrote:

I am wondering if there's any chance of making it necessary for two people to agree with an initial species-level ID to elevate it to Research Grade? I would argue that an extra set of eyes verifying an ID would give greater credibility to the "research grade" level of an ID. I find mistakes in IDs all the time.

JoAnne
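[Editor's note: as a concrete illustration of the proposal, here is a hypothetical simplification of a Research Grade rule, not iNaturalist's actual code. It approximates the commonly described "more than two-thirds agreement" community-ID rule; raising `min_agreeing` to 3 would implement the stricter "initial ID plus two confirmations" idea.]

```python
def is_research_grade(ids, min_agreeing=2):
    """ids: list of species-level ID labels on one observation.

    Research Grade here means: the most popular ID has at least
    `min_agreeing` supporters AND strictly more than 2/3 of all IDs.
    """
    if not ids:
        return False
    top = max(set(ids), key=ids.count)   # most popular ID
    agreeing = ids.count(top)
    return agreeing >= min_agreeing and agreeing / len(ids) > 2 / 3
```

Under this sketch, two matching IDs qualify today (`["X", "X"]` passes), whereas with `min_agreeing=3` the same observation would stay in Needs ID until a third identifier confirmed.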
I like the "earned reputation" approach! I tend not to trust an ID from someone I don't know or who has only made a few IDs, compared to someone who has ID'd that species many times.
Not that I understand all the nuances of "crowdsourced multiclass annotations", but I glanced at the referenced PDF article and was impressed with the outcome of the new methodology. We are all prone to making ID errors on occasion (!!!), but one of my frustrations with the ID system is when one or more friends (often new users and/or young students?) jump to agreement on IDs that turn out to be wrong. Recently, this has happened more and more when the iNat identitron (AI) suggestions are followed blindly. Implementing a user-history-based success calculation would substantially assuage my concerns.