Technical question on Computer Vision Suggestion Tool


Chris Cheatle

unread,
Oct 23, 2018, 2:04:20 PM10/23/18
to iNaturalist
If you post an observation with multiple attached photos, and run the identification suggestion tool against it, which photo is actually used in the suggestion?
  • the photo in the first position?
  • does it run on all attached photos and then somehow aggregate the results?
  • the photo you currently have active on the page (for example, if I have photo #3 open and showing on the record, does it run on that)?

Scott Loarie

unread,
Oct 23, 2018, 2:05:52 PM10/23/18
to inatu...@googlegroups.com
Hi Chris,

At the moment, it runs on the first photo.

We've discussed making that more inclusive/versatile down the road.
> --
> You received this message because you are subscribed to the Google Groups "iNaturalist" group.
> To unsubscribe from this group and stop receiving emails from it, send an email to inaturalist...@googlegroups.com.
> To post to this group, send email to inatu...@googlegroups.com.
> Visit this group at https://groups.google.com/group/inaturalist.
> For more options, visit https://groups.google.com/d/optout.



--
--------------------------------------------------
Scott R. Loarie, Ph.D.
Co-director, iNaturalist.org
California Academy of Sciences
55 Music Concourse Dr
San Francisco, CA 94118
--------------------------------------------------

Ralph Begley

unread,
Oct 23, 2018, 4:35:46 PM10/23/18
to iNaturalist
You can switch which photo is first to run multiple photos through the recognition model one at a time.
The first photo is always the one used for matching.
Confidence in the ID increases when the resulting suggestion sets intersect.

Charlie Hohn

unread,
Oct 23, 2018, 8:32:52 PM10/23/18
to inatu...@googlegroups.com
So you are saying that if you switch which photo is first and run them all, it will weight something higher if it comes up in both?



--
============================
Charlie Hohn
Montpelier, Vermont

Ralph Begley

unread,
Oct 23, 2018, 9:27:58 PM10/23/18
to iNaturalist
Chris,
Each analysis produces a "suggestion set". Taking the "intersection" of the sets would increase my confidence that the intersection may be the correct ID, assuming the model has been trained to recognize, in my case, a mushroom I've collected.
Bear in mind that the opposite may be true as well. Knowing the difference requires my intervention, i.e. recognizing that it's not the correct ID because the model has not been sufficiently trained.
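Ralph's idea amounts to ordinary set intersection over the per-photo results. A minimal sketch (the taxon names, set contents, and the three-photo scenario are invented purely for illustration):

```python
# Hypothetical suggestion sets returned after running the model once per
# photo of the same observation (taxon names are illustrative only).
photo1_suggestions = {"Armillaria mellea", "Armillaria tabescens", "Pholiota squarrosa"}
photo2_suggestions = {"Armillaria mellea", "Pholiota squarrosa", "Hypholoma fasciculare"}
photo3_suggestions = {"Armillaria mellea", "Armillaria ostoyae"}

# Taxa that every per-photo run agrees on; agreement across angles
# raises (but does not guarantee) confidence in the ID.
consensus = photo1_suggestions & photo2_suggestions & photo3_suggestions
print(consensus)  # {'Armillaria mellea'}
```

As Ralph notes, a non-empty intersection is only suggestive: all photos could agree on a wrong taxon if the model is undertrained for that group.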

Mark Tutty

unread,
Oct 23, 2018, 9:37:26 PM10/23/18
to inatu...@googlegroups.com

This raises a point for me...

I have often taken a photograph of “the thing”, framed nicely and hopefully with enough diagnostic detail for an ID (photo 1). I might then provide additional photos with much better diagnostic angles (photos 2 through n-1). The last photo I put up will often be a “context” photo, showing where the thing sits in the environment (photo n). Photo 1 always, diagnostics sometimes, context occasionally.

I often see other observations where there is a context shot, and in the description one of the plants, for example, gets singled out as the subject. Often it is not centre of frame either. Would it be fair to suggest that if it is the only photo, it could well be “untraining the AI”? If so, is there a way we can flag such photos/observations as “don’t use for AI training”?

cheers
Mark Tutty
kiwif...@gmail.com


Charlie Hohn

unread,
Oct 24, 2018, 9:22:31 AM10/24/18
to inatu...@googlegroups.com
That sort of thing has been discussed, but doesn't exist yet. Certainly there are sometimes photos that may not be as useful to the AI, but on the other hand, if that is the type of photo people usually submit of that species, it's good to have the AI train on those too.

I think the best of both worlds would be a way to 'tag' where the species is in the photo, the same way you can tag someone's face on Facebook. But that might be hard to program.

Ralph Begley

unread,
Oct 24, 2018, 10:32:59 AM10/24/18
to iNaturalist
Charlie,
That's actually a good suggestion.
The existing model seems to be able to determine the object of interest in most cases. There are instances where it cannot, e.g. a photo of a human holding a mushroom.
Although I have not experimented with "cropping" images to isolate the object of interest, it's certainly worth pursuing.

Charlie Hohn

unread,
Oct 24, 2018, 10:39:18 AM10/24/18
to inatu...@googlegroups.com
Another option would be in-app and in-website cropping functionality, though if that is created it would be nice to also retain the uncropped photo for habitat info.


Ralph Begley

unread,
Oct 24, 2018, 10:46:39 AM10/24/18
to iNaturalist
Charlie,
Here's a link to a description of the model algorithm.
https://www.inaturalist.org/pages/computer_vision_demo
I have my doubts about its use of species distribution as a factor to determine likelihood.
Ralph

Charlie Hohn

unread,
Oct 24, 2018, 10:56:58 AM10/24/18
to inatu...@googlegroups.com
Really? I'm quite curious why. It's probably the single most important species ID tool in existence. In Vermont, for example, I can more or less cut my vascular plant options from 300,000 to 3,000 (probably significantly fewer). As an ecologist, the idea of trying to identify species without considering location, or for that matter habitat, makes no sense. If I had to key out every plant with a global key, we'd literally never identify anything.

I'm not talking about super high-res range maps or anything. I'm saying I don't think the algorithm should consider all the massive biodiversity of tropical plants, not to mention every other organism, when trying to identify a pine tree in Maine.

Right now the algorithm is classifying a bunch of California plants on the east coast and vice versa; I always check them and they are never right. It seems to be by far the biggest error factor for the algorithm.

The idea of an algorithm that can literally identify any organism with no location data is kind of cool, I guess, but the chance of it working seems minuscule.


Ralph Begley

unread,
Oct 24, 2018, 11:01:42 AM10/24/18
to iNaturalist
Charlie,
Unfortunately, I see no method for posting photos in this Google group, but here's an example where distribution does not seem to be "at play".
My photo of Armillaria mellea, taken in Oregon, is correctly identified. But A. tabescens is the 2nd candidate in the suggestion set, and A. tabescens only occurs east of the Rockies, making it a very unlikely candidate.
For a novice, this would be very confusing. Most "novices" do not bother to check distribution.
Showing a confidence score would be helpful. Being able to circumscribe the desired distribution would also be helpful.
Ralph

Charlie Hohn

unread,
Oct 24, 2018, 11:07:06 AM10/24/18
to inatu...@googlegroups.com
Sorry, I misunderstood. I thought you were saying it *shouldn't* consider it, not that it didn't. No, it currently does not.


Ralph Begley

unread,
Oct 24, 2018, 11:21:41 AM10/24/18
to iNaturalist
Charlie,
Here's a fun activity:
Go to Hunger Mountain Coop.
Take pictures of fruits and veggies in the produce section.
See how accurate the recognition is.
It stumbles with Pomegranate at Trader Joe's.
Ralph

Ralph Begley

unread,
Oct 24, 2018, 11:59:13 AM10/24/18
to iNaturalist
Image cropping and confidence:
As an experiment I used a photo from Facebook, correctly identified as Suillus lakei.
An uncropped image run through the model produces a suggestive but inaccurate suggestion set.
A cropped image, circumscribing the most "diagnostic" area of the photo, produces a suggestion list with Suillus sp. as a very likely candidate.

Charlie Hohn

unread,
Oct 24, 2018, 12:28:18 PM10/24/18
to inatu...@googlegroups.com
In terms of fruits and veggies, the algorithm is currently only fed 'research grade' observations, which exclude cultivated organisms such as crops. So it won't do well with those sorts of things.


Chris Cheatle

unread,
Oct 24, 2018, 12:28:54 PM10/24/18
to iNaturalist
A couple of comments / notes about the discussion:

- regarding the note 'I have my doubts about its use of species distribution as a factor to determine likelihood': if the documentation says it does it, then barring a code review, I don't think there is reason to doubt the distribution is used. The question is what percentage of the score comes from this element. It clearly does at least look at it, as recommendations are tagged with 'seen nearby'.

- there are multiple challenges in using the distribution. It will bias the results towards areas with heavy reporting. I don't have to go too far in Ontario before I get to areas that have little to no iNat coverage in terms of reports. The obvious answer is to use regional atlases or checklists as the basis for determining regional presence, but that is a massive project in terms of updating and maintaining regional checklists. Using a national checklist is pointless in large nations; what is seen in BC, 4,000 kilometers away from me, can be very different from what I see locally.

- additionally, at some point you do have to deal with out-of-range species, or even captive ones. If someone goes to the zoo in Toronto and photographs a zebra, should the suggestions totally exclude zebra because it's not found natively in North America? If the visual match is very strong, but current data suggests it is not there geographically, what do you do? A better example than the zebra: I recently observed a Great Kiskadee in Ontario. It was the first-ever record for Canada. I have little doubt the tool would have said it was a Kiskadee, but if someone had needed to use it, should that have been excluded from the suggestions?

- to me a better answer is to more clearly indicate to the user that something is not currently known to be in that location, as opposed to simply excluding it.

- showing the score assigned to a suggestion can be very helpful if there is a strong gap between the 1st and subsequent suggestions. You do have to deal with how it will be interpreted in cases where the scores are very similar. How will a user interpret a 0.85 score on Swainson's Thrush and a 0.83 score on Grey-cheeked Thrush? That may actually be more confusing than helpful.
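The score-gap concern can be made concrete with a small helper. This is purely illustrative: the threshold value, the helper's name, and the example scores are invented, and nothing like this is known to exist in iNaturalist itself.

```python
def label_confidence(scores, gap_threshold=0.10):
    """Given suggestion scores in descending order, flag when the top two
    are too close to call. The 0.10 threshold is an arbitrary choice."""
    if len(scores) < 2 or scores[0] - scores[1] >= gap_threshold:
        return "clear leader"
    return "too close to call"

# The thrush example from the discussion: 0.85 vs 0.83 is ambiguous.
print(label_confidence([0.85, 0.83]))  # too close to call
print(label_confidence([0.85, 0.40]))  # clear leader
```

Whether an interface should surface "too close to call" outright, or show the raw scores and let the user decide, is exactly the question being debated here.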

Charlie Hohn

unread,
Oct 24, 2018, 12:36:16 PM10/24/18
to inatu...@googlegroups.com
I guess I don't really see the point in having the app identify zoo animals for people, especially considering most zoos would hopefully already tell you what the animal is. And we don't want to encourage people to add zoo animals anyway. In terms of cultivated plants there is an argument for that; we'd have to somehow include those. But I don't see the need for tropical plants or California chaparral plants to be suggested in New England. Maybe a super-coarse geography filter.

The main issue here is people putting in species that are wayyyy out of range and habitat and could never survive, because the algorithm suggests them. I don't feel like people will avoid clicking on a species just because it's out of range; it already says if something was observed nearby, and that hasn't stopped people.

In the case of the two birds with nearly the same score, I'd want to know about the similar score MORE than in the other case. Why would it be confusing to see that the algorithm is uncertain between two species? If I saw the photo and was uncertain between those two, I'd certainly specify it.

I personally think the more of the inner workings of the algorithm we can see, the better.


Chris Cheatle

unread,
Oct 24, 2018, 1:30:56 PM10/24/18
to iNaturalist
The point I was hoping to make with the examples was better illustrated by the Kiskadee than the zebra: basically, there needs to be some balance between visual matching and location probability. What that balance is, I'm not sure. Maybe it is better solved by showing the vision match score and then, beside it, a geographic match.

There still needs to be an effective way to know what is expected in a location. I see two options for this:
- rely on checklists. This has the benefit of including species reported on iNat and species not reported so far. But the challenge is what checklist to use. I used to live in Denmark; using a national checklist to validate sightings there is fine. I now live in Canada, which has national parks larger than Denmark; using the Canadian national checklist is likely to produce results not seen within thousands of kilometers.
- so then you use sub-national checklists, but that simply reverses the problem. 13 checklists for Canada, or 51 for the US, seems fine. Slovenia has 212 1st-level divisions; maintaining that seems impossible. Maybe each nation would have to be flagged to use national or sub-national lists as part of the data flow for the suggestions.
- the other option is to use records already submitted to iNat to determine if a species is seen nearby, which of course is what it tries to do now. Unfortunately, it simply does not work properly right now. Here's a record of mine, coincidentally of one of the species I mentioned above: https://www.inaturalist.org/observations/5154909
- I don't have an issue with the fact that the correct answer is #4 in the list if you run the tool against it; many thrushes are very similar looking, and I suspect the visual match scores on at least the first 5 options are very close. The issue is the tool does not seem to think the correct answer is seen nearby, when there are multiple research grade records within 10 kilometers (as seen in the map) and in the same calendar month. If it is ranked #4 in the options because the tool assumes the species is not seen in the area, then that is a problem.
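The expectation being described (research-grade records within roughly 10 km, in the same calendar month, should count as "seen nearby") can be sketched naively. This is not iNaturalist's actual algorithm; the function names, thresholds, and record data below are all illustrative assumptions:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in kilometers."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def seen_nearby(obs, records, max_km=10, month=None):
    """Naive check: is there any research-grade record within max_km
    (optionally restricted to the same calendar month)?"""
    return any(
        haversine_km(obs[0], obs[1], r["lat"], r["lon"]) <= max_km
        and (month is None or r["month"] == month)
        for r in records
    )

# Illustrative version of the thrush case: a May observation with a
# research-grade May record a few kilometers away should qualify.
obs = (43.6511, -79.3215)
records = [{"lat": 43.63, "lon": -79.33, "month": 5}]
print(seen_nearby(obs, records, month=5))  # True
```

The point of the sketch is that a check this simple would return True for the records Chris lists, which is why the tool's "not seen nearby" result looks like a bug rather than a data gap.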


Mark Tutty

unread,
Oct 24, 2018, 4:57:27 PM10/24/18
to inatu...@googlegroups.com

In reading the background to the implementation of the AI, it strikes me that one of the big goals was to ease the burden on the identifiers as the volume increases. I have seen it over the last 3 years. I was commenting to someone the other day about how, when I joined 3 years ago, there would be a dozen or so observations a day and up to a hundred on weekends. I came back from a weekday trip where there was no internet, and after two days I had 270+ observations to review. This is in NZ, where we are on a regionalised portal!

If the point is to reduce the burden on the identifiers, then getting an observation to the right identifier as quickly as possible is key, while shielding them from the observations that aren't their area. I became involved as an identifier not because I am an expert in any area, but because I felt I could help by handling much of the everyday easy stuff, saving the time of the experts to focus on the difficult ones.

I see the AI doing that same role, just on steroids. It ain't gonna get 'em all right, any more than I ever was! But what it will do is get the majority to the right identifiers quickly, and those identifiers will usually know who to tag if it's not their area.

Here in NZ, we get so many spiders being suggested as northern hemisphere ones, but that's OK, because as a subscriber to Araneae, I know they aren't and can flick 'em back up the tree to Araneae. If the observer is not back on to change their ID, I can tag others like myself to bring in the weight to change it.

In an ideal system, I can see a single specialist identifier sitting on a subscription to each taxon, i.e. one per taxon. Some might warrant multiple taxa per identifier, but the point is that if we each choose a taxon and get really good at determining it, then the workload is shared and collectively we cover everything! Those starting out can take up higher-level taxa, and those wanting a challenge can hit the really difficult stuff. I think it would be useful to see on the taxon page how many users are "subscribed" to it, so that when you are looking for an area to work on, you can gauge where the help would be most useful. Some of course would choose purely on their own expertise or interest. In this scenario, the AI is only jump-starting the process by placing the observation at the taxon it deems most likely. I often go back and review a given taxon that I am learning, and that process acts as a quality assurance step that often picks up errors or missed characters etc.

cheers
Mark Tutty
kiwif...@gmail.com


Tony Iwane

unread,
Oct 24, 2018, 5:07:14 PM10/24/18
to iNaturalist
Just chiming in here on a few things:

- When you submit an observation to computer vision, it takes the first photo, runs it through the model, and spits out visually similar results. If something is "Seen nearby", that will affect the ranking of the results. Improvements to ranking and displaying results are definitely something we are looking into. That, along with a way to train the model on not just species but higher-level taxa, will be really helpful, I think. But that will take time; it's not easy.

- It's important to remember that the model is trained not to recognize organisms, per se, but to recognize iNaturalist photos of organisms. So for, say, Amanita muscaria, we take every photo from RG observations of that species and feed it to the model, telling it these are all photos of A. muscaria, and it basically uses them to "learn" what an iNat photo of A. muscaria looks like. So some of the "context" photos that Mark mentioned are also fine (as long as the organism is in the shot; photos of just habitat should not be uploaded to iNat) because they represent how an iNat user might photograph that organism. Ralph said "The existing model seems to be able to determine the object of interest in most cases," but that wording is not totally accurate, because the model is not determining the organism of interest; it is just saying "this photograph looks like other photographs of what I have been told is this taxon." Which is why, for example, a photo of a bee on a flower will often return both insect and plant results.

- Regarding cropping: because in most iNat photos the organism of interest fills the frame, computer vision does tend to do better if your photo is cropped.

- As Charlie said, because the model is trained only on RG observations, it will not work well with produce, common garden plants which are not normally seen in the wild, captive animals, many pinned specimens, etc. Right now it's best at in situ photos of organisms which are commonly observed on iNat. The best way to improve it is to a) keep adding photos of organisms and b) identify observations on iNaturalist, so that we have a greater number of species and photos on which to train the model.

- To emphasize that last point, I just wanted everyone to remember that the model is what it is because of all of our amazing users contributing their time and effort at adding both observations and, crucially, IDs. As cool as computer vision is, it's the hard work by our community that fuels it.
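The ranking described above (visually similar results, with "Seen nearby" affecting the order) can be illustrated with a toy re-ranker. The boost factor, scores, and function below are invented for illustration; the real model's weighting is not public:

```python
# Toy re-ranking: combine a visual-similarity score with a "seen nearby"
# flag. The 1.5x boost is an invented value, not iNaturalist's.
def rerank(suggestions, nearby_boost=1.5):
    """suggestions: list of (taxon, visual_score, seen_nearby) tuples,
    returned best-first after applying the nearby boost."""
    return sorted(
        suggestions,
        key=lambda s: s[1] * (nearby_boost if s[2] else 1.0),
        reverse=True,
    )

candidates = [
    ("Armillaria tabescens", 0.80, False),  # stronger visual match, not local
    ("Armillaria mellea",    0.70, True),   # weaker match, but seen nearby
]
print([taxon for taxon, _, _ in rerank(candidates)])
# ['Armillaria mellea', 'Armillaria tabescens']
```

With any multiplicative boost like this, a locally recorded taxon can overtake a slightly stronger pure-vision match, which is the behaviour the "Seen nearby" tag hints at.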

Tony Iwane

Chris Cheatle

unread,
Oct 24, 2018, 5:35:37 PM10/24/18
to iNaturalist
Tony,

Can I ask you guys to look at the observation noted in my last reply? This inability to properly know what is reported nearby seems to be a repeated issue. I think this is the 3rd such example (might be the 2nd; I can't remember if I posted one) I have posted to the group, so it is apparently not an isolated issue. Unless, of course, there is something in the technology I don't understand.

Thx.


Tony Iwane

unread,
Oct 24, 2018, 5:45:52 PM10/24/18
to inatu...@googlegroups.com
Hey Chris,

So what you're saying is that you submitted that photo to computer vision and it didn't list Gray-cheeked Thrush as being seen nearby?

Tony Iwane


Ralph Begley

unread,
Oct 24, 2018, 7:09:35 PM10/24/18
to iNaturalist
Tony,
Thanks for your clarification.
As to cropping affecting the outcome of the analysis, was my previous example an outlier?
An uncropped image of Suillus lakei produces a suggestion set with Suillus sp. as an unlikely candidate.
A cropped image using the most "diagnostic" region produces a suggestion set with Suillus sp. as a likely candidate.

Thanks for the team's efforts.
Ralph

Chris Cheatle

unread,
Oct 24, 2018, 7:24:36 PM10/24/18
to iNaturalist
Tony, 

Yes, that is correct; here is what I get. The concern is whether Grey-cheeked Thrush has been lowered in the ranking because the tool thinks it is not seen nearby.

My observation is in May.

This is a research grade observation in May about 4 kilometers away : https://www.inaturalist.org/observations/1784791
Here is another also in May, about 5 kilometers away : https://www.inaturalist.org/observations/17514025
And another about 15 km away, also in May : https://www.inaturalist.org/observations/6385555

It's not isolated, I can send other examples.

Untitled.jpg

Tony Iwane

unread,
Oct 24, 2018, 9:02:25 PM10/24/18
to inatu...@googlegroups.com
Hi Chris,

Hmm, here is what I get when I try to upload photo and the exact date and location info from https://www.inaturalist.org/observations/5154909

image.png

Is the screenshot you posted current? Does the photo have location info embedded in it? If so, can you please send it here or to he...@inaturalist.org? Not sure if that'll make a difference, but it would be good to test the original photo.

Ralph, can you send those photos either here or to he...@inaturalist.org? Tough to say without seeing the photos.

Tony Iwane

Chris Cheatle

unread,
Oct 24, 2018, 9:30:48 PM10/24/18
to iNaturalist
Hi Tony,

Yes, the screenshot is current; I captured it while writing the message. The photo does have location data embedded in it. The pin I dropped for the location was manual, and it turns out to be about 1 km south of the actual location shown by the GPS embedded in the picture (the photo was apparently taken at the north end of that peninsula). Does 'seen nearby' use the coordinates in the observation, or in the photo? Ironically, the embedded GPS is actually closer to the other records.

I just tried again on the record, and got the same behaviour and results.

To put in perspective, from base to tip, that peninsula sticking out into Lake Ontario is 5km long, so it's approximately 2 to 3 km to the closest other observation.

The embedded gps is 43.6511, -79.3215

Like you, I tried creating a fresh observation with the photo and exact details, and in the upload process it showed Grey-cheeked as nearby, as it did after saving. So why does the exact same photo, date, and location on a new observation say Grey-cheeked is seen nearby, while the observation in the link says it is not? What happens if you try to run the suggestions on the original observation record?

I likely don't have the original photo; it's obviously not keeper quality, but I will check.

Chris Cheatle

unread,
Oct 24, 2018, 9:45:38 PM10/24/18
to iNaturalist
And to make matters even more confusing, here is a Swainson's Thrush record from the exact same day and with the exact same GPS, and if I run the suggestion tool on it, Grey-cheeked is listed as seen nearby.

So I have two different records of thrushes from the exact same date and exact same location; one says Grey-cheeked is seen nearby, the other says it is not.

Tony Iwane

unread,
Oct 25, 2018, 12:17:55 AM10/25/18
to inatu...@googlegroups.com
Hi Chris, 

I can replicate the issue on the observation pages themselves (I was trying via the uploader previously, and it worked OK there - weird), so I'll pass this on and see what might be causing it.

Tony

Chris Cheatle

unread,
Oct 25, 2018, 9:47:33 AM10/25/18
to iNaturalist
Tony,

A couple of other notes; I don't know if they will help:
  • the Grey-cheeked record itself was created either with the duplicate tool (I think) to copy another record and then change the details, or manually; the Swainson's was created using the bulk upload tool from my eBird history. As noted, the dates and GPS are identical, but the records were physically created in different ways, although I can't see that being a factor
  • as mentioned, it is not isolated; I can send other examples
  • it is also not unique to me or something I am doing; if you can find it, a couple of months ago I posted in the group about an Aeshna species dragonfly in Belgium that had the same problem.
I personally don't need the ID tool to recognize birds, but it is confusing why the exact same GPS and date produce different results in different places. I do use it for some plants, although in that case I try to ID the observation myself and then check the tool: if it matches my ID, I select it, making it look like I used the tool; if it does not, I enter a higher-level ID rather than picking the suggestion.

pfr...@gmail.com

unread,
Jan 28, 2019, 3:04:28 PM1/28/19
to iNaturalist
I see that this thread is cold, but as the new badge for CV suggestions recently became available, I want to ask whether it would be possible to assign an automatic ID in order to eliminate all of the "Unknown" cases.
Maybe setting a special initial badge or something like that.
I am new to the site, but I see that there are a lot of "Unknown" or "state of life" cases that are easily identifiable, and this might help attract attention to them.

Regards,
Pablo


Charlie Hohn

unread,
Jan 28, 2019, 3:25:20 PM1/28/19
to iNaturalist
Given the lack of a location filter for the algorithm, I wouldn't want to do this at species level unless there were a location filter; too many things way out of range show up in the algorithm's suggestions. I wouldn't be opposed to somehow automatically running it on everything and displaying the potential results on the page, but I don't think a full-on ID would be a great idea, at least not at species level. Most of the 'something' observations that remain are from newbies who take blurry pics and then don't stick around, so they may not be identifiable anyway.


Chris Cheatle

unread,
Jan 28, 2019, 3:30:52 PM1/28/19
to iNaturalist
You can always find a list of observations with no ID attached via the Identify page and the filters ( https://www.inaturalist.org/observations/identify?iconic_taxa=unknown&place_id=any ) and then add a high-level ID to the best of your ability, even if it is just birds, spiders, plants or whatever, which gives it a better chance of coming to the attention of folks with the knowledge to try and ID it.
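For anyone scripting this, a similar filter can, I believe, be expressed against the public iNaturalist API (api.inaturalist.org/v1). The parameter values below mirror the Identify link above; `quality_grade=needs_id` is my assumption about the Identify page's default and should be checked against the API documentation:

```python
from urllib.parse import urlencode

# Build a query for unidentified observations against the public iNat API.
# Parameter names/values mirror the website's Identify filter link and are
# assumptions worth verifying against the API docs.
base = "https://api.inaturalist.org/v1/observations"
params = {"iconic_taxa": "unknown", "quality_grade": "needs_id", "per_page": 30}
url = f"{base}?{urlencode(params)}"
print(url)
```

Fetching that URL (e.g. with any HTTP client) returns JSON pages of observations that a script or bot could then triage to a high-level taxon, which is essentially what Pablo is proposing.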

pfr...@gmail.com

unread,
Jan 28, 2019, 4:40:48 PM1/28/19
to iNaturalist
Yes, that was the way I was working.
But lately I have seen an increased number of "Unknown" cases coming from the apps, and I thought this might help, at least tagging by kingdom ;)
Thanks!

pfr...@gmail.com

unread,
Jan 30, 2019, 3:43:04 PM1/30/19
to iNaturalist
It looks like the CV suggestions are not working when using this filter.
I've been working with it all day and not once has it brought up a suggestion.



Tony Iwane

unread,
Jan 30, 2019, 4:22:57 PM1/30/19
to iNaturalist
Can you please share a screenshot of what you're seeing? Are you using the Suggestions tab in Identify, and is it set to "Visually Similar" for source?

Tony Iwane

pfr...@gmail.com

unread,
Jan 30, 2019, 4:45:17 PM1/30/19
to iNaturalist
When I select an observation, this window pops up (picture 1).
Then I click the "Add ID" button, and when I place my cursor there, no CV suggestions appear (as they do when I upload an observation or review others).
Selecting the Suggestions tab brings up many suggestions, but as picture 2 shows, they have nothing to do with the real specimen.
Hope it helps.

Regards,
Pablo
pic_1.png
pic_2.png

Chris Cheatle

unread,
Jan 30, 2019, 5:00:32 PM1/30/19
to iNaturalist
Just to clarify, this happens on all records on the Identify page; it does not matter if it is an 'Unknown'. No CV suggestions are being offered.

pfr...@gmail.com

unread,
Jan 30, 2019, 5:07:52 PM1/30/19
to iNaturalist
OK. But is it a feature or a "bug"?

Tony Iwane

unread,
Jan 30, 2019, 5:11:47 PM1/30/19
to inatu...@googlegroups.com
Hi Pablo,

As far as I know, CV suggestions have never been available in the "Add an Identification" part of the Identify page (e.g. pic_1 that you shared), so it's not a bug. You will need to go to the Suggestions tab and select "Visually similar" under Source:

image.png

Then you will be shown CV suggestions, restricted by taxon if you so desire.

Tony Iwane


pfr...@gmail.com

unread,
Jan 30, 2019, 6:13:41 PM1/30/19
to iNaturalist
Thanks for the clarification.

It was my mistake, or bias: as an IT guy, I expected it to work the same way as the observation page's "Suggest an identification" tab.

Regards,
Pablo