--
***
Please note: the NVDA project has a Citizen and Contributor Code of Conduct.
NV Access expects that all community members will read and abide by the rules set out in this document while participating in this group.
https://github.com/nvaccess/nvda/blob/master/CODE_OF_CONDUCT.md
You can contact the group owners and moderators via nvda-user...@nvaccess.org.
---
You received this message because you are subscribed to the Google Groups "NVDA Screen Reader Discussion" group.
To unsubscribe from this group and stop receiving emails from it, send an email to nvda-users+...@nvaccess.org.
To view this discussion visit https://groups.google.com/a/nvaccess.org/d/msgid/nvda-users/CAKsDpFgDt0%2BSQgyxZJR2uHVMBsWNkwd1GLN3ws5ZL2v_jKhNUw%40mail.gmail.com.
Oops, my bad.
My NVDA doesn't read special characters like the comma; I have that turned off.
I've forgotten the English word for it.
--
Hi Quentin,
Thanks for sharing this exciting update about the new NVDA alpha build. I'm thrilled to hear about the on-device image description feature, which is going to be a game-changer for users.
I do have one question regarding the shortcut key, NVDA+Windows+comma, to activate On-Device Image Description. For someone using the desktop keyboard layout, this shortcut might be a bit of a stretch (literally!). I tried it, and my fingers felt like Spiderman trying to web-sling across the keyboard!
From: Quentin Christensen <que...@nvaccess.org>
Sent: Friday, October 3, 2025 12:59 PM
To: NVDA Screen Reader Discussion <nvda-...@nvaccess.org>
Subject: [NVDA] NVDA 64-bit with on-device image description alpha build now available
Hi everyone,
--
Hello
I have just installed the Alpha and with anxious anticipation gave the new image description a test run.
So far I am unimpressed in the extreme. This does not look anywhere near worth the effort. Here is one example, but I could give more: I asked it to describe an image I received via WhatsApp. Its description read: "A man looking at his cellphone". I showed Be My AI the same image and here is its description:
A screenshot of a Facebook feed on a mobile device. At the top of the screen, the time is 11:15, and the battery is at 57%. There are several navigation icons for Facebook at the top, indicating you are on the home feed.

The main visible post is from the group "Alberton What's Happening" by Nicole Moneron. The post is a warning reading: "Please be careful driving on the N12 towards voortrekker Bridge there's guys throwing rocks over the bridge and then a few guys on each side of the road ready to run." The post has received 4 angry reactions, 6 comments, and options to Like, Comment, and Share.

Below the post, Jonnno Moyle has commented: "Hate the human race." There is a comment box with a profile picture of a child, ready for a new comment.

Below this, another post from "Dream Mansions" starts with the headline: "VIDEO SH0CK: The newly leaked 911 audio from the Charlie Kirk incident will leave you speechless. This is far more shocking than we were told..."

Above Nicole Moneron's post, part of a previous post is visible, showing images of outdoor scenes including a yard and a pool area, with a "+12" indicating more photos. This post has 22 likes, 1 comment, and 7 shares.
[End of description]
I realise that the image description feature is offline and based on a small language model, but it hardly seems usable. I will stick to the AI add-ons.
Kind regards
Christo
Hi,
Regarding add-ons and 64-bit NVDA (a separate topic): any add-on with 32-bit-only dependencies will not work on 64-bit NVDA (even after forcefully enabling it), including several speech synthesizers and global plugins.
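As a rough illustration of why bitness matters: a 64-bit Windows process cannot load a 32-bit DLL, and you can check a bundled dependency's architecture from its PE header. A minimal stdlib-only sketch (`pe_machine` is a hypothetical helper for auditing an add-on's DLLs, not part of NVDA):

```python
import struct

def pe_machine(path):
    """Return the architecture of a Windows DLL/EXE from its PE header."""
    with open(path, "rb") as f:
        data = f.read(4096)
    # DOS header field e_lfanew (offset 0x3C) points to the "PE\0\0" signature.
    pe_offset = struct.unpack_from("<I", data, 0x3C)[0]
    if data[pe_offset:pe_offset + 4] != b"PE\x00\x00":
        raise ValueError("not a PE file")
    # The Machine field follows immediately after the signature.
    machine, = struct.unpack_from("<H", data, pe_offset + 4)
    return {0x014C: "x86 (32-bit)", 0x8664: "x64", 0xAA64: "ARM64"}.get(
        machine, hex(machine))
```

Any dependency reporting "x86 (32-bit)" would be the kind that fails to load under 64-bit NVDA.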
Cheers,
Joseph
--
When I went to the link provided for the alpha build, all I found were samples of snapshots.
I probably shouldn't be messing with alpha builds anyway, but I found nothing helpful at that link.
So, I will wait for a late beta or even the release…
Richard, USA
“Reality is the leading cause of stress for those who are in touch with it.”
– Jane Wagner, from The Search for Signs of Intelligent Life in the Universe.
My web site: https://www.turner42.com
I use basiliskLLM for my image descriptions. I craft my prompt and have it describe all of my images so I can do what I need to do later. So while this feature will be useful, there are other, better apps that allow the use of more customizable LLMs.
Hello Quentin
I've tested the image descriptions, and I'm afraid I can't offer any positive feedback. Maybe it's just me, but so far, I haven't gotten a single result I'd call even remotely accurate. It almost seems as though the AI is coming up with descriptions of random objects that have nothing to do with the actual image.

For example, here's what it told me about my desktop screen: "a colorful advertisement for a korean fashion show".

This is just one example; it has failed consistently. When I requested a description of a webpage, it told me it was a picture of a blue water bottle.

I'm hoping this will be improved before the official release, as right now this just isn't up to the mark.
Thanks
--
Hi,
This might be due to the model in use. It looks like the model is the Xenova GPT-2 model, as NVDA tries to download it from Hugging Face (huggingface.co, the model hosting hub) the first time the AI image captioner is enabled.
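If Joseph's guess about a first-run download is right, the model would land in the local Hugging Face cache and later runs would read it from disk. A rough stdlib-only sketch for inspecting that cache (assuming the standard Hugging Face `HF_HOME` layout, where each downloaded repo becomes a `models--org--name` folder; the repo name shown is only an example):

```python
import os
from pathlib import Path

def hf_cache_dir():
    """Default Hugging Face hub cache location (standard HF_HOME layout)."""
    return Path(os.environ.get("HF_HOME",
                               Path.home() / ".cache" / "huggingface")) / "hub"

def cached_models(cache=None):
    """List model repos already downloaded, e.g. 'models--Xenova--...' folders."""
    cache = Path(cache) if cache else hf_cache_dir()
    if not cache.is_dir():
        return []
    return sorted(p.name for p in cache.iterdir()
                  if p.name.startswith("models--"))
```

Checking whether such a folder exists would confirm whether the captioner's model actually finished downloading on first use.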
Cheers,
Joseph