NVDA 64-bit with on-device image description alpha build now available


Quentin Christensen

Oct 3, 2025, 3:29:38 AM
to NVDA Screen Reader Discussion
Hi everyone,

We're bringing on-device image description to NVDA! More on that further down.

As you may know, up until now, NVDA has been a 32-bit program. We are working towards NVDA 2026.1 being 64-bit.  Almost all consumer PCs made in the last decade or so are 64-bit and Windows 11 only runs on 64-bit processors.

We've hit a milestone today - nvda_snapshot_alpha-52930,7778bb6c.exe is now available, and it is a 64-bit program! You can test it out from: https://download.nvaccess.org/snapshots/alpha/

Note, it IS still an alpha build, so there may be issues and bugs, but we wanted to share it as we know a number of people are interested in trying it out. There are a few caveats to be aware of:

1) If you install a 64-bit build of NVDA, you cannot safely downgrade to 2025.3 or earlier.  You will need to uninstall NVDA and re-install the older version.
2) Windows 10 is now the minimum Windows version supported.
3) It won't work on ARM devices running Windows 10.
4) It won't work if you have a 32-bit version of Windows.
5) This version is a "2026.1" alpha, so existing add-ons won't work.
6) When you upgrade, you may lose the secure desktop configuration (the configuration NVDA uses on the logon screen), which may revert to the default config.

Now, to the exciting part: On-device image description!

Press NVDA+Windows+, to get an AI-generated image description. This is generated locally on the device; no information is sent to the internet.

To use these image descriptions, NVDA will need to download approximately 230MB of data from the internet for the model the first time you enable it.

To enable AI generated image descriptions:
1. Press NVDA+control+g to open NVDA's general settings
2. Press CONTROL+TAB to get down to "AI Image Descriptions"
3. Press TAB to "Enable image captioner".
4. Press SPACEBAR to enable this option.
5. Press ENTER to close settings.
6. NVDA will prompt to download the data it needs. Press ENTER to allow this.

The data is downloaded (there is no progress bar, we are aware of that) and NVDA advises when it is done. After that, you can start using the image captioner.
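
For anyone curious what that one-time download involves, below is a minimal, unofficial sketch using the huggingface_hub Python library. The repository name is the one identified later in this thread, and this is only an illustration, not NVDA's actual download code:

    # Unofficial sketch only - not NVDA's code. Requires: pip install huggingface_hub
    from huggingface_hub import snapshot_download

    # Fetch the captioning model files once; later calls reuse the local cache.
    model_dir = snapshot_download(repo_id="Xenova/vit-gpt2-image-captioning")
    print("Model files cached at:", model_dir)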

Note that this ONLY gives English descriptions currently. Our plan is to bring internationalisation to this as soon as possible.

There are other exciting updates already in this alpha for 2026.1 you can check out:
- Report when multiple items can be selected in a list control
- The status bar can be reported in VS Code
- Performance improvements on ARM64
- Spelling errors can be reported with a sound or Braille rather than speech
- VirusTotal scan results are available in the details for add-ons in the add-on store
- In the add-on store you can see the latest changes for add-ons.

If you do test out this build, please do let us know how you find it. And don't forget to file any bugs you find on GitHub: https://github.com/nvaccess/nvda/issues

Kind regards

Quentin

--

Quentin Christensen
Training and Support Manager

NV Access

Subscribe to email updates (blog, new versions, etc): https://eepurl.com/iuVyjo

mattias

Oct 3, 2025, 6:01:34 AM
to nvda-...@nvaccess.org

And which key is "Windows"?

I tried with NVDA + my left Windows key, but no luck.

Sent from Mail for Windows


Quentin Christensen

Oct 3, 2025, 6:06:22 AM
to nvda-...@nvaccess.org
Sorry Mattias,

The correct keystroke is NVDA+Windows+comma. (I copied the details from the release notes, which use the comma symbol; normally I try to write out symbols in words, to avoid them being lost, just as happened here.)

Also, to answer another question someone asked: yes, you can set up this alpha as a portable copy of NVDA. I got so caught up with the caveats for installing it that I forgot to mention the way many will try it, setting it up as a portable copy, and yes, that should work like any other portable copy of NVDA.

Kind regards
Quentin

mattias

Oct 3, 2025, 6:11:04 AM
to nvda-...@nvaccess.org

Oops, my bad.

My NVDA doesn't read special characters like the comma. I have it turned off. Special characters... I have forgotten the English word for it.

Fettah Pınar

Oct 3, 2025, 6:17:21 AM
to nvda-...@nvaccess.org
Hello Quentin!
First of all, thank you so much for sharing this wonderful development with us. The innovations are quite exciting and impressive.
I would like to ask you a question that's been on my mind.
As I understand it, image recognition will work entirely within the device and will not require the internet.
My question is this: won't this significantly impact the quality of image recognition?  
I mean, isn't the accuracy of image recognition higher when done online?  
Which AI model are you using for image recognition, and what is the accuracy rate?


Quentin Christensen

Oct 3, 2025, 6:17:49 AM
to nvda-...@nvaccess.org
Not at all, that's exactly why I usually write the word out.  Sorry for neglecting it!

Johann Tan

Oct 3, 2025, 6:45:08 AM
to NVDA Screen Reader Discussion, Quentin Christensen
Hi.
Is there a way, when running this alpha, to not have it read my current NVDA config when launching the installer? I'm afraid it will do something to my current NVDA config that could corrupt it.

Gene Asner

Oct 3, 2025, 7:00:43 AM
to nvda-...@nvaccess.org
You give a command to get descriptions, but how do you tell the feature what image or images you want descriptions of? Does it give descriptions of all images on a page?

Gene

Mujtaba Merchant

Oct 3, 2025, 8:10:50 AM
to nvda-...@nvaccess.org

Hi Quentin,

 

Thanks for sharing this exciting update about the new NVDA alpha build. I'm thrilled to hear about the on-device image description feature, which is going to be a game-changer for users.

 

I do have one question regarding the shortcut key, NVDA+Windows+comma, to activate the on-device image description. If someone is using the desktop keyboard layout, this shortcut might be a bit of a stretch (literally!). I tried it and my fingers felt like Spiderman trying to web-sling across the keyboard!

Gene Asner

Oct 3, 2025, 8:25:09 AM
to nvda-...@nvaccess.org
It doesn't matter what keyboard layout you are using. Set caps lock to be an NVDA key; you can change this in the keyboard settings dialog. You can then press caps lock and the Windows key with the left hand and type comma with the right. You can keep insert as another NVDA key; just set caps lock as an NVDA key as well and leave the insert setting as it is.

Gene


Mujtaba Merchant

Oct 3, 2025, 8:28:58 AM
to nvda-...@nvaccess.org
Thanks for the tip about setting the capslock as an NVDA key to make the shortcut easier to use. That way, I can use caps lock with the Windows key and comma without having to stretch my fingers too much. I'll definitely try that out. Your suggestion makes it much more manageable. Thanks again for sharing this helpful advice.

muhammed ali çiçek

Oct 3, 2025, 9:09:32 AM
to nvda-...@nvaccess.org
"Hello, I haven't tried it yet, but I think a small, open-source language model is being used. Regarding the quality of the descriptions, I assume it won't be able to provide the exact descriptions we want in the first versions. If you want a better description, try the Cloud Vision add-on. You're probably already using it, but I wanted to mention it.
Best regards."


Christo de Klerk

Oct 3, 2025, 10:01:53 AM
to nvda-...@nvaccess.org

Hello


I have just installed the Alpha and with anxious anticipation gave the new image description a test run.


So far I am unimpressed in the extreme. This does not look anywhere near worth the effort. Here is one example, but I could give more: I asked it to describe an image I received via WhatsApp. Its description read: "A man looking at his cellphone". I showed Be My AI the same image and here is its description:


A screenshot of a Facebook feed on a mobile device. At the top of the screen, the time is 11:15, and the battery is at 57%. There are several navigation icons for Facebook at the top, indicating you are on the home feed.

The main visible post is from the group "Alberton What's Happening" by Nicole Moneron. The post is a warning reading: "Please be careful driving on the N12 towards voortrekker Bridge there's guys throwing rocks over the bridge and then a few guys on each side of the road ready to run." The post has received 4 angry reactions, 6 comments, and options to Like, Comment, and Share.

Below the post, Jonnno Moyle has commented: "Hate the human race."

There is a comment box with a profile picture of a child, ready for a new comment.

Below this, another post from "Dream Mansions" starts with the headline: "VIDEO SH0CK: The newly leaked 911 audio from the Charlie Kirk incident will leave you speechless. This is far more shocking than we were told..."

Above Nicole Moneron's post, part of a previous post is visible, showing images of outdoor scenes including a yard and a pool area, with a "+12" indicating more photos. This post has 22 likes, 1 comment, and 7 shares.


[End of description]


I realise that the image description feature is offline and based on a small language model, but it hardly seems usable. I will stick to the AI add-ons.


Kind regards


Christo

Sean Randall

Oct 3, 2025, 10:11:35 AM
to nvda-...@nvaccess.org
I'm as excited as anyone about these LLMs and the support they can give us. I regularly use them to interpret the visuals on my daughter's homework, describe family photos and memes, and extract data from charts, visual PDFs, etc.
I still rely on standard OCR to read my food items and things like that, though. Someone here said this integration was a game-changer for us, but I disagree there. That level of description will add very little value to me on the regular. My little laptop just doesn't have the power to do much decent with generative models yet compared with the online models, and that's fine.

So although it's a cool feature, a bit like those people saying how amazing TalkBack is now, screen readers still do a heck of a lot without needing a language model on board, to my mind.
Thanks

Sean


joseph....@gmail.com

Oct 3, 2025, 10:19:10 AM
to nvda-...@nvaccess.org

Hi,

Regarding add-ons and 64-bit NVDA (a separate topic), any add-on with 32-bit only dependencies will not work on 64-bit NVDA (even after forcefully enabling them), including several speech synthesizers and global plugins.

Cheers,

Joseph

--

Richard Turner

Oct 3, 2025, 10:22:27 AM
to nvda-...@nvaccess.org

When I went to the link provided for the alpha build, all I found were samples of snapshots.

I probably shouldn't be messing with alpha builds anyway, but I found nothing helpful at that link.

So, I will wait for a late beta or even the release…

Richard, USA

“Reality is the leading cause of stress for those who are in touch with it.”

– Jane Wagner, from The Search for Signs of Intelligent Life in the Universe.

My web site: https://www.turner42.com

Sarah Alawami

Oct 3, 2025, 11:17:04 AM
to nvda-...@nvaccess.org

I use basiliskLLM for my image descriptions. I craft my prompt and have it describe all of my images so I can do what I need to do later. So while this feature will be useful, there are other apps, better apps, which allow the use of more customizable LLMs.

Aamir

Oct 3, 2025, 12:56:41 PM
to nvda-...@nvaccess.org, Quentin Christensen

Hello Quentin


I've tested the image descriptions, and I'm afraid I can't offer any positive feedback. Maybe it's just me, but so far, I haven't gotten a single result I'd call even remotely accurate.
It almost seems as though the AI is coming up with descriptions of random objects that have nothing to do with the actual image.
For example, here's what it told me about my desktop screen:
"a colorful advertisement for a k orean fashion show"
This is just one example; it has failed consistently. When I requested a description of a webpage, it told me it was a picture of a blue water bottle.

I'm hoping this will be improved before official release, as right now this just isn't up to the mark.
Thanks

muhammed ali çiçek

Oct 3, 2025, 12:59:30 PM
to nvda-...@nvaccess.org
"Hello, I'm very happy that this feature is coming. Which language model does it use, and when do you plan to release the stable version?
Best regards."


joseph....@gmail.com

Oct 3, 2025, 1:00:16 PM
to nvda-...@nvaccess.org

Hi,

This might be due to the model in use: it looks like the model is a Xenova GPT-2 based model, as NVDA tries to download it from huggingface.co (the model hosting hub) the first time the AI image captioner is enabled.

Cheers,

Joseph

Zvonimir Stanecic

Oct 3, 2025, 5:03:29 PM
to nvda-...@nvaccess.org, Gene Asner
Hi Gene,

You open the image in some image viewer, or in any place where you need a description, and then use the shortcut.


Sean Budd

Oct 6, 2025, 12:20:12 AM
to NVDA Screen Reader Discussion
Hi all,

Just answering a few questions/issues raised.
  • As I understand it, image recognition will work entirely within the device and will not require the internet. My question is this: won't this significantly impact the quality of image recognition? I mean, isn't the accuracy of image recognition higher when done online? Which AI model are you using for image recognition, and what is the accuracy rate?
The quality of the image description really depends on the model involved. Online models often use higher processing power and can give better descriptions than local models. We have opted for local models for freedom, ease of use and data privacy reasons. Using online models generally requires you to pay to use them and to set up API keys. Local models ensure your data is never sent to a third party, usage is free, and you don't need to set up keys.
The model we are currently shipping with is Xenova/vit-gpt2-image-captioning; however, we plan to add the option to download more models. Newer and better models are always coming out, so we've designed the code to be easily upgraded to the latest and best models. We expect the quality of local models to continue to improve to a point where they are competitive with online models. I'm not aware of a clear accuracy statistic for this model, but with any model, expect accuracy to vary.
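
For anyone who wants to experiment with this kind of captioning outside NVDA, here is a rough sketch using the Hugging Face transformers pipeline. It assumes nlpconnect/vit-gpt2-image-captioning, which I understand to be the upstream PyTorch variant of the Xenova model named above, and "photo.jpg" is just a placeholder path; this is not how NVDA itself invokes the model:

    # Unofficial sketch - illustrates local image captioning, not NVDA's implementation.
    # Requires: pip install transformers torch pillow
    from transformers import pipeline

    captioner = pipeline("image-to-text", model="nlpconnect/vit-gpt2-image-captioning")
    result = captioner("photo.jpg")  # placeholder path to any local image
    print(result[0]["generated_text"])  # prints a short one-sentence caption
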
  • Is there a way, when running this alpha, to not have it read my current NVDA config when launching the installer? I'm afraid it will do something to my current NVDA config that could corrupt it.
The NVDA installer is designed to never change your current config. Any config changes performed will not be saved to disk, and will only happen while the installer is running. You should be able to open the installer and create a portable copy without risking your config.
  • When I went to the link provided for the alpha build, all I found were samples of snapshots. I probably shouldn't be messing with alpha builds anyway, but I found nothing helpful at that link. So, I will wait for a late beta or even the release…
Those are our alpha builds. The first item on the list is the most recent alpha, and the ones below are older alphas in descending order. If you want to give image descriptions a try, I would suggest creating a portable copy. But if you'd like to wait for the beta or stable release, that's totally fine.
  •   Which language model does it use, and when do you plan to release the stable version?
As mentioned above, we are using a simple model for now, but the default model may change and we will be adding the ability to use other models. The stable release will be 2026.1. Unfortunately, we cannot provide a release date; NVDA releases come when they are ready and stable, rather than on a fixed schedule.

Fettah Pınar

Oct 6, 2025, 5:18:53 AM
to nvda-...@nvaccess.org
Hello Sean,
Thank you very much for this detailed explanation.
Actually, there’s one more topic I’d like to ask about — SAPI 5 voices.
As you may know, SAPI 5 has both 32-bit and 64-bit versions.
Some voices work on 32-bit SAPI 5, while others run on the 64-bit version.
Generally, older SAPI voices appear among the 32-bit ones, whereas newer neural-based voices are found within the 64-bit SAPI 5 voices.
Back when I was using JAWS, we could solve this issue quite easily.
In the synthesizer settings, there were two SAPI 5 versions visible:
one labeled “SAPI 5 X” and another “SAPI 5 64.”
How do you plan to address this issue in NVDA?
Will it be possible to use both SAPI 5 versions?
Best regards.


Harun Çetinkaya

Oct 6, 2025, 6:27:27 AM
to nvda-...@nvaccess.org
Hello, I think this model's most critical issue is its limited ability to provide a description. Unfortunately, I can't get more than a few words out of it. Yes, you can get a minimum of information, but that's not enough. Concise and clear descriptions aren't always sufficient; sometimes a detailed description of a photograph is required. As far as I can tell, this model in its current state can't do that. My suggestion is this: what if we added a follow-up question feature, prompted it to describe the image in more detail, and forced the model to provide a longer description?


Rui Fontes

Oct 6, 2025, 7:46:48 AM
to nvda-...@nvaccess.org

Hello!


Any chance of having the text in languages other than English?


Best regards,

Rui Fontes
NVDA portuguese team




Sean Budd

Oct 6, 2025, 7:33:27 PM
to NVDA Screen Reader Discussion
Hey all,

Thanks for the follow up questions
  • Will it be possible to use both [32 and 64bit] SAPI 5 versions?
Right now it is only possible to use 64-bit synthesizers. We are hoping to be able to add support for 32-bit synthesizers as well; however, this might not be possible.
  • My solution is this: What if we added a question feature to its feedback, prompted it to describe the image in more detail, and forced the model to provide a longer description? 
We'd like to be able to add the ability to ask follow-up questions about the image; however, this is not going to be addressed until other issues with the feature are solved.
  • Any change to have the text in other languages than english?

Yes, using an AI model to translate the text is planned for the immediate future.
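
As a rough illustration only (not something NV Access has announced), translating an English caption with a local model could look like the sketch below; the Helsinki-NLP model and the sample caption are purely illustrative choices:

    # Unofficial sketch - local machine translation of a caption, not NVDA code.
    # Requires: pip install transformers torch sentencepiece
    from transformers import pipeline

    translator = pipeline("translation", model="Helsinki-NLP/opus-mt-en-fr")  # example model
    caption = "a man looking at his cellphone"  # sample English caption from this thread
    print(translator(caption)[0]["translation_text"])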

Acer B-T

Oct 6, 2025, 9:02:20 PM
to nvda-...@nvaccess.org
Hi,

I too was questioning 32-bit synthesizer support, as I do occasionally still use some 32-bit SAPI 5 voices, and as expected those aren't available in the current alpha. How hard would it be to build a bridge application that at least allows 32-bit SAPI voices to work? I asked Sam Tupy about this some time ago, since his game engine, NVGT, is also 64-bit and doesn't support 32-bit voices, and he said it might be something for him to try, building a bridge app to bring 32-bit SAPI voices into that. I'd hate to fully lose support, since the 32-bit voices I use are favorites, and they sound good even today.



joseph....@gmail.com

Oct 6, 2025, 9:07:41 PM
to nvda-...@nvaccess.org

Hi,

If there is a way to detect 32-bit SAPI5 voices, then writing a bridge executable might be possible. This is also applicable for 32-bit speech synthesizer add-ons.
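
As a minimal sketch of the detection step, something like the following lists the SAPI 5 voices visible to a process via COM; run it under a 32-bit Python interpreter and it should see the 32-bit voice tokens a bridge process could expose. This is only an illustration, not an actual bridge:

    # Unofficial sketch - list the SAPI 5 voices visible to this process's bitness.
    # Requires: pip install pywin32
    import win32com.client

    speaker = win32com.client.Dispatch("SAPI.SpVoice")
    for token in speaker.GetVoices():
        print(token.GetDescription())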

Cheers,

Joseph


Fettah Pınar

Oct 7, 2025, 1:05:15 AM
to nvda-...@nvaccess.org

Hello.

I agree with you on this matter.

Being able to use 64-bit SAPI 5 voices is really great, but it's equally important not to lose the 32-bit SAPI 5 voices.

I hope a bridge can be developed for 32-bit in this regard.



Steve Nutt

Oct 7, 2025, 5:28:33 AM
to nvda-...@nvaccess.org

I agree with you, Sean, but that example from Christo shows how poor it is with NVDA. It probably won't get any better with free models either.

I am guessing JAWS includes premium versions of those LLMs, and Google of course has their own, so TalkBack will always be good at that.

All the best,

Steve

--

Computer Room Services

77 Exeter Close, Stevenage, Hertfordshire, SG1 4PW

T: +44(0)1438-742286, M: +44(0)7956-334938

E: st...@comproom.co.uk, W: https://www.comproom.co.uk

FARHAN ISHRAK Fahim

Oct 8, 2025, 3:46:42 PM
to nvda-...@nvaccess.org
I never like it when AI translates my native language into English. Please, if possible, add an option to choose the language. I would always love to read AI responses about a Bangla image in Bangla instead of a translation.