I program Windows applications for the visually impaired using both speech recognition and text-to-speech. This is not easy and can be very complex, as Hal mentioned. I am currently learning AI2; I am an experienced programmer, so it is not difficult for me. I think I may be able to help your NGO get started.
I too recommend you start using AI2 immediately. I also recommend you switch to the Chrome browser; currently it works better with AI2. "The key thing for your app is to try to use Google Location Services to generate the directions." Probably, though there are other things that can supplement this.
What have you done so far? Have you tried any experiments with the LocationSensor? Have you read anything about Google Location Services and geolocation? A recent addition to the App Inventor tutorials shows an app that makes use of geolocation services; there is both an aia (source code) and a compiled app. Here is a link to the tutorial: Map It Tutorial for AI2. In some ways, this is what you are looking for, but without speech recognition and the other tools necessary to make it work for a visually impaired person.
I have some code I developed showing things that can currently be done using the LocationSensor. I will look at adding speech recognition and the ability to talk to the user. The app tests some of the features of the location sensor; depending on the device you use it on, it provides information regarding the positional 'accuracy' of the device's GPS and its proximity to an adjacent known location.
Some things to consider before you embark on this project:
1) the GPS in most phones is not precisely accurate: expect +/- 50 feet on average, perhaps as good as 5 feet on occasion.
2) phones also use other information (cell towers, WiFi) to determine location. This is less accurate than GPS.
3) the GPS receivers in phones are not very sensitive. They lose the signal inside buildings and struggle in urban environments, where buildings obscure their line of sight to the positioning satellites.
4) not all phones have a GPS. Phones without one have much reduced positional accuracy compared to those that do.
5) there is an option in the location sensor to determine proximity to a destination. My experiments indicate it is not very accurate; +/- 50 feet might be achievable reliably.
Because of these limitations, your app will need plenty of error-control routines. You might have to use the orientation sensor to let the user control the app (shaking the phone, turning it left to right, etc.). My experiments with the orientation sensor show this is difficult to do, and more difficult still to ensure the gestures used as commands work more than half the time.
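To show why the +/- 50 foot GPS error forces error control around "have I arrived?", here is a minimal sketch of a proximity check that budgets for the error explicitly. It is in Python only for illustration, since AI2 itself is block-based; the `distance_m` helper, the `GPS_ERROR_M` figure, and the thresholds are all assumptions, not anything from AI2.

```python
import math

def distance_m(lat1, lon1, lat2, lon2):
    """Great-circle (haversine) distance in meters between two GPS fixes."""
    R = 6371000.0  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * R * math.asin(math.sqrt(a))

GPS_ERROR_M = 15.0  # assumed error budget, roughly +/- 50 feet

def arrival_status(cur_lat, cur_lon, dest_lat, dest_lon, threshold_m=20.0):
    """Classify proximity conservatively, given GPS uncertainty."""
    d = distance_m(cur_lat, cur_lon, dest_lat, dest_lon)
    if d + GPS_ERROR_M <= threshold_m:
        return "arrived"            # even the worst-case error puts us inside
    if d - GPS_ERROR_M <= threshold_m:
        return "probably arrived"   # inside only if the GPS fix is favorable
    return "not there yet"
```

The point of the three-way answer is that a blind user must never be told "you have arrived" when the fix could still be a bus length away; the app should speak the uncertain case differently.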
I think some of what you hope for is possible:
The person will speak an address and the app will identify a possible address; being able to do this depends on knowing how to formulate the correct words to say. If the phone is connected to a network, you should easily be able to show the path between the current location and the destination; however, you cannot do this easily if the phone is not connected.

How about having the individual select from several 'set' locations in a list? The user could choose an item from the list, and the phone could speak the address back to the user and ask for confirmation. This is easier to code and more reliable than using the speech recognizer to formulate the address. It would limit the destinations the phone knows, but might be worthwhile.

The phone can tell the user "where am I" and provide a street address if one is available in the location services (this is not always true for every point on the planet). If the destination address is well formed by the speech recognizer, or taken from a list, getting directions is very possible, even not difficult. The Map It app can almost do what you asked: "i want to fetch location, latitude and longitude of the destination address".
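The "choose from a set list, then confirm by voice" idea above can be sketched as follows. In AI2 you would wire a ListPicker to the TextToSpeech and SpeechRecognizer components; plain Python stands in for that logic here, and every name and address in it is a made-up placeholder.

```python
# Hypothetical preset destinations; in a real app the NGO would load these
# for each user. The names and addresses are placeholders, not real data.
PRESET_DESTINATIONS = {
    "home": "12 Example Street",
    "clinic": "34 Sample Avenue",
    "grocery store": "56 Placeholder Road",
}

def speak(text):
    """Stand-in for AI2's TextToSpeech.Speak block."""
    print("PHONE SAYS:", text)

def confirm_destination(choice, heard_reply):
    """Speak the chosen address back, then act on the user's yes/no reply.

    `heard_reply` stands in for the SpeechRecognizer result.
    Returns the confirmed address, or None if not confirmed.
    """
    address = PRESET_DESTINATIONS.get(choice)
    if address is None:
        speak("I don't know that destination.")
        return None
    speak("Did you mean " + address + "? Say yes or no.")
    if heard_reply.strip().lower() == "yes":
        return address  # hand this well-formed address to the directions step
    speak("Okay, let's try again.")
    return None
```

The confirmation round trip is the error control: the recognizer only has to classify "yes" versus "no", which is far more reliable than transcribing a whole spoken street address.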
Think about these limitations. If they are not significant issues, come back here and we can discuss what appears to be a very worthwhile project.