Though we have some good examples of serving data to the wider business intelligence community, we are still looking for excellent, compelling examples that show the value of PostgreSQL/PostGIS as a data service for desktop application users.
Are there excellent examples that general users would appreciate?
_______________________________________________ postgis-users mailing list postgi...@lists.osgeo.org https://lists.osgeo.org/mailman/listinfo/postgis-users
Hi,
In our GIS team (small team of 10, local government, province level) we use a lot of SQL in collaboration with Gradle/GRETL and Jenkins for our automated data flows and statistics. It is amazing how much analysis and data aggregation you can do with SQL only - without having to touch QGIS or ArcGIS or any other so called "business intelligence" tools (that are often quite expensive).
SQL knowledge is a prerequisite for every new employee who wants to join our team - without it, they wouldn't get the job. Most of our employees are not programmers, though.
I also teach PostgreSQL/PostGIS training courses (usually 2-3 days). Many of the participants are not programmers but still manage to do analysis with SQL. Typical course participants are scientists and people working at engineering companies or in government.
So - I do think there is a significant number of people who use SQL, but aren't programmers.
Andreas
All,
I almost skipped over this thread from the beginning because I wasn't understanding the original question very well, mostly because of my own labels for these types of services/datasets. I refer to things like this as "derived" data. The actual source data doesn't exist (unless it gets cached for performance reasons); rather, it is a blend of data produced by a SQL call.
I agree with other comments here too, that this type of product is usually the sort of thing that can't easily be done by most GIS apps out of the box. It also provides a pipelining of processes of sorts, for pseudo-processing on the fly.
I’ve got a few different examples of this, some fairly simple, some very complicated, that are treated as datasets by the end users because the SQL is embedded in a config, such as a MapServer Mapfile. More and more of our datasets are being created in this fashion vs. historically sourcing a “real” dataset directly.
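As a sketch of what embedding SQL in a MapServer Mapfile looks like, here is a minimal hypothetical layer definition (the layer, table, and connection names are invented; the general `DATA "geom FROM (subquery) ..."` form is standard MapServer/PostGIS syntax):

```
LAYER
  NAME "derived_parcels"
  TYPE POLYGON
  CONNECTIONTYPE POSTGIS
  CONNECTION "host=dbhost dbname=gis user=mapuser"
  # The "dataset" the end user sees is really this SQL subquery,
  # blending parcels with zoning on the fly.
  DATA "geom FROM (SELECT p.gid, p.geom, z.zone_name
                   FROM parcels p
                   JOIN zoning z ON ST_Intersects(p.geom, z.geom)) AS subquery
        USING UNIQUE gid USING srid=4326"
END
```

To the client, this renders like any other polygon layer; the derivation happens in the database at request time.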
In general, the end users are starting to think in terms of, and expect, this type of analysis approach to the data, especially related to time indexing and looking at data over time. Consequently, this is pushing me (us) to start thinking about time indexing of data and how to store datasets accordingly.
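A minimal sketch of what time-indexed storage might look like in PostGIS (table and column names here are hypothetical, not from the original post):

```sql
-- Hypothetical time-indexed observations table.
CREATE TABLE observation (
    id       bigserial PRIMARY KEY,
    geom     geometry(Point, 4326),
    valid_at timestamptz NOT NULL,
    value    double precision
);

-- Indexes supporting spatial and over-time queries.
CREATE INDEX observation_valid_at_idx ON observation (valid_at);
CREATE INDEX observation_geom_idx     ON observation USING gist (geom);

-- Example: daily averages over the last 30 days.
SELECT date_trunc('day', valid_at) AS day,
       avg(value)                  AS mean_value
FROM   observation
WHERE  valid_at >= now() - interval '30 days'
GROUP  BY day
ORDER  BY day;
```

Storing a timestamp with every row and indexing it is the simplest way to let "derived" views slice the same dataset at any point in time.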
Bobb
From: postgis-users <postgis-us...@lists.osgeo.org> on behalf of Shaozhong SHI <shisha...@gmail.com>
Reply-To: PostGIS Discussion <postgi...@lists.osgeo.org>
Date: Friday, September 25, 2020 at 3:11 AM
To: PostGIS Discussion <postgi...@lists.osgeo.org>
Subject: Re: [postgis-users] Promoting PostgreSQL and PostGIS to wider business intelligence community
Ruven,
In our case it’s both, but where the same type of request occurs frequently, it can be automated; consequently we, the admins, set up these types of recurring, SQL-based “derived” dataset services.
The admins are much more versed in the SQL side, but that shouldn’t preclude those end users who do know how to write SQL, and they are who we target in our support structure. In truth, we the admins set up the majority of the SQL, though. In the last few years, the number of end users in our organization with an interest in using SQL has been increasing, but it is still, overall, a minority of the end users.
Yes, the inundation polygons are the results of numerous hydraulic models run at a range of discharges. Tables of predicted discharges from our hydrology models then select the associated inundation polygons for display via a view.
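A hedged sketch of how such a view might select the matching polygons (table and column names - `inundation_polygon`, `predicted_discharge`, `reach_id`, `discharge_m3s` - are invented for illustration; the original post does not describe its schema):

```sql
-- Hypothetical view: for each reach, pick the modelled inundation
-- polygon whose discharge is the smallest value at or above the
-- currently predicted discharge.
CREATE VIEW current_inundation AS
SELECT DISTINCT ON (i.reach_id)
       i.reach_id,
       i.discharge_m3s,
       i.geom
FROM   inundation_polygon  AS i
JOIN   predicted_discharge AS p
       ON  p.reach_id = i.reach_id
       AND i.discharge_m3s >= p.discharge_m3s
ORDER  BY i.reach_id, i.discharge_m3s;
```

Because it is a view, the displayed flood extent updates automatically whenever the predicted-discharge table is refreshed, with no reprocessing step.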