Statistics Table


Jason

Aug 5, 2024, 7:40:54 AM
to geihasemo
I have used Zonal Statistics as Table to find the average value of raster pixels over various polygons. The feature zone data contains 1,794 polygons, but the output table only contains 1,299 zones. Is there a possible reason for this? The zones that are missing from the output table appear to be completely random, unrelated to the size or shape of the zone. I have also tried the Zonal Statistics 2 tool from the Spatial Analyst Supplemental Tools, but the result is the same. I am trying to do zonal statistics for the following picture:

Additionally, I have tried the same method with a different raster, and a different number of records appears in the new output table; it is still fewer than the full number of polygons in the shapefile. With a third raster (at much higher resolution), the resulting table is only 1 short of the number of polygons in the shapefile. My assumption is that the coarser rasters lack the granularity to cover some of the polygons in my shapefile, so polygons smaller than a pixel drop out.


You are on the right track about pixel size. ArcMap performs an automatic vector-to-raster conversion in the zonal statistics tools when necessary. This can be problematic because approximation introduces error, and error propagates. That said, you need to consider which assumptions you are willing to make and which errors are acceptable: i.e., think about what you lose by converting your feature class to raster versus resampling your raster to a higher resolution.


Esri recommends that you always use raster data for your zones. They have a good discussion of vector-to-raster zonal statistics here (scroll down to the bullet that starts with "If the zone input is a feature dataset").


I am trying to run the Zonal Statistics as Table tool on my data in order to sum the number of cells of each type of landcover in each county. For example, I want it to sum how many Deciduous Forest, Pasture/Hay, etc. cells there are in each county individually.


I have tried two different approaches: 1. Using the Extract by Attributes tool on the raster to extract each individual cell type (so one raster is created for deciduous forest cells, one for pasture/hay, and so on), then running the zonal statistics on each of those one at a time (not sure whether this is necessary). 2. Just running the Zonal Statistics as Table tool on the full raster.


I have tried EVERYTHING (saving it in a personal geodatabase, saving it in a file, removing spaces from file names, making sure the zone feature name is less than 17 characters, etc.) and I still keep getting either "ERROR 999999" or no error at all under Messages, nor any indication that it has stopped; instead the red X circle just appears next to


Does anyone have any idea what I might do to make this tool work? And do I need to run the Extract by Attributes tool, or should the Zonal Statistics tool be able to sum each cell type (e.g., deciduous forest, developed land, etc.) individually in each county?
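As a side note on the second question: cross-tabulating cell counts per county can be done in one pass over the raster, which is essentially what Spatial Analyst's Tabulate Area tool does, so per-class extraction should not be strictly necessary. A minimal pure-Python sketch of the idea, using toy lists of lists rather than real rasters:

```python
from collections import Counter

def landcover_counts(county_grid, landcover_grid):
    """Count cells of each landcover class inside each county in a
    single pass over two aligned grids -- no per-class extraction.
    Returns a Counter keyed by (county, landcover) pairs."""
    counts = Counter()
    for county_row, landcover_row in zip(county_grid, landcover_grid):
        for county, landcover in zip(county_row, landcover_row):
            counts[(county, landcover)] += 1
    return counts
```

With real data the grids would come from your county raster and the landcover raster, aligned to the same cell size and extent.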


- Yes, I joined tables to match my Lyme disease case numbers with the county shapefiles. Is that a problem for running zonal statistics? If so, how should I go about fixing it and getting the Lyme case data and population data matched with the shapefile?


- Yes, the Z folder is a server that I am doing all of this work on, because I am working on a university computer. Is that a problem for this tool? I can move the data to the local computer and then move it back to the server before I shut down. Will that cause any issues, or just involve restoring the path names?


- I tried the normal (non-table) Zonal Statistics tool on one county, and it just gave me a black box over the area. I assume that means the tool didn't work. I also tried the Zonal table tool on just one of the cell types, and that also didn't work.


Did you try making the join permanent by exporting to a new feature class, then copying it to a local folder with a simple path (e.g., C:\test)? You have to simplify the process: joined data, complex paths, networks, and files that are too big just complicate issues. Give it a go.


Yes, I did exactly that: shortening file names and making the join permanent by exporting it to a new feature class, except that I kept it on the server in the same folder as the main map, and the tool luckily still worked. Thank you!


I will do that! I have one more question if you don't mind, I want to tell the Zonal Stat tool to only count cells that are completely inside a county, not ones that are only half inside a county (in other words exclude raster cells in the SUM that are split by the county boundary line). However, I do not see an option for this in the Zonal Stat tool window. Do you have any idea how I could accomplish this?


That is correct! Since the zones are converted to raster first, a pixel can only be part of a single zone. If you set the environment settings to match the cell size and snap raster of the raster you want to extract the statistics from, you will have no problems. Convert the zones to raster yourself first if you want better control over how a pixel is assigned to a zone.
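The one-pixel-one-zone behaviour can be illustrated with a minimal pure-Python sketch (toy lists of lists standing in for rasters): once the zones are rasterised, every pixel carries exactly one zone ID, so the statistic is a straightforward accumulation per ID, and no pixel can be split across a boundary.

```python
from collections import defaultdict

def zonal_mean(zone_grid, value_grid):
    """Mean value per zone over two aligned grids.  Because each pixel
    holds exactly one zone ID, each value is counted in exactly one
    zone -- the same property rasterised zones have in ArcGIS."""
    totals = defaultdict(float)
    counts = defaultdict(int)
    for zone_row, value_row in zip(zone_grid, value_grid):
        for zone, value in zip(zone_row, value_row):
            totals[zone] += value
            counts[zone] += 1
    return {zone: totals[zone] / counts[zone] for zone in totals}
```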


This section of uscourts.gov provides statistical data on the business of the federal Judiciary. Specific publications address the work of the appellate, district, and bankruptcy courts; the probation and pretrial services systems; and other components of the U.S. courts.


Filter for statistical tables by topic, report, or date. Topics include court of appeals, district and bankruptcy courts, and the U.S. Supreme Court. The district court topic includes sub-topics for data on jury, civil, criminal, magistrate judges, probation, pretrial services, and trials.


When the ANALYZE TABLE statement is executed, MariaDB makes a call to the table's storage engine, and the storage engine collects its own statistics for the table. The specific behavior depends on the storage engine. For InnoDB, see InnoDB Persistent Statistics for more information.


When the ANALYZE TABLE statement is executed, MariaDB may also collect engine-independent statistics for the table. The specific behavior depends on the value of the use_stat_tables system variable. Engine-independent statistics will only be collected by the ANALYZE TABLE statement if one of the following is true:


In MariaDB 10.4 and later, the use_stat_tables system variable is set to preferably_for_queries by default. With this value, engine-independent statistics are used by default, but they are not collected by default. If you want to use engine-independent statistics with the default configuration, then you will have to collect them by executing the ANALYZE TABLE statement with the PERSISTENT FOR clause. It is recommended to collect engine-independent statistics on an as-needed basis, so typically one will not have engine-independent statistics for all indexes/columns.


The syntax for the ANALYZE TABLE statement has been extended with the PERSISTENT FOR clause. This clause allows one to collect engine-independent statistics only for particular columns or indexes. This clause also allows one to collect engine-independent statistics, regardless of the value of the use_stat_tables system variable. For example:
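A minimal illustration of the clause (table, column, and index names here are placeholders):

```sql
-- Collect engine-independent statistics for two columns and one index:
ANALYZE TABLE my_table PERSISTENT FOR COLUMNS (col1, col2) INDEXES (idx1);

-- Or for all columns and indexes of the table:
ANALYZE TABLE my_table PERSISTENT FOR ALL;
```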


It is possible to update the statistics tables manually. One should modify the table(s) with regular INSERT/UPDATE/DELETE statements. The statistics data will be re-read when the tables are re-opened; one way to force all tables to be re-opened is to issue the FLUSH TABLES command.
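A sketch of such a manual update (the database name, table name, and cardinality value are placeholders; the engine-independent statistics live in the mysql.table_stats, mysql.column_stats, and mysql.index_stats tables):

```sql
-- Override the row-count estimate for one table, then force a re-read:
UPDATE mysql.table_stats
   SET cardinality = 1000000
 WHERE db_name = 'mydb' AND table_name = 'mytable';

FLUSH TABLES;
```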


Now that the wrong data is cleared, I created two queries (procedures) to recalculate the sum based on the states in the statistics or statistics_short_term tables:

Each procedure takes two arguments: the ID of the sensor (metadata_id) and the date from which you want to recalculate. If you have a lot of entries in that table, be sure to select a date as close as possible to the start of the problems.
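The recalculation such procedures perform can also be sketched in Python against a SQLite copy of the database. This is a sketch under assumptions: the column names (metadata_id, start_ts, state, sum) follow the Home Assistant statistics table, and the delta logic (negative deltas treated as meter resets and ignored) mirrors what the procedures in this thread describe. Verify both against your own schema, stop HA, and back up the database file before running anything like this.

```python
import sqlite3

def recalculate_sum(conn, metadata_id, from_ts):
    """Rebuild the cumulative `sum` column from the `state` readings
    for one sensor, starting at the Unix timestamp `from_ts`."""
    cur = conn.cursor()
    # Last good state and cumulative sum before the corrupted window.
    cur.execute(
        'SELECT state, "sum" FROM statistics '
        'WHERE metadata_id = ? AND start_ts < ? '
        'ORDER BY start_ts DESC LIMIT 1',
        (metadata_id, from_ts))
    row = cur.fetchone()
    prev_state, running = row if row else (None, 0.0)
    # Walk forward in time, re-accumulating the deltas between states.
    cur.execute(
        'SELECT id, state FROM statistics '
        'WHERE metadata_id = ? AND start_ts >= ? ORDER BY start_ts',
        (metadata_id, from_ts))
    for row_id, state in cur.fetchall():
        if prev_state is not None:
            running += max(state - prev_state, 0.0)  # skip meter resets
        prev_state = state
        conn.execute('UPDATE statistics SET "sum" = ? WHERE id = ?',
                     (running, row_id))
    conn.commit()
```

Pick `from_ts` just before the first bad row, as the post above advises, so the loop touches as few rows as possible.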


After this update my Energy data is correct, except of course for the days where we deleted the data. However, the total energy consumed is not lost; it just gets added to the first correct period. (I can only post one picture as a newbie.)

So one day will show a lot of energy, but the overall monthly & yearly stats should be correct (unless of course the deleted data happens to span a month-end or year-end).


And lastly, as there was an update to Home Assistant 2022.11.0, I installed it, my sensor became visible in the Energy dashboard with the correct unit of measurement, and I was plotting my gas consumption.


My power smart meter had one spike of 800,000-something kWh, and now all my stats (yearly/monthly/weekly) are off and I cannot get them to display correctly again, despite removing the high spike from all recorded statistics.


Hi, I want to use the procedure to correct wrong values in my db.

Do you know how I can add the procedure in SQLite? I installed the SQLite add-on, but there seems to be no option to add a stored procedure.


You need to install a database client (SQLite or any other) on your own PC and connect to the database with that tool. Remember to stop HA while you are making database changes, and back up your data!


So, next question, after downloading the DB from the HA host and trying to update it with SQLiteStudio: is it possible that all the procedures in this thread are for MySQL databases? The default is SQLite, which, as I read, does not support stored procedures.
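SQLite indeed has no stored procedures, but the same fix can be run as plain SQL from a small script. A sketch under assumptions: the row id and corrected values below are placeholders, and the statistics table layout is only partially reproduced here. Point the connect() call at your copy of home-assistant_v2.db; ":memory:" plus a toy table keeps this sketch self-contained. Stop HA and back up the file first.

```python
import sqlite3

# ":memory:" stands in for your downloaded home-assistant_v2.db.
conn = sqlite3.connect(":memory:")
# Toy stand-in for the real statistics table (only the columns we touch).
conn.execute('CREATE TABLE statistics '
             '(id INTEGER PRIMARY KEY, state REAL, "sum" REAL)')
conn.execute("INSERT INTO statistics VALUES (42, 800000.0, 800123.0)")  # the spike

with conn:  # the UPDATE is what a stored procedure would have done
    conn.execute(
        'UPDATE statistics SET state = ?, "sum" = ? WHERE id = ?',
        (12.3, 456.7, 42),  # corrected values for the bad row
    )

fixed = conn.execute("SELECT state FROM statistics WHERE id = 42").fetchone()[0]
```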


I had similar issues: a sensor going wild, so my water-meter data was flooded with many liters that were never consumed. I managed to remove the incorrect values for each water-meter sensor via the statistics dialog.

Now I am left with calculations that are not correct and still use the deleted data.



How do I get this corrected? I do not have any clue what to do with the database as described above; it is way above my knowledge level.

Is there not an option for dummies? The goal of HA is to be simple; removing/adjusting incorrect values via the statistics dialog is easy and straightforward, but then it stops being simple and you need to be a DB engineer to understand what to do.

Hope someone has a simple and straight forward option to fix this.

Otherwise I will have to delete the water meter and set it up again.

Thanks for helping out.
