ArcGIS Server 10.1 Download Free Torrent


Luther Lazaro

Jun 12, 2024, 5:44:31 PM
to neronapo

Somehow our ArcGIS Server doesn't return logs anymore. This happens both in Manager and on the REST endpoint (which is no surprise, as Manager seems to use that endpoint as well). Instead of returning logs, the server just stays 'pending' indefinitely:




1) What is the version and OS for your ArcGIS Server?

2) Is there sufficient space (10 GB minimum) on the ArcGIS Server machine's drives for both the install directory and the ArcGIS Server configuration store and server directories?

-Christof

In addition to what Christof wrote above, I would make sure that the account running the ArcGIS Server service has access to the logs directory. From the web page you included in the screenshot, you should see a "Settings" icon in the upper right hand corner. Clicking this button will indicate the directory where the ArcGIS Server logs are stored. (By default, this is "C:\arcgisserver\logs\").
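As a quick way to reproduce the 'pending' behaviour outside Manager, you can call the Admin API's logs/query operation directly. A minimal sketch, assuming the standard Admin API root on port 6443; the host name and token below are placeholders:

```python
def build_log_query(admin_url, token, level="SEVERE"):
    """Build the URL and parameters for the ArcGIS Server Admin API
    'logs/query' operation. admin_url is the site's admin root,
    e.g. https://yourserver:6443/arcgis/admin (placeholder host)."""
    url = f"{admin_url.rstrip('/')}/logs/query"
    params = {
        "f": "json",     # request a JSON response
        "token": token,  # admin token from the generateToken operation
        "level": level,  # SEVERE, WARNING, INFO, FINE, ...
    }
    return url, params

url, params = build_log_query(
    "https://gisserver.example.com:6443/arcgis/admin", "ADMIN_TOKEN")
```

POST that URL with the encoded parameters and a short client-side timeout; if the request hangs rather than returning an error, the blockage is server-side, as it turned out to be here with the unresponsive Data Store machine.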

We did resolve the issue this morning. The VPS on which the datastore was installed was not responding/operating correctly. After rebooting the VPS everything operated normally. The logs are now available in the manager and on the REST-endpoint.

Even though we fixed the issue, we're not quite sure what role the Data Store plays in logging, or why an unreachable Data Store server would impact retrieval of the logs. Maybe you could shed some light on this.

We've just come across the same issue and, after two days of pulling our hair out, have found a solution that worked for us. For context, ours is a highly available Enterprise deployment with Portal, Server, and Data Stores on the same machine.

In short, a relational Data Store removed itself from one of the machines, which caused the primary Data Store to save its backups to the local C: drive instead of the designated shared folder. This slowly filled up the C: drive and caused the primary Data Store to enter read-only mode, which we subsequently cleared and set back to read/write, with no joy.

We also noticed that the missing Data Store would show up when validating in Server Admin or using the Describe Datastore tool, but not when accessing the Data Store configuration page at :2443/arcgis/datastore

Is it the Datastore or the ArcGIS Server that's doing the heavy lifting when serving up Hosted Feature Services to the Portal? I'm analysing the performance of our system, and looking at ways of spreading the load across our available machines. I read an article that said that the ArcGIS Server ArcSOC for a Hosted Feature Service is just a lightweight REST endpoint and the datastore does the hard work. But I'm not too sure about that.

It's a mix. Consider a traditional map service: with dedicated instances, publishing a service creates an ArcSOC.exe and starts consuming memory. If you start using it and the maximum instance count is higher than 1, it may create more ArcSOC.exe processes, and this consumes an increasing but roughly linear amount of memory. The CPU load depends on how often the service is called. The memory usage comes from the fact that it needs to 'make an image' from the data. The internal workflow will be something like:

Clearly, this is a gross simplification. In a hosted feature service, the 'burn layers' step is replaced by a 'format data' step (JSON/PBF). Manipulating text (data) is a much lighter-weight computational operation, and importantly it doesn't need a SOC for every service instance. In effect, a server used only for hosting services needs much less memory than a traditional ArcGIS Server.
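The contrast between the two request paths can be sketched as follows. This is illustrative pseudologic with stand-in function bodies, not Esri's actual implementation:

```python
import json

def query_data(extent):
    # Stand-in for the geodatabase / Data Store query.
    return [{"id": 1, "geometry": list(extent)}]

def handle_map_request(extent):
    # Traditional map service: a dedicated ArcSOC.exe queries the data,
    # then 'burns' the layers into an image (CPU- and memory-heavy).
    data = query_data(extent)
    image = [[0] * 256 for _ in range(256)]  # stand-in for rasterising
    return ("image/png", image)

def handle_feature_request(extent):
    # Hosted feature service: the Data Store does the data heavy lifting;
    # the server only reformats rows as JSON/PBF (lightweight text work).
    data = query_data(extent)
    return ("application/json", json.dumps({"features": data}))

ctype_map, _ = handle_map_request((0, 0, 1, 1))
ctype_feat, body = handle_feature_request((0, 0, 1, 1))
```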

In terms of which is doing most: the Data Store does the data-heavy lifting, with the hosting server doing the conversion to JSON/PBF. Obviously, if only a little data is requested then neither has a high workload, but as larger amounts of data are requested, both start to ramp up.

I think it would be fair to say that both work in tandem. The Data Store's workload is somewhat comparable to an enterprise geodatabase's, but a dedicated hosting server needs fewer resources than a traditional ArcGIS Server. Many of my smaller clients run Enterprise Portal, Hosting Server, and Data Store (the base deployment) on a single machine, and then traditional server roles on individual machines as required.

The issue we are having is that we reinstalled the Web Adaptor for an image server and gave it a new Web Adaptor name, e.g. server1 > Imageserver. This works and is accessible when appending the Web Adaptor name to the end of our Enterprise URL.

The issues start when trying to add this server, with its new Web Adaptor, as a federated server. The dialogue we receive back is 'Error - server is already federated'. We have tried unfederating the server via Portal Manager and also Server Manager, to no avail.

You should not unfederate your ArcGIS Server, because when you federate the same ArcGIS Server again, all the items in ArcGIS Server will be re-added with new item IDs, and this causes havoc with permissions, links, etc.

Hi Henry. Thanks for the reply. I followed the link towards the end of your comment and was able to successfully unregister the federated server, so thank you. I have now been able to add the server as federated again. However, the hosting server now won't validate the server-managed database when validating. Any ideas as to why this may be?

Hi @ArranGIS, the Data Store is probably still referencing the old URL; all you need to do is re-register it.

Go to _url.co.za/webadaptor_name/manager/site.html, then go to Data Store and click the x for Relational and, if present, Tile Cache.

We have a solution at the government agency I work for that needed to move to both HA and DR configurations (that San Antonio data-center lightning strike caused a 15-hour outage). We were avoiding the same single point of failure you mentioned, which stems from the file share; auto-recover doesn't work when the data center is down. We deploy multi-AZ primary and multi-AZ DR environments, so if a single data center gets its cooling systems shocked to death, or whatever, the other AZ is still humming along. If the whole region goes down, the IP forwards to the DR environment on the other side of the country.

At the time we began that project, EFS could not handle the volume of small, fast locks required by ArcGIS, so it was an unsupported configuration. We moved to testing SoftNAS, which works well. Because we need to stay on supported configurations (what's the point of Premier Support if you don't?), we engaged Esri Professional Services to get SoftNAS 'blessed', and also noted that since the time of the Docker instance you mentioned, EFS has been improved by AWS. The improvements allow for the many short, fast locks that ArcGIS needs, and the Professional Services team blessed it for use in prod; i.e., it is a supported option for the file share now. Professional Services tested both, and both SoftNAS and EFS now meet the need. Since we deploy primarily to GovCloud on AWS, we opted for EFS.

I'm just starting to look at this same question and came across your post. The performance of EFS looks theoretically better than using EBS on a single VM (which is what I understand the CloudFormation template uses). I plan to spin up both configurations as sandboxes and get some rough performance numbers. Hopefully someone else here can give you a more accurate response, but it's been a couple of months, so maybe not.
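For rough like-for-like numbers, a tiny metadata benchmark is often more telling than raw throughput, since the config-store workload is many small, fast file operations. A minimal sketch; the directory path is whatever mount point you are testing, and only the ratio between shares is meaningful, not the absolute figures:

```python
import os
import tempfile
import time

def small_file_ops_rate(directory, n=200):
    """Approximate ops/sec for create-write-delete of tiny files in
    `directory`. Run it against each candidate share (EBS-backed VM
    disk, EFS mount, ...) and compare the ratios."""
    start = time.perf_counter()
    for i in range(n):
        path = os.path.join(directory, f"probe_{i}.lock")
        with open(path, "w") as f:
            f.write("x")
        os.remove(path)
    elapsed = time.perf_counter() - start
    return n / elapsed

with tempfile.TemporaryDirectory() as d:  # substitute your mount point
    rate = small_file_ops_rate(d)
```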

Also, Azure (Premium) Files (File Storage, Microsoft Azure) would be the equivalent. I've just started using those on a different project for all shared folders. The initial impression is that they're holding up fine, but I don't have any hard numbers as yet.

There's a Windows file-share service called FSx that looks a little slower and more expensive than EFS. It's also not available in my region, which is a deal-breaker for me. At the moment that means I'm sticking with the VM file server.

Another alternative we have been using is two file-server VMs (Linux) with ObjectiveFS installed to keep them synchronised, then Samba to provide an SMB/NFS file share that can be mounted as a drive on each AGS VM.

As OFS uses S3 as its backend storage location (and I think EFS is using EBS), I think OFS actually works out cheaper for a small number of file-server nodes, which only need to be small EC2 Linux instances.
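The comparison comes down to per-GB storage price plus the fixed cost of the always-on file-server nodes that OFS needs. A back-of-envelope model; all unit prices below are placeholders, not current AWS rates, so substitute figures for your region:

```python
def monthly_cost(gb_stored, price_per_gb, node_cost=0.0, nodes=0):
    """Illustrative monthly cost: storage plus any always-on nodes.
    Every price passed in is a placeholder, not a quoted AWS rate."""
    return gb_stored * price_per_gb + nodes * node_cost

# Placeholder unit prices for a 100 GB config store:
efs = monthly_cost(100, price_per_gb=0.30)  # managed NFS, no nodes to run
ofs = monthly_cost(100, price_per_gb=0.025, node_cost=10.0, nodes=2)  # S3 + 2 small EC2 file servers
```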
