Your real compression performance will probably depend a lot on the data you are putting in. Is it all geometries? If you have a lot of non-spatial data (or a lot of text attributes for spatial points), then it doesn't really matter what you do to the geometries - you need to find some way to compress that data instead.
As others have said, I think you are going to struggle to find a format that meets your compression requirements. You would have to create your own custom format, which, given your requirement to use commercial software, is not going to be viable.
I think you first need to consider how you can make your data models more efficient, and then look at the compression aspects. For example, do you have a lot of repetition of geometry? You could then have a base set of geometry layers with unique IDs and separate attribute data sets that reference the geometry by ID - that way you can have multiple views of the same geometry serving specific functions, as sketched below. Most decent software packages will then allow you to create joins or relates in order to build the unified view for a layer.
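As a rough sketch of that idea (the file names, layer and column names here are made up for illustration), with GeoPandas you could keep a single geometry layer and join attribute-only tables onto it by ID:

```python
# Hypothetical example: one shared geometry layer plus attribute tables
# referenced by a unique feature ID, joined into task-specific views.
import geopandas as gpd
import pandas as pd

# The geometry is stored once, keyed by a unique ID.
parcels = gpd.read_file("parcels_geometry.gpkg")   # columns: parcel_id, geometry

# Attribute-only data sets that reference the geometry by ID.
zoning = pd.read_csv("parcel_zoning.csv")          # columns: parcel_id, zone_code
ownership = pd.read_csv("parcel_ownership.csv")    # columns: parcel_id, owner_name

# Build thematic "views" by joining attributes onto the single geometry set,
# rather than duplicating the geometry in every thematic layer.
zoning_view = parcels.merge(zoning, on="parcel_id", how="left")
ownership_view = parcels.merge(ownership, on="parcel_id", how="left")
```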
GML is a good example of a format that supports this kind of relational data model, though, being a verbose format, its file sizes will be large. You can compress GML using gzip and potentially get a 20:1 ratio, but then you are relying on the software being able to read compressed GML.
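As a quick illustration (the file name is a placeholder, and the actual ratio will depend entirely on your data), gzipping a GML file with Python's standard library looks something like this:

```python
# Compress a GML file with gzip and report the resulting ratio.
import gzip
import os
import shutil

src = "roads.gml"       # placeholder input path
dst = "roads.gml.gz"

with open(src, "rb") as fin, gzip.open(dst, "wb", compresslevel=9) as fout:
    shutil.copyfileobj(fin, fout)

ratio = os.path.getsize(src) / os.path.getsize(dst)
print(f"compression ratio: {ratio:.1f}:1")
```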
Regardless, I would urge you to first look at your data models and see where there could be savings to be had. FME from Safe Software is your best bet if you need to start manipulating your data models.
To achieve that sort of ratio, you could use some sort of lossy compression, but I don't know of anything that uses it, and although I have a couple of ideas on how one might implement it, it would be far from standard. It would be much, much cheaper to kit your server out with a 1 TB disk than to spend the time and money developing a custom solution.
You are also confusing data storage with data representation. Your 4th point mentions being able to view the data at different scales, but this is a function of your renderer, not the format per se. Again, a hypothetical lossily compressed file could store data at various resolutions in a sort of LoD structure, but that is likely to increase data size if anything.
If your data is to be on a server somewhere, accessible by mobile applications, you're far better off using existing tools that have been designed for the purpose. A WFS server (such as GeoServer or MapServer) is ideally suited to this sort of application. The client makes a request for data of a specific area, normally that covered by the screen, and the WFS sends vector data for just that area, so all the heavy lifting is done by the server. It's then up to the application to render that data.

An alternative would be to use the WMS features of MapServer and GeoServer, in which all the rendering is done by the server, which then sends an image tile to the client. This enables features such as server-side caching of tiles, as well as scale-dependent rendering, with the minimum of work by you.

They both read myriad formats, so you can author your data exactly how you like, store it where you like, and they do all the cool stuff. Quantum GIS also has a WMS server, so you can author and serve data all in the same application.
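For illustration, the client side of that WFS pattern might look something like the sketch below (the endpoint URL, layer name and bounding box are placeholders; the parameters follow common WFS 2.0 conventions):

```python
# Request vector features for just the visible map extent from a WFS server.
import requests

params = {
    "service": "WFS",
    "version": "2.0.0",
    "request": "GetFeature",
    "typeNames": "myworkspace:roads",               # hypothetical layer
    "bbox": "-1.60,53.75,-1.45,53.85,EPSG:4326",    # area covered by the screen
    "outputFormat": "application/json",
}

resp = requests.get("https://example.com/geoserver/wfs", params=params, timeout=30)
resp.raise_for_status()
features = resp.json()["features"]
print(f"received {len(features)} features for the visible extent")
```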
Ammonia is an important compound with many uses, such as in the manufacture of fertilizers, explosives and pharmaceuticals. As an archetypal hydrogen-bonded system, the properties of ammonia under pressure are of fundamental interest, and compressed ammonia has a significant role in planetary physics. We predict new high-pressure crystalline phases of ammonia (NH₃) through a computational search based on first-principles density-functional-theory calculations. Ammonia is known to form hydrogen-bonded solids, but we predict that at higher pressures it will form ammonium amide ionic solids consisting of alternate layers of NH₄⁺ and NH₂⁻ ions. These ionic phases are predicted to be stable over a wide range of pressures readily obtainable in laboratory experiments. The occurrence of ionic phases is rationalized in terms of the relative ease of forming ammonium and amide ions from ammonia molecules, and the volume reduction on doing so. We also predict that the ionic bonding cannot be sustained under extreme compression and that, at pressures beyond the reach of current static-loading experiments, ammonia will return to hydrogen-bonded structures consisting of neutral NH₃ molecules.
This study investigated the perceptual adjustments that occur when listeners recognize highly compressed speech. In Experiment 1, adjustment was examined as a function of the amount of exposure to compressed speech by use of 2 different speakers and compression rates. The results demonstrated that adjustment takes place over a number of sentences, depending on the compression rate. Lower compression rates required less experience before full adjustment occurred. In Experiment 2, the impact of an abrupt change in talker characteristics was investigated; in Experiment 3, the impact of an abrupt change in compression rate was studied. The results of these 2 experiments indicated that sudden changes in talker characteristics or compression rate had little impact on the adjustment process. The findings are discussed with respect to the level of speech processing at which such adjustment might occur.
Compressed sensing (CS) is a recent mathematical technique that leverages the sparsity of certain sets of data to solve an underdetermined system and recover a full set of data from a sub-Nyquist set of measurements. Given the size and sparsity of the data, radar has been a natural domain in which to apply compressed sensing, typically in the fast-time and slow-time domains. Polarimetric synthetic aperture radar (PolSAR) generates a particularly large amount of data for a given scene; however, the data tend to be sparse. Recently a technique was developed to recover a dropped PolSAR channel by leveraging antenna crosstalk information and using compressed sensing. In this dissertation, we build upon the initial concept of dropped-channel PolSAR CS in three ways. First, we determine a metric that relates the measurement matrix to the ℓ₂ recovery error; the new metric is necessary given the deterministic nature of the measurement matrix. We then determine the range of antenna crosstalk required to recover a dropped PolSAR channel. Second, we propose a new antenna design that incorporates the relatively high levels of crosstalk required by a dropped-channel PolSAR system. Finally, we integrate fast- and slow-time compression schemes into the dropped-channel model in order to leverage sparsity in additional PolSAR domains and increase the overall compression ratio. The completion of these research tasks has allowed a more accurate description of a PolSAR system that compresses in fast time, slow time, and polarization, termed herein highly compressed PolSAR. The description of a highly compressed PolSAR system is a significant step towards the development of prototype hardware in the future.
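As a generic illustration of the compressed-sensing idea described above (not the dissertation's dropped-channel method; sizes, sparsity level and the choice of orthogonal matching pursuit are arbitrary assumptions), a minimal sparse-recovery sketch might look like this:

```python
# Recover a sparse vector from fewer measurements than unknowns.
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(0)
n, m, k = 256, 80, 8                 # signal length, measurements, non-zeros

# Build a k-sparse signal and take m < n random linear measurements of it.
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n)) / np.sqrt(m)   # measurement matrix
y = A @ x                                      # sub-Nyquist measurements

# Greedy sparse recovery via orthogonal matching pursuit.
omp = OrthogonalMatchingPursuit(n_nonzero_coefs=k, fit_intercept=False)
omp.fit(A, y)
x_hat = omp.coef_
print("relative l2 error:", np.linalg.norm(x_hat - x) / np.linalg.norm(x))
```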
The development of equations of state and transport models in areas such as shock compression and fusion energy science is critical to DOE programs. A notable shortcoming in these activities is the treatment of phase transitions in highly compressed metals. Fully characterizing high-energy-density phenomena using pulsed-power facilities is possible only with complementary numerical modeling for design, diagnostics, and data interpretation.
This team constructed a multiscale simulation framework based on a combination of high-fidelity electronic structure data, machine learning, and molecular dynamics, enabling quantum-accurate, computationally efficient predictions. The framework provides the kinetics of magneto-structural phase transitions along shock Hugoniots and ramp-compression paths for the equations of state, as well as transport properties such as viscosity and electrical and thermal conductivities. Findings from this project were published in the Journal of Materials Science and npj Computational Materials.
Our columnar compression, available to all PostgreSQL databases via the TimescaleDB extension, transforms automatically created time-based partitions into a columnar format, optimizing storage space and query performance. By storing values from the same column together and compressing them with algorithms specialized for each column's data type, this method capitalizes on the natural tendencies of time-series data.
This approach resulted in tremendously efficient compression (95%+ compression rates) that allows developers to run very fast analytical queries while storing large volumes of time-series data cheaply in PostgreSQL.
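As a rough sketch of what enabling this looks like (the table name, column names, and connection settings are hypothetical; it assumes TimescaleDB's compression settings and compression-policy function), you might run something like:

```python
# Enable columnar compression on a hypothetical TimescaleDB hypertable
# named "conditions" and compress chunks older than seven days.
import psycopg2

conn = psycopg2.connect("dbname=tsdb user=postgres host=localhost")
with conn, conn.cursor() as cur:
    # Mark the hypertable as compressible: segment by device so each device's
    # readings are stored together, ordered by time within each segment.
    cur.execute("""
        ALTER TABLE conditions SET (
            timescaledb.compress,
            timescaledb.compress_segmentby = 'device_id',
            timescaledb.compress_orderby   = 'time DESC'
        );
    """)
    # Add a policy that compresses chunks once they are older than seven days.
    cur.execute("SELECT add_compression_policy('conditions', INTERVAL '7 days');")
```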
Time-series data is usually held up as the quintessential example of append-only data. Whether it's tracking stock prices, sensor readings, or website metrics, once a data point is recorded for a specific timestamp in a database, it usually remains unchanged, as the main mission of time-series data is to provide a chronological account.
A sensor might transmit corrupted data due to a temporary malfunction, and once it is fixed there might be a need to correct the historical data with accurate values; in financial settings, there might be restatements or corrections to historical data; a calibration error in a temperature sensor might require backfilling the previously recorded data with accurate temperature readings; or new IoT devices might require older data to be replaced with quality-controlled data, as was the case for @srstsavage in this GitHub issue.