Hi Naked Scientists, I was just wondering - if planets like Jupiter are just gas giants, why is it they exert such enormous gravitational pull on surrounding matter, like the asteroid belt? Do they have a very large, dense core providing the pull or is the gas highly compressed contributing to the mass? Love the show, Orlando (Perth, Western Australia)
Dominic - Well, planets like Jupiter certainly do have cores. Jupiter, we think, has a rocky core that's about ten times more massive than the Earth. Jupiter itself is a really vast planet: it's got about 300 times the mass of the Earth and about ten times its radius, and most of that mass is a mixture of hydrogen and helium gas. That gas is very heavily compressed, and that's how Jupiter manages to be so very massive.
Actually, much of that hydrogen is in a state called metallic hydrogen, where the molecules are so compressed together that they form a lattice, and the electrons, rather than orbiting around individual hydrogen nuclei, can flow freely through it. That's why Jupiter has such a strong magnetic field - the electrons flowing through that metallic hydrogen produce the field.
Chris - How did it get all of that gas in the first place? Hydrogen and helium being so light, how did they manage to coalesce around Jupiter before it got big and had all of that gravity?
Dominic - That's an interesting question that people are actually still researching. But I think the best theory at the moment is that once a planet reaches a mass of about ten times that of the Earth, its gravitational field is then so strong that it can pull in the gas around it, and you get this sudden, catastrophic infall of material onto the planet. So any planet less than about ten times the mass of the Earth will tend to be rocky, like the inner planets of the Solar System, while any planet that creeps over that mass rapidly turns into one of these vast gas giants like Jupiter or Saturn.
Your real compression performance will probably depend a lot on the data you are putting in. Is it all geometries? If you have a lot of non-spatial data (or a lot of text attributes for spatial points), then it doesn't really matter what you do with the geometries - you need to find some way to compress that data instead.
As others have said, I think you are going to struggle to find a format that meets your compression requirements. You would have to create your own custom format, which, given your requirement to use commercial software, is not going to be viable.
I think you need to possibly first consider how you can make your data models more efficient, then look at the compression aspects. For example, do you have a lot of repetition of geometry? You could then have a base set of geometry layers with unique IDs and then separate attribute data sets that reference the geometry by ID - that way you can have multiple views of the same geometry serving specific functions. Most decent software packages will then allow you to create joins or relates in order to create the unified view for a layer.
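The "base geometry plus attribute tables referencing it by ID" idea can be sketched in a few lines. This is an illustration of the data model only - the WKT strings, table layouts, and function names are invented for the example, not any particular package's API:

```python
# Sketch of the "geometry by ID" model: each unique geometry is stored
# once, and per-theme attribute tables reference it by ID.

# Base geometry layer: one row per unique geometry (WKT for illustration).
geometries = {
    1: "POINT (115.86 -31.95)",   # stored once, however many themes use it
    2: "POINT (115.89 -31.98)",
}

# Separate attribute data sets that reference geometry by ID.
land_use = [
    {"geom_id": 1, "zone": "residential"},
    {"geom_id": 2, "zone": "industrial"},
]
ownership = [
    {"geom_id": 1, "owner": "City of Perth"},
]

def joined_view(attributes):
    """Emulate a join/relate: attach the shared geometry to each record."""
    return [
        {**rec, "geometry": geometries[rec["geom_id"]]}
        for rec in attributes
    ]

for row in joined_view(land_use):
    print(row["zone"], row["geometry"])
```

In a real package the `joined_view` step would be a join or relate configured in the software rather than code you write, but the saving is the same: repeated geometry is stored once and referenced many times.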
GML is a good example of a format that supports this kind of relational data model, though as it is a verbose format, file sizes will be large. You can compress GML using gzip and potentially get a 20:1 ratio, but then you are relying on the software being able to read compressed GML.
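You can get a feel for where ratios like 20:1 come from by gzipping a repetitive XML document in memory. The GML-like snippet below is synthetic - your real ratio depends entirely on your data:

```python
# Rough sketch: gzip a synthetic, GML-like document and report the
# compression ratio. Verbose, highly repetitive XML compresses very
# well, which is where figures like 20:1 come from.
import gzip

# Synthetic stand-in for one feature in a verbose GML feature collection.
feature = (
    '<gml:featureMember>'
    '<Road><gml:centerLineOf><gml:LineString>'
    '<gml:posList>1.0 2.0 3.0 4.0</gml:posList>'
    '</gml:LineString></gml:centerLineOf></Road>'
    '</gml:featureMember>'
)
gml = '<gml:FeatureCollection>' + feature * 1000 + '</gml:FeatureCollection>'

raw = gml.encode('utf-8')
packed = gzip.compress(raw)
ratio = len(raw) / len(packed)
print(f'{len(raw)} -> {len(packed)} bytes, ratio {ratio:.1f}:1')
```

Because every feature here is identical, the ratio is far better than you would see on real data; varied coordinates and attribute text compress much less.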
Regardless, I would urge you to first look at your data models and see where there could be savings to be had. FME from Safe Software is your best bet if you need to start manipulating your data models.
To achieve that sort of ratio, you could use some sort of lossy compression, but I don't know of anything that uses it, and although I have a couple of ideas on how one might implement it, it would be far from standard. It would be much, much cheaper to kit your server out with a 1 TB disk than to spend time and money developing a custom solution.
You are also confusing data storage with data representation. Your 4th point mentions being able to view the data at different scales, but this is a function of your renderer, not the format per se. Again, a hypothetical lossily compressed file could store data at various resolutions in a sort of LoD structure, but that is likely to increase data size if anything.
If your data is to be on a server somewhere accessible by mobile applications, you're far better off using existing tools that have been designed for the purpose. A WFS server (such as GeoServer or MapServer) is ideally suited to this sort of application. The client makes a request for data of a specific area, normally that covered by the screen, and the WFS sends vector data for just that area, so all the heavy lifting is done by the server. It's then up to the application to render that data.

An alternative would be to use the WMS features of MapServer and GeoServer, in which all the rendering is done by the server, which then sends an image tile to the client. This enables features such as server-side caching of tiles, as well as scale-dependent rendering, with the minimum of work by you.

Both servers read myriad formats, so you can author your data exactly how you like, store it where you like, and let them do all the cool stuff. Quantum GIS also has a WMS server, so you can author and serve data all in the same application.
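The "request data for just the screen's area" step boils down to a WFS GetFeature request with a bounding box. The sketch below builds such a URL from standard WFS 1.1.0 key-value parameters; the server URL and layer name are placeholders, not a real endpoint:

```python
# Sketch of a WFS GetFeature request restricted to the on-screen extent.
# The base URL and layer name below are hypothetical placeholders.
from urllib.parse import urlencode

def getfeature_url(base_url, layer, bbox, srs="EPSG:4326"):
    """Build a WFS 1.1.0 GetFeature URL limited to a bounding box."""
    params = {
        "service": "WFS",
        "version": "1.1.0",
        "request": "GetFeature",
        "typeName": layer,            # e.g. a GeoServer "workspace:layer"
        "srsName": srs,
        "bbox": ",".join(str(c) for c in bbox) + "," + srs,
    }
    return base_url + "?" + urlencode(params)

# Request only the extent currently covered by the client's screen.
url = getfeature_url(
    "http://example.com/geoserver/wfs",
    "demo:roads",
    (115.8, -32.0, 115.9, -31.9),
)
print(url)
```

The client issues one of these per pan or zoom, receives vector features for just that box, and renders them locally - the server does the spatial filtering.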
Compressed sensing (CS) is a recent mathematical technique that leverages the sparsity in certain sets of data to solve an underdetermined system and recover a full set of data from a sub-Nyquist set of measurements of the data. Given the size and sparsity of the data, radar has been a natural choice to apply compressed sensing to, typically in the fast-time and slow-time domains. Polarimetric synthetic aperture radar (PolSAR) generates a particularly large amount of data for a given scene; however, the data tends to be sparse. Recently a technique was developed to recover a dropped PolSAR channel by leveraging antenna crosstalk information and using compressed sensing. In this dissertation, we build upon the initial concept of the dropped-channel PolSAR CS in three ways. First, we determine a metric which relates the measurement matrix to the l2 recovery error. The new metric is necessary given the deterministic nature of the measurement matrix. We then determine a range of antenna crosstalk required to recover a dropped PolSAR channel. Second, we propose a new antenna design that incorporates the relatively high levels of crosstalk required by a dropped-channel PolSAR system. Finally, we integrate fast- and slow-time compression schemes into the dropped-channel model in order to leverage sparsity in additional PolSAR domains and overall increase the compression ratio. The completion of these research tasks has allowed a more accurate description of a PolSAR system that compresses in fast-time, slow-time, and polarization; termed herein as highly compressed PolSAR. The description of a highly compressed PolSAR system is a big step towards the development of prototype hardware in the future.
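The core idea the abstract builds on - exact recovery of a sparse signal from sub-Nyquist measurements - can be shown at toy scale. The example below is a deliberately tiny sketch, not the dissertation's PolSAR model: an 8-sample, 1-sparse signal is recovered from only 4 measurements by a single matching-pursuit step, using a hand-built deterministic ±1 measurement matrix:

```python
# Toy compressed-sensing illustration: a 1-sparse signal of length 8 is
# recovered exactly from 4 measurements via one matching-pursuit step.

n, m = 8, 4                       # signal length, number of measurements
x = [0.0] * n
x[5] = 3.0                        # 1-sparse ground truth

# Deterministic +/-1 measurement matrix whose columns are pairwise
# distinct and never opposite, so correlation identifies a column uniquely.
cols = [[1.0] + [1.0 if (j >> b) & 1 else -1.0 for b in range(3)]
        for j in range(n)]
A = [[cols[j][i] for j in range(n)] for i in range(m)]
y = [sum(A[i][j] * x[j] for j in range(n)) for i in range(m)]  # y = A x

# One matching-pursuit step: pick the column most correlated with y,
# then estimate its coefficient by projection (enough for a 1-sparse x).
def correlate(j):
    return sum(A[i][j] * y[i] for i in range(m))

best = max(range(n), key=lambda j: abs(correlate(j)))
coeff = correlate(best) / sum(A[i][best] ** 2 for i in range(m))

x_hat = [0.0] * n
x_hat[best] = coeff
print(x_hat == x)   # exact recovery from a sub-Nyquist measurement set
```

Real CS systems solve an ℓ1-regularized problem over far larger, noisier systems, and the dissertation's contribution concerns deterministic measurement matrices arising from antenna crosstalk - but the recovery principle is the one sketched here.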
The single-pixel imaging technique uses multiple patterns to modulate the entire scene and then reconstructs a two-dimensional (2-D) image from the single-pixel measurements. Inspired by the statistical redundancy of natural images - distinct regions of an image contain similar information - we report a highly compressed single-pixel imaging technique with a decreased sampling ratio. This technique superimposes an occluding mask onto the modulation patterns, so that only the unmasked region of the scene is modulated and acquired. In this way, we can reduce the number of modulation patterns by 75% in experiments. To reconstruct the entire image, we designed a highly sparse input and extrapolation network consisting of two modules: the first module reconstructs the unmasked region from one-dimensional (1-D) measurements, and the second module recovers the entire scene image by extrapolation from the neighboring unmasked region. Simulation and experimental results validate that sampling 25% of the region is enough to reconstruct the whole scene. Our technique achieves significant improvements in peak signal-to-noise ratio (PSNR) of 1.5 dB and structural similarity index measure (SSIM) of 0.2 over conventional methods at the same sampling ratios. The proposed technique can be widely applied on resource-limited platforms and to occluded scene imaging.
The development of equation-of-state and transport models in areas such as shock compression and fusion energy science is critical to DOE programs. A notable shortcoming in these activities is the treatment of phase transitions in highly compressed metals. Fully characterizing high energy density phenomena using pulsed power facilities is possible only with complementary numerical modeling for design, diagnostics, and data interpretation.