Hi Naked Scientists, I was just wondering - if planets like Jupiter are just gas giants, why is it they exert such enormous gravitational pull on surrounding matter, like the asteroid belt? Do they have a very large, dense core providing the pull or is the gas highly compressed contributing to the mass? Love the show, Orlando (Perth, Western Australia)
Dominic - Well, planets like Jupiter certainly do have cores. Jupiter, we think, has a rocky core that's about 10 times more massive than Earth. Jupiter itself is a really vast planet: it's got about 300 times the mass of Earth and about 10 times the radius of the Earth, and most of that volume, and most of that mass, is a mixture of hydrogen and helium gas. That gas is very heavily compressed, and that's how Jupiter manages to be so very massive.
Actually, much of that hydrogen is in a state called metallic hydrogen, where the molecules are so compressed together that they form a lattice, and the electrons, rather than orbiting around individual hydrogen nuclei, can actually flow freely through that metallic hydrogen. That's why Jupiter has such a strong magnetic field - because the electrons flowing through the hydrogen carry currents which produce that magnetic field.
Chris - How did it get all of that gas in the first place? Hydrogen and helium being so light, how did they manage to coalesce around Jupiter before it got big and had all of that gravity?
Dominic - That's an interesting question that people are actually still researching, but I think the best theory at the moment is that when a planet gets to a mass of about ten times that of the Earth, its gravitational field is then so strong that it can pull in the gas around it, and you get this sudden, catastrophic fall of material onto the planet. So any planet that is less than ten times the mass of the Earth will tend to be rocky, like the inner planets of the solar system, while any planets that creep over that mass suddenly turn into these vast gas giants like Jupiter and Saturn.
Compressed sensing (CS) is a recent mathematical technique that leverages the sparsity of certain data to solve an underdetermined system and recover a full data set from a sub-Nyquist number of measurements. Given the size and sparsity of its data, radar has been a natural application for compressed sensing, typically in the fast-time and slow-time domains. Polarimetric synthetic aperture radar (PolSAR) generates a particularly large amount of data for a given scene; however, that data tends to be sparse. Recently, a technique was developed to recover a dropped PolSAR channel by leveraging antenna crosstalk information and using compressed sensing. In this dissertation, we build upon the initial concept of dropped-channel PolSAR CS in three ways. First, we determine a metric which relates the measurement matrix to the l2 recovery error; the new metric is necessary given the deterministic nature of the measurement matrix. We then determine the range of antenna crosstalk required to recover a dropped PolSAR channel. Second, we propose a new antenna design that incorporates the relatively high levels of crosstalk required by a dropped-channel PolSAR system. Finally, we integrate fast- and slow-time compression schemes into the dropped-channel model in order to leverage sparsity in additional PolSAR domains and increase the overall compression ratio. The completion of these research tasks allows a more accurate description of a PolSAR system that compresses in fast-time, slow-time, and polarization, termed herein highly compressed PolSAR. The description of a highly compressed PolSAR system is a significant step toward the future development of prototype hardware.
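For context, the canonical sparse-recovery problem behind this kind of work can be written, in standard compressed-sensing notation rather than the dissertation's own symbols, as

$$\hat{\mathbf{x}} = \arg\min_{\mathbf{x}} \|\mathbf{x}\|_1 \quad \text{subject to} \quad \|\mathbf{y} - \boldsymbol{\Phi}\mathbf{x}\|_2 \le \epsilon,$$

where $\mathbf{y} \in \mathbb{R}^m$ holds the sub-Nyquist measurements, $\boldsymbol{\Phi} \in \mathbb{R}^{m \times n}$ with $m < n$ is the measurement matrix, and $\mathbf{x}$ is the sparse signal to be recovered. The achievable l2 error $\|\hat{\mathbf{x}} - \mathbf{x}\|_2$ depends on properties of $\boldsymbol{\Phi}$, which is the kind of measurement-matrix-to-error relationship the first research task addresses.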
The development of equations of state and transport models in areas such as shock compression and fusion energy science is critical to DOE programs. A notable shortcoming in these efforts is the treatment of phase transitions in highly compressed metals. Fully characterizing high-energy-density phenomena using pulsed power facilities is possible only with complementary numerical modeling for design, diagnostics, and data interpretation.
This team constructed a multiscale simulation framework based on a combination of high-fidelity electronic structure data, machine learning, and molecular dynamics, enabling quantum-accurate, computationally efficient predictions. The framework provides the kinetics of magneto-structural phase transitions along shock Hugoniots and ramp-compression paths for the equations of state, as well as transport properties such as viscosity and electrical and thermal conductivities. Findings from this project were published in the Journal of Materials Science and npj Computational Materials.
Our columnar compression, available to all PostgreSQL databases via the TimescaleDB extension, transforms automatically created time-based partitions into a columnar format, optimizing storage space and query performance. By storing data values from the same column together and compressing them with specialized algorithms chosen according to each column's data type, this method capitalizes on the natural tendencies of time-series data.
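As a rough illustration of what this looks like at the SQL level (the table and column names below are hypothetical, chosen only for this sketch), you create a hypertable and then enable columnar compression on it, telling TimescaleDB how to group and order the compressed batches:

```sql
-- Hypothetical sensor-readings table used throughout these examples.
CREATE TABLE metrics (
    time        TIMESTAMPTZ      NOT NULL,
    device_id   TEXT             NOT NULL,
    temperature DOUBLE PRECISION
);

-- Turn it into a hypertable, automatically partitioned by time.
SELECT create_hypertable('metrics', 'time');

-- Enable columnar compression: rows sharing a device_id are stored together,
-- and each column is compressed with an algorithm suited to its data type.
ALTER TABLE metrics SET (
    timescaledb.compress,
    timescaledb.compress_segmentby = 'device_id',
    timescaledb.compress_orderby   = 'time DESC'
);
```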
This approach resulted in tremendously efficient compression (95%+ compression rates) that allowed developers to run very fast analytical queries while storing large volumes of time-series data cheaply in PostgreSQL.
Time-series data is usually the quintessential example of append-only data. Whether it's tracking stock prices, sensor readings, or website metrics, once a data point is recorded for a specific timestamp in a database, it usually remains unchanged, as the main mission of time-series data is to provide a chronological account.
A sensor might transmit corrupted data due to a temporary malfunction, and once fixed, there might be a need to correct the historical data with accurate values; in financial settings, there might be restatements or corrections to historical data; for a temperature sensor, a calibration error might require backfilling the previously recorded data with accurate temperature readings; or perhaps new IoT devices require older data to be replaced with quality-controlled data, as was the case for @srstsavage in this GitHub issue.
Another classic example of backfilling is production migrations. To migrate large databases with minimal downtime, we at Timescale often recommend following what we call the dual-write and backfill migration method, which consists of writing to both the source and target databases for some time while backfilling into the target database the time-series data needed to run the production application (e.g., data going back three months to enable user analytics). Once the user is ready for the switch, they can make it with minimal downtime.
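As a minimal sketch of the backfill step only (assuming the source database is reachable from the target as a postgres_fdw foreign table named old_metrics; the names and the three-month window are illustrative assumptions, and a dedicated backfill tool may be preferable in practice):

```sql
-- Copy the history needed by the production application from the source
-- database (exposed here as the foreign table old_metrics) into the
-- target hypertable.
INSERT INTO metrics (time, device_id, temperature)
SELECT time, device_id, temperature
FROM old_metrics
WHERE time >= now() - INTERVAL '3 months';
```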
Backfilling scenarios challenge the very essence of traditional compression methods for time series. Those methods often make compressed data effectively immutable, and if data is immutable post-compression, then any necessary corrections or additions require manual decompression and recompression or complex workarounds.
If we want to add a modern compression mechanism for time-series data in PostgreSQL that truly helps developers, it has to account for these disruptions in the traditional time-series data lifecycle, reflecting what actually happens in practice in a production setting.
We launched the first version of compression at the end of 2019 with TimescaleDB 1.5. This release laid down our foundational compression design: recognizing that time-series workloads access data in temporal order, we built an efficient columnar storage system by converting many wide rows of data (up to 1,000 at a time) into a single row, compressing each field (column) using dedicated algorithms.
This columnar compression engine is based on hypertables, which automatically partition your PostgreSQL tables by time. At the user level, you would simply indicate which partitions (chunks in Timescale terminology) are ready to be compressed by defining a compression policy.
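Continuing the hypothetical metrics table from above, a compression policy is just a call like the following (the seven-day interval is an illustrative choice, not a recommendation):

```sql
-- Automatically compress chunks once their data is older than 7 days.
SELECT add_compression_policy('metrics', INTERVAL '7 days');

-- Individual chunks can also be compressed manually if needed, e.g.:
-- SELECT compress_chunk(c)
-- FROM show_chunks('metrics', older_than => INTERVAL '7 days') AS c;
```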
In TimescaleDB 2.3, we started to improve the flexibility of this high-performing columnar compression engine by allowing INSERTs directly into compressed chunks. Our initial approach worked as follows.
With this approach, when new rows were inserted into a previously compressed chunk, they were immediately compressed row by row and stored in the internal compressed chunk. This new data, compressed as individual rows, was periodically merged with the existing compressed data and recompressed. The batched, asynchronous recompression was handled automatically within TimescaleDB's job-scheduling framework, ensuring that the compression policy continued to run efficiently.
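Using the same hypothetical table, a late-arriving row whose timestamp falls inside an already-compressed chunk can simply be inserted (the timestamp and values here are made up):

```sql
-- A late-arriving reading for a time range that has already been compressed.
-- It is accepted immediately and folded into the columnar batches later by
-- the background recompression job.
INSERT INTO metrics (time, device_id, temperature)
VALUES ('2023-01-15 08:30:00+00', 'device_42', 21.7);
```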
The newly introduced ability to change data that is already compressed breaks the traditional trade-off of having to plan your compression strategy around your data lifecycle. You can now modify already-compressed data without significantly impacting data ingestion, database designers no longer need to consider updates and deletes when creating a data model, and the data is directly accessible to application developers without post-processing.
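For example, again using the hypothetical metrics table (the device names and date ranges are made up), corrections and deletions can now be issued directly against compressed data:

```sql
-- Correct a mis-calibrated sensor by adjusting its readings in place,
-- even though the affected chunks are compressed.
UPDATE metrics
SET temperature = temperature + 0.5
WHERE device_id = 'device_42'
  AND time >= '2023-01-01' AND time < '2023-02-01';

-- Remove all readings from a decommissioned sensor.
DELETE FROM metrics
WHERE device_id = 'device_13';
```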
You can now have time-series data (with compression enabled) in your OLTP database. The traditional paradigm is to treat time-series data as OLAP data, mainly because the technology has required the data to be immutable in order to work at scale.
In practice, this not only means a much better developer experience but also that our columnar compression can be used in many more use cases. We'll now dive into a few examples inspired by our users.
Consider a manufacturing company that has an IoT infrastructure in place to monitor machine efficiency in real time. The company decides to expand its IoT network by adding new sensors to monitor additional parameters, such as machine vibration, noise levels, and air quality.
Traditionally, backfilling months of estimated values for these new parameters would have clashed with compression, since the relevant historical chunks are already compressed. With the advanced capabilities of TimescaleDB 2.11, however, backfilling becomes a straightforward process: the company can simulate or estimate the data for the new parameters for the preceding months and seamlessly insert it into the already compressed historical dataset.
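A sketch of what that might look like with the hypothetical metrics table (the new columns, the staging table estimated_readings, and the six-month window are all assumptions for illustration):

```sql
-- Add the new parameters to the existing, already-compressed hypertable.
ALTER TABLE metrics ADD COLUMN vibration DOUBLE PRECISION;
ALTER TABLE metrics ADD COLUMN noise_db  DOUBLE PRECISION;

-- Backfill estimated values for the preceding months straight into
-- compressed chunks, from a hypothetical staging table.
INSERT INTO metrics (time, device_id, vibration, noise_db)
SELECT time, device_id, vibration, noise_db
FROM estimated_readings
WHERE time >= now() - INTERVAL '6 months';
```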