DCI's Statistics Section collects information, maintains databases and publishes reports on the insurance markets in Missouri. With this information, the department aims to facilitate the flow of insurance market information for consumers, insurance companies and departmental staff and to monitor the availability and affordability of insurance coverage in Missouri.
Complaint index report
Complaints filed against Missouri insurers during the past three years. Companies are categorized by line of business and type of complaint. Company performance is compared to the Missouri industry average.
Supplement data reports for property & casualty and life & health
Data on written premium, loss ratios, and other data for all property and casualty and life and health insurers licensed in Missouri for all major lines of business.
Autism report and ABA limits
Missouri law required the department to annually issue a report to the legislature assessing the impact of the autism mandate on the insurance marketplace. The portion of the statute requiring the report was subsequently repealed. The last report was released in 2019.
HMO report (Discontinued after 2016)
HMO data from the annual financial statements, such as premiums earned and costs incurred for health-related services, as well as enrollment data broken out by specified regions of the state.
Medical malpractice report
Aggregate claims for the three years before the report. Includes information on claim frequency, loss ratios by company, by type of insured, average dollar settlements, litigated claims, average time to close claims and other trends in medical malpractice.
Legal malpractice report
A 10-year summary by area of law for filed claims, major activity responsible for alleged errors or omissions, most significant reasons for claims, years admitted to practice and relationship of the insured to the claimant.
Product liability report
Aggregate claims analysis incorporating three years of data. Included are indemnity paid per claim, average loss expense, average initial reserve, average time to close claims, business classification loss experience, product indemnity analysis and resolution and expense of litigated claims.
Mortgage guaranty insurance report
Reports data from the most recent year and for more than two decades for both residential and commercial lines of mortgage guaranty insurance. The data includes earned premium by company, losses paid, outstanding claim reserves, IBNR reserves, contingency reserves, loaded loss ratios and true loss ratios.
First Steps Report
Senate Bill 500, passed in 2005, requires licensed health insurers and HMOs to include coverage in their benefit packages for services provided under the First Steps Program. This report details the number of children receiving private insurance coverage for the program and the total amount paid for children with private insurance coverage.
Missouri ZIP code insurance data for homeowners/dwelling fire, farm owners, mobile home, earthquake and private passenger automobile
Missouri law requires insurers writing personal lines insurance to file data by ZIP code, including exposures written, premium written, loss paid count and losses paid. Data is collected on an annual basis.
Medigap (Medicare supplement) experience data
Since 1982, companies writing Medigap insurance have reported their experience by policy number. Included are annual premium and loss experiences.
Commercial liability experience data
Provides information on companies writing commercial liability insurance, including a profitability report and a summary of the claims, closed and outstanding, for specific commercial lines of insurance.
The Department of Insurance compiles a sophisticated database of ZIP code information on premium sales, losses and exposures from company reports. Aggregate data (non-company specific) is available to the public by exposures written, premium written, number of losses and losses paid. Please email or call us at 573-526-2945 to place your request.
Stock price prediction using machine learning is the process of predicting the future value of a stock traded on a stock exchange in order to make a profit. With multiple factors involved, predicting stock prices with high accuracy is challenging, and this is where machine learning plays a vital role.
To begin with, we can use moving averages (or MA) to understand how the amount of history (the number of past data points) considered affects the model's performance. A simple moving average (SMA) computes the mean of the past N data points and takes this value as the prediction for the next, (N+1)-th value.
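As a minimal sketch, the SMA forecast described above can be implemented in a few lines of NumPy (the function name and toy price series are illustrative, not from the original article):

```python
import numpy as np

def sma_forecast(prices, window):
    """Predict each next value as the mean of the previous `window` prices."""
    prices = np.asarray(prices, dtype=float)
    preds = []
    for t in range(window, len(prices)):
        preds.append(prices[t - window:t].mean())
    return np.array(preds)

prices = [10.0, 11.0, 12.0, 13.0, 14.0, 15.0]
print(sma_forecast(prices, window=3))  # predictions for indices 3..5: [11. 12. 13.]
```

Note that the first `window` points have no prediction, since there is not yet enough history to average.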
Another moving average is the exponential moving average (EMA), giving more weight to the more recent samples. With this, we can look at more data points in the past and still not diminish the more recent trends in fluctuations.
The EMA is computed recursively as EMA(t) = k·P(t) + (1 − k)·EMA(t−1), where P(t) is the price at time t, k is the weight given to that data point, and EMA(t−1) represents the value computed from the past t−1 points. Because recent samples count more, the EMA generally tracks trends better than a simple MA. For a window of size N, the weight is computed as k = 2/(N+1).
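The recursion above can be sketched directly in Python; seeding the recursion with the first price as EMA(0) is a common convention and an assumption here:

```python
def ema_forecast(prices, window):
    """Exponential moving average with k = 2 / (window + 1),
    seeded with the first price as EMA(0)."""
    k = 2.0 / (window + 1)
    ema = prices[0]
    out = [ema]
    for p in prices[1:]:
        ema = k * p + (1 - k) * ema  # EMA(t) = k*P(t) + (1-k)*EMA(t-1)
        out.append(ema)
    return out

print(ema_forecast([10.0, 11.0, 12.0], window=3))  # k = 0.5 -> [10.0, 10.5, 11.25]
```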
While implementing these methods, we will see that EMA performs better than SMA, suggesting that assigning higher weights to more recent data points yields better results. For now, let us assume this holds for stock prices as time series data.
So considering more past data and giving more importance to newer samples, EMA performs better than SMA. However, given the static nature of its parameters, EMA might not perform well for all cases. In EMA, we have fixed the value of k (the weight/significance of past data), and it is linked with the window size N (how much past we wish to consider).
Setting these parameters manually is difficult, and hand-tuning them is impractical for this stock market prediction project. Instead, we can use more complex models that learn the significance of each past data point and optimize our predictions through weight updates during training. And when it comes to using past data to compute the future, the most immediate model that comes to mind is the LSTM!
A standard LSTM cell comprises three gates: the input, output, and forget gates. These gates learn their weights and determine how much of the current data sample should be remembered and how much of the previously learned content should be forgotten. This structure is an improvement over the earlier, simpler RNN model.
As seen in the equations below, i, f, and o represent the three gates: input, forget, and output. C is the cell state that preserves the learned data, which is given as output h. All of this is computed for each timestamp t, considering the learned data from timestamp (t-1).
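The equations referred to here follow the standard textbook formulation of the LSTM cell, where σ is the sigmoid function, ⊙ denotes elementwise multiplication, and the W and b terms are learned weights and biases:

```latex
i_t &= \sigma(W_i \, [h_{t-1}, x_t] + b_i) \\
f_t &= \sigma(W_f \, [h_{t-1}, x_t] + b_f) \\
o_t &= \sigma(W_o \, [h_{t-1}, x_t] + b_o) \\
\tilde{C}_t &= \tanh(W_C \, [h_{t-1}, x_t] + b_C) \\
C_t &= f_t \odot C_{t-1} + i_t \odot \tilde{C}_t \\
h_t &= o_t \odot \tanh(C_t)
```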
The forget gate decides what information and how much of it can be erased from the current cell state, while the input gate decides what will be added to the current cell state. The output gate, used in the final equation, controls the magnitude of output computed by the first two gates.
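A single forward step of this cell can be sketched in NumPy. This is a simplified illustration with all four gates packed into one weight matrix and toy, randomly initialized parameters; real frameworks such as Keras parameterize the kernels separately:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, W, b):
    """One LSTM forward step. W maps [h_prev; x_t] to the stacked
    pre-activations of the input, forget, output, and candidate gates."""
    z = W @ np.concatenate([h_prev, x_t]) + b
    H = h_prev.shape[0]
    i = sigmoid(z[0:H])              # input gate: what to add
    f = sigmoid(z[H:2 * H])          # forget gate: what to erase
    o = sigmoid(z[2 * H:3 * H])      # output gate: what to expose
    c_tilde = np.tanh(z[3 * H:4 * H])  # candidate cell content
    c_t = f * c_prev + i * c_tilde   # new cell state
    h_t = o * np.tanh(c_t)           # new hidden state / output
    return h_t, c_t

rng = np.random.default_rng(0)
H, D = 4, 3                          # hidden size, input size (toy values)
W = rng.normal(size=(4 * H, H + D)) * 0.1
b = np.zeros(4 * H)
h, c = lstm_step(rng.normal(size=D), np.zeros(H), np.zeros(H), W, b)
print(h.shape, c.shape)  # (4,) (4,)
```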
The RMSE is defined as RMSE = sqrt((1/N) · Σ (At − Ft)²). Looking closely at this formula, we can see that it aggregates the difference (or error) between the actual (At) and predicted (Ft) price values over all N timestamps into a single absolute measure of error.
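This metric is a one-liner in NumPy; the function name and the sample values below are illustrative:

```python
import numpy as np

def rmse(actual, predicted):
    """Root mean squared error over all N timestamps."""
    a = np.asarray(actual, dtype=float)
    f = np.asarray(predicted, dtype=float)
    return float(np.sqrt(np.mean((a - f) ** 2)))

print(rmse([1.0, 2.0, 3.0], [1.0, 2.0, 5.0]))  # sqrt(4/3) ~= 1.1547
```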
Load the CSV file as a DataFrame using Pandas. Since the data is indexed by date (each row represents data from a different date), we can also index our DataFrame by the date column. We have taken the data from March 2019 to March 2022. This will also challenge our model to work with the unpredictable changes caused by the COVID-19 pandemic.
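The loading step above can be sketched as follows. The actual file name and columns of the downloaded dataset are not specified here, so this sketch reads a tiny inline sample to stay self-contained; against the real file the call would be `pd.read_csv("<your_file>.csv", index_col="Date", parse_dates=["Date"])`:

```python
import io
import pandas as pd

# Stand-in for the downloaded stock CSV (column names are an assumption).
csv_text = """Date,Open,High,Low,Close
2019-03-01,174.28,175.15,172.89,174.97
2019-03-04,175.69,177.75,173.97,175.85
"""

# index_col makes the Date column the row index; parse_dates converts
# the date strings into proper datetime values.
df = pd.read_csv(io.StringIO(csv_text), index_col="Date", parse_dates=["Date"])
print(df.index.dtype)  # datetime64[ns] -- rows are indexed by date
```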
It will be challenging for a model in the stock prediction using machine learning project to correctly estimate the rapid changes that we can see in March 2020 and February 2022. We will focus on evaluating the model performance in predicting the more recent values after training it on the past data.
To plot these graphs, we use matplotlib to plot the DataFrame columns directly against the Date index. To keep things flexible and legible when plotting against dates, we convert the date strings into datetime format and control the tick spacing on the date axis: the locator's interval parameter defines the number of days between each tick.
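A minimal sketch of this plotting setup is shown below. The three-row DataFrame, the headless `Agg` backend, and the output file name are assumptions made to keep the example self-contained:

```python
import matplotlib
matplotlib.use("Agg")            # headless backend so the script runs without a display
import matplotlib.dates as mdates
import matplotlib.pyplot as plt
import pandas as pd

# Toy closing-price series indexed by date (illustrative values).
df = pd.DataFrame(
    {"Close": [174.97, 175.85, 175.53]},
    index=pd.to_datetime(["2019-03-01", "2019-03-04", "2019-03-05"]),
)

fig, ax = plt.subplots()
ax.plot(df.index, df["Close"])
# Place a tick every `interval` days and format the labels legibly.
ax.xaxis.set_major_locator(mdates.DayLocator(interval=2))
ax.xaxis.set_major_formatter(mdates.DateFormatter("%Y-%m-%d"))
fig.autofmt_xdate()              # rotate labels so they do not overlap
fig.savefig("close_price.png")
```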
We will be building our LSTM models using TensorFlow Keras and preprocessing our stock prediction data using scikit-learn. These imports are used in different steps of the entire process, but it is good practice to group the statements together; whenever we need to import something new, we can simply add the statement to this group.
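A typical grouped-import block for such a project might look like the following; the exact set of imports is an assumption, and the small scaling demo simply confirms the preprocessing import works:

```python
import numpy as np
import pandas as pd
from sklearn.preprocessing import MinMaxScaler   # scale prices to [0, 1]
from tensorflow.keras.models import Sequential   # linear stack of layers
from tensorflow.keras.layers import LSTM, Dense  # LSTM cells + output layer

# Quick sanity check of the preprocessing import: scale three prices to [0, 1].
scaler = MinMaxScaler()
scaled = scaler.fit_transform(np.array([[1.0], [2.0], [3.0]]))
print(scaled.ravel())  # [0.  0.5 1. ]
```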