Dynamic Linear Models for modeling Alpha and Beta of the Stock


Vlad

Nov 13, 2021, 6:26:39 PM11/13/21
to TensorFlow Probability

Hi Team,

I’d like to model relationships between stocks by using Dynamic Linear Models.

Let’s assume we have historical price data for stocks X and Y, and that the relationship between their daily returns is defined by:

Y_returns = alpha(t) + beta(t) * X_returns + noise

 

Average alpha and beta over the whole period can easily be calculated with simple linear regression, but I’d like to see how they change dynamically (assuming that they change relatively slowly).
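For reference, the static baseline I have in mind is just OLS, e.g. this minimal NumPy sketch (x_returns and y_returns are placeholder arrays of daily returns):

import numpy as np

# Design matrix: a column of 1s (for alpha) next to the X returns (for beta).
design = np.stack([np.ones_like(x_returns), x_returns], axis=-1)
# Least squares gives the average alpha and beta over the whole period.
(alpha_hat, beta_hat), *_ = np.linalg.lstsq(design, y_returns, rcond=None)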

 

Can you advise how to construct such a model and retrieve dynamic alpha and beta from it?

 

Regards,

Vladimir 

Junpeng Lao

Nov 14, 2021, 4:50:02 AM11/14/21
to TensorFlow Probability, vol...@gmail.com
Depending on how you model the dynamics of alpha and beta, there are many different ways to express the DLM using tfd.LinearGaussianStateSpaceModel and tfp.sts.

In the simplest case, where we model alpha and beta as Gaussian random walks, we can use tfp.sts.DynamicLinearRegression. Usually, we write a function like:

import tensorflow as tf
import tensorflow_probability as tfp
tfd = tfp.distributions

def make_dlm(x, observed=None):
  # x is the time series with shape [N_timesteps, 1].
  # Append a column of 1s:
  design_matrx = tf.concat([tf.ones_like(x), x], axis=-1)
  dlm = tfp.sts.DynamicLinearRegression(design_matrx,
                                        observed_time_series=observed)
  return tfp.sts.Sum([dlm])
y_sts = make_dlm(x, observed=y)

For inference, I usually run MCMC:

pinned_jd = y_sts.joint_distribution(observed_time_series=y)
run_mcmc = tf.function(
    tfp.experimental.mcmc.windowed_adaptive_nuts,
    autograph=False, jit_compile=True)

n_chains = 4
mcmc_samples, sampler_stats = run_mcmc(
    500, pinned_jd, n_chains=n_chains, num_adaptation_steps=1000)

After inference, you can reconstruct the dynamics of alpha and beta with a Kalman filter. There are a few options here depending on whether you want to use the forward, backward, or forward+backward pass, but the simplest is:

y_lgssm = y_sts.make_state_space_model(num_timesteps=num_steps,
                                       param_vals=mcmc_samples)
smoothed_means, smoothed_covs = tf.function(y_lgssm.posterior_marginals)(y)

latent_dist = tfd.MultivariateNormalFullCovariance(
    smoothed_means, smoothed_covs)
latent_sample = latent_dist.sample()

Here latent_sample has shape [num_mcmc_samples, num_mcmc_chains, num_timesteps, latent_dim], where latent_dim = 2 (the first component is alpha, the second is beta).
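For example, to turn the samples into point estimates of the paths, something like this sketch (assuming matplotlib) averages over MCMC draws and chains:

import matplotlib.pyplot as plt

# Posterior-mean paths, averaging over MCMC draws and chains.
alpha_t = tf.reduce_mean(latent_sample[..., 0], axis=[0, 1])
beta_t = tf.reduce_mean(latent_sample[..., 1], axis=[0, 1])

fig, (ax0, ax1) = plt.subplots(2, 1, sharex=True)
ax0.plot(alpha_t.numpy())
ax0.set_ylabel('alpha(t)')
ax1.plot(beta_t.numpy())
ax1.set_ylabel('beta(t)')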

I wrote an end-to-end example here that you should find helpful: colab link

Best,
Junpeng

vol...@gmail.com

Nov 14, 2021, 11:01:59 AM11/14/21
to Junpeng Lao, TensorFlow Probability, Vladimir Bykovnikov

That’s awesome! Many thanks Junpeng!

It looks like the model estimates alpha and beta well at the beginning but starts to deviate towards the end of the curve.

I replaced alpha and beta with deterministic functions and got similar results (https://colab.research.google.com/drive/1oJrLJkNlQD95C1co_cg5pxkQMC48b0oR?usp=sharing).

 

Do you have an idea what could possibly lead to this deviation?

 

Regards,

Vladimir

Junpeng Lao

Nov 14, 2021, 11:29:02 AM11/14/21
to TensorFlow Probability, vol...@gmail.com, Junpeng Lao
I think the model is slightly unidentifiable, which is why in the regions where alpha is under-estimated, beta is over-estimated, and vice versa:

[image: estimated alpha(t) and beta(t) showing the compensating deviations]




I think a better model will help here, for example modeling alpha(t) with a different latent dynamic; it seems to work better with a Local Linear Trend model.



vol...@gmail.com

Nov 15, 2021, 3:22:55 PM11/15/21
to Junpeng Lao, TensorFlow Probability, Vladimir Bykovnikov

Hi Junpeng,

Fantastic! The new model is much better! Thank you very much for your help!

 

As far as I understand, in both examples you are using forward+backward smoothing (posterior_marginals, wrapped with tfd.MultivariateNormalFullCovariance).

 

Can you advise how to implement forward-only filtering for the model?

 

BTW, with forward-only filtering, would it be correct to say that any estimate of alpha(t) and beta(t) uses only information from the past (i.e. it can be used for a backtest)?

 

Regards,

Vladimir

 


Junpeng Lao

Nov 15, 2021, 4:16:42 PM11/15/21
to vol...@gmail.com, TensorFlow Probability

> Can you advise how to implement forward-only filtering for the model?

 

> BTW, with forward-only filtering, would it be correct to say that any estimate of alpha(t) and beta(t) uses only information from the past (i.e. it can be used for a backtest)?

Exactly. posterior_marginals basically runs the backward pass after the forward filter: https://github.com/tensorflow/probability/blob/76eaa7e1fd949c4796d2da70143f35c82f37c465/tensorflow_probability/python/distributions/linear_gaussian_ssm.py#L1055-L1061
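For forward-only estimates you can call the LGSSM's forward_filter directly; a rough sketch, reusing y_lgssm from the earlier snippet:

# Forward (Kalman) filtering only: filtered_means[..., t, :] conditions
# on observations up to and including time t, so no future information leaks in.
(log_likelihoods,
 filtered_means, filtered_covs,
 predicted_means, predicted_covs,
 observation_means, observation_covs) = y_lgssm.forward_filter(y)

filtered_dist = tfd.MultivariateNormalFullCovariance(
    filtered_means, filtered_covs)

# For a strict backtest you may prefer the one-step-ahead predictive
# quantities, predicted_means/predicted_covs, which condition only on t-1.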


vol...@gmail.com

Nov 21, 2021, 9:09:50 AM11/21/21
to Junpeng Lao, TensorFlow Probability, Vladimir Bykovnikov

Hi Junpeng,

Thank you again this is exciting!

 

Basically I am trying to implement the Capital Asset Pricing Model (CAPM) example described in the book by Petris et al. (Section 3.3.3, p. 101), which is available for download from:

https://www.researchgate.net/publication/226410454_Dynamic_Linear_Models_with_R (2009)

 

The goal would be to get a graph for alpha and beta like the one shown in the book (attached below).

 

The next step would be to go a bit further: the example in the book assumes that alpha is time-invariant, so I am trying to see if it works for a time-varying alpha as well.

 

In your approach you feed the model the absolute values of X and Y, but the example in the book operates on monthly returns (from the table https://raw.githubusercontent.com/rstudio/ai-blog/main/_posts/2019-06-25-dynamic_linear_models_tfprobability/data/capm.txt).

 

I’ve found a couple of implementations of the model: one in R and another in TF, though the latter is not complete:

https://blogs.rstudio.com/ai/posts/2019-06-25-dynamic_linear_models_tfprobability/

https://github.com/tensorflow/probability/issues/1068

 

I am using the following code to feed your model:

 

import numpy as np
import pandas as pd
import tensorflow as tf

capm_path = "https://raw.githubusercontent.com/rstudio/ai-blog/main/_posts/2019-06-25-dynamic_linear_models_tfprobability/data/capm.txt"
capm_data = pd.read_fwf(capm_path)

# Excess returns of the dependent (IBM) and independent (market) series.
dep_var_ts = (capm_data["IBM"] - capm_data["RKFREE"]).values
ind_var_ts = (capm_data["MARKET"] - capm_data["RKFREE"]).values

num_steps = len(ind_var_ts)
x = tf.constant(np.float32(np.reshape(ind_var_ts, (-1, 1))))
y = tf.constant(np.float32(np.reshape(dep_var_ts, (-1, 1))))

 

Can you advise what needs to be changed in your model to operate on the monthly returns and get a graph like the one in the book (ideally for both alpha and beta)?

[attached images: the alpha and beta graphs from the book]

Junpeng Lao

Nov 21, 2021, 12:55:16 PM11/21/21
to vol...@gmail.com, TensorFlow Probability
Hi Vladimir,
I don't think you need to change anything in the model, as the setup of the dynamic linear regression model here makes no assumption about whether the data is daily or monthly.
However, with monthly data you will have fewer data points, so a more informative prior would likely help model convergence.
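For example, tfp.sts.DynamicLinearRegression accepts a drift_scale_prior; a sketch (the particular LogNormal here is just a placeholder, not a tuned choice):

# A tighter prior on how fast the coefficients are allowed to drift
# encodes the belief that alpha and beta change slowly.
dlm = tfp.sts.DynamicLinearRegression(
    design_matrix=design_matrx,
    drift_scale_prior=tfd.LogNormal(loc=-4., scale=1.),
    observed_time_series=observed)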
Best,
Junpeng

vol...@gmail.com

Nov 21, 2021, 3:10:16 PM11/21/21
to Junpeng Lao, TensorFlow Probability, Vladimir Bykovnikov

Hi Junpeng,

I agree with you that using monthly vs. daily data should not make any significant difference for the purpose of the exercise.

 

The main difference in the example from the book is that they use returns (relative changes), x(t)/x(t-1) - 1 instead of the prices of x, and y(t)/y(t-1) - 1 instead of the prices of y, in the model.
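(In pandas that transformation is just, e.g., with prices_x and prices_y as placeholder price series:)

# Simple returns: x(t)/x(t-1) - 1.
x_returns = prices_x.pct_change().dropna()
y_returns = prices_y.pct_change().dropna()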

 

I believe some changes in your model are still needed to accommodate the new data.

With the CAPM data, and without any changes, the model produces very different results:

https://colab.research.google.com/drive/1hZC8FqwHpkwRaFCtYH0Ragy6UToQJS98?usp=sharing

compared to the ones shown in the book (see Beta for IBM):

[attached images: the book's alpha and beta figures for comparison]

Junpeng Lao

Nov 22, 2021, 1:08:04 PM11/22/21
to vol...@gmail.com, TensorFlow Probability
Hi,

The current model is actually exactly the same as the one in the R blogpost, which means both alpha and beta vary as a function of time. I think the reason it looks different is that the samples are on a somewhat larger scale than the latent mean; if you just plot the smoothed_mean of beta, it looks like this:
[image: smoothed mean of beta]

I also tried a few different things to match what Petris et al. did in Section 3.3.3; for example, they set alpha constant across time and also multiplied the values by 100. But actually, with dynamic alpha, constant alpha, or alpha = 0, the beta estimates look similar (figure above; I did multiply the values by 100, which seems to make a bit of difference):
def make_dlm(x, observed=None, model_type='dynamic_alpha'):
  comp = []
  if model_type == 'dynamic_alpha':
    # x is the time series with shape [N_timesteps, 1].
    # Append a column of 1s:
    design_matrx = tf.concat([tf.ones_like(x), x], axis=-1)
  else:
    # 'no_alpha' (i.e. alpha = 0) or 'constant_alpha': no intercept column here.
    design_matrx = x
  if model_type == 'constant_alpha':
    # Time-invariant alpha as a static regression on a column of 1s.
    reg = tfp.sts.LinearRegression(tf.ones_like(x))
    comp.append(reg)

  dlm = tfp.sts.DynamicLinearRegression(design_matrx,
                                        observed_time_series=observed)
  comp.append(dlm)
  return tfp.sts.Sum(comp)

y_sts = make_dlm(x, observed=y, model_type='no_alpha')

Furthermore, you can reproduce figure 3.10 completely with the parameters provided in the book; an exact replication of the R code in TFP looks like the following:

y = tf.constant(
    capm_data.iloc[:, :4].values - capm_data['RKFREE'].values[..., None],
    dtype=tf.float32)
market = tf.constant(
    (capm_data['MARKET'] - capm_data['RKFREE']).values,
    dtype=tf.float32)

m = y.shape[-1]

# Transition matrix G: random-walk dynamics for the 2*m latent coefficients.
transition_matrix = tf.linalg.LinearOperatorIdentity(2 * m)

# Transition noise covariance W: zero block for the (constant) alphas,
# W_beta for the betas.
W_beta = tf.constant(
    [[8.153e-07, -3.172e-05, -4.267e-05, -6.649e-05],
     [-3.172e-05, 0.001377, 0.001852, 0.002884],
     [-4.267e-05, 0.001852, 0.002498, 0.003884],
     [-6.649e-05, 0.002884, 0.003884, 0.006057]],
    dtype=tf.float32
)
W = tf.linalg.LinearOperatorBlockDiag([
    tf.linalg.LinearOperatorZeros(m),
    tf.linalg.LinearOperatorLowerTriangular(tf.linalg.cholesky(W_beta)),
])
transition_noise = tfd.MultivariateNormalLinearOperator(scale=W)

# Time-varying observation matrix F_t = [1, x_t] (x) I_m.
def observation_matrix(t):
  x_t = market[t]
  F = tf.linalg.LinearOperatorFullMatrix([[1., x_t]])
  return tf.linalg.LinearOperatorKronecker(
      [F, tf.linalg.LinearOperatorIdentity(m)]
  )

# Observation noise covariance V.
V = tf.constant(
    [[41.06, 0.01571, -0.9504, -2.328],
     [0.01571, 24.23, 5.783, 3.376],
     [-0.9504, 5.783, 39.2, 8.145],
     [-2.328, 3.376, 8.145, 39.29]],
    dtype=tf.float32
)
observation_noise = tfd.MultivariateNormalTriL(
    scale_tril=tf.linalg.cholesky(V))

initial_state_prior = tfd.MultivariateNormalDiag(
    scale_diag=tf.ones([2 * m]) * 1e2)

capm_lgssm = tfd.LinearGaussianStateSpaceModel(
    num_timesteps=y.shape[0],
    transition_matrix=transition_matrix,
    transition_noise=transition_noise,
    observation_matrix=observation_matrix,
    observation_noise=observation_noise,
    initial_state_prior=initial_state_prior
)

smoothed_means, smoothed_covs = tf.function(capm_lgssm.posterior_marginals)(y)
smoothed_mean_df = pd.DataFrame(smoothed_means[:, -4:].numpy(),
                                index=capm_data.index,
                                columns=capm_data.iloc[:, :4].columns)
ax = smoothed_mean_df.plot(figsize=(10, 6), lw=2.5, color=['r', 'g', 'b', 'cyan'])
ax.hlines(1., *ax.get_xlim(), ls='--');
[image: smoothed beta estimates, reproducing Figure 3.10]

You can then specify the full model as a JointDistribution and run inference; that part still needs a bit more tweaking (maybe a more informative prior) for HMC/NUTS to run (I am not familiar with the inference method they use in the book).
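Roughly, the joint model could look like this sketch (untested; the HalfNormal priors and diagonal noise covariances are placeholders for the book's full covariance matrices, and it reuses m, y, transition_matrix, observation_matrix, and initial_state_prior from the snippet above):

@tfd.JointDistributionCoroutineAutoBatched
def capm_joint():
  # Placeholder priors on the transition and observation noise scales.
  drift_scale = yield tfd.HalfNormal(scale=tf.ones([m]), name='drift_scale')
  obs_scale = yield tfd.HalfNormal(scale=10. * tf.ones([m]), name='obs_scale')
  yield tfd.LinearGaussianStateSpaceModel(
      num_timesteps=y.shape[0],
      transition_matrix=transition_matrix,
      # Alphas stay constant (zero drift); betas follow a random walk.
      transition_noise=tfd.MultivariateNormalDiag(
          scale_diag=tf.concat([tf.zeros([m]), drift_scale], axis=0)),
      observation_matrix=observation_matrix,
      observation_noise=tfd.MultivariateNormalDiag(scale_diag=obs_scale),
      initial_state_prior=initial_state_prior,
      name='y')

pinned = capm_joint.experimental_pin(y=y)
# ... then run e.g. tfp.experimental.mcmc.windowed_adaptive_nuts on `pinned`,
# as in the earlier snippet.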

PS: Two friends and I recently wrote a book, Bayesian Modeling and Computation in Python, with a time series chapter that covers state space models / dynamic linear regression in TFP. You might find it helpful as well :-)

Best,
Junpeng




