Are you using the Layers API? If so, it is fairly easy to freeze the weights of a given layer by setting its `trainable` property to `false`.
```js
const model = tf.sequential();
model.add(tf.layers.dense({units: 4, inputShape: [2], activation: 'relu'}));
model.add(tf.layers.dense({units: 1}));

const layer0 = model.getLayer(null, 0);
const layer1 = model.getLayer(null, 1);

// Freeze the first layer.
layer0.trainable = false;

// Before training, print the values of the first and second layers' weights.
console.log('=== Weights before training: ===');
layer0.getWeights()[0].print();
layer0.getWeights()[1].print();
layer1.getWeights()[0].print();
layer1.getWeights()[1].print();

const xs = tf.tensor2d([[1, 2], [3, 4]]);
const ys = tf.tensor2d([[-5], [6]]);

model.compile({loss: 'meanSquaredError', optimizer: 'sgd'});
model.fit(xs, ys, {epochs: 100}).then(history => {
  // After training, print the weight values again.
  console.log('=== Weights after training: ===');
  layer0.getWeights()[0].print();
  layer0.getWeights()[1].print();
  layer1.getWeights()[0].print();
  layer1.getWeights()[1].print();
  // Notice how the weights of `layer0` remain the same after training,
  // while the weights of `layer1` change.
});
```
This API is consistent with Python Keras.