FeedForward NN and update bias and weight manually


Maxime Girard

May 21, 2020, 2:48:39 PM
to Rust for TensorFlow
Hello,

I'm trying to build a genetic algorithm with a simple feed-forward NN.

After feeding the input layer and reading the output, I want to modify the bias and weight values on all of my perceptrons.


The examples use this method to build a hidden layer:

fn layer<O1: Into<Output>>(
    input: O1,
    input_size: u64,
    output_size: u64,
    activation: &dyn Fn(Output, &mut Scope) -> Result<Output, Status>,
    scope: &mut Scope,
) -> Result<(Weight, Bias, Output), Status> {
    let mut scope = scope.new_sub_scope("layer");
    let scope = &mut scope;
    // Weight matrix: input_size x output_size, randomly initialized.
    let w_shape = ops::constant(&[input_size as i64, output_size as i64][..], scope)?;
    let w = Variable::builder()
        .initial_value(
            ops::RandomStandardNormal::new()
                .dtype(DataType::Float)
                .build(w_shape.into(), scope)?,
        )
        .data_type(DataType::Float)
        .shape(Shape::from(&[input_size, output_size][..]))
        .build(&mut scope.with_op_name("w"))?;
    // Bias vector: one f32 per output neuron, zero-initialized.
    let b = Variable::builder()
        .const_initial_value(Tensor::<f32>::new(&[output_size]))
        .build(&mut scope.with_op_name("b"))?;
    // Return handles to both variables plus activation(input · w + b).
    Ok((
        Weight {
            variable: w.clone(),
        },
        Bias {
            variable: b.clone(),
        },
        activation(
            ops::add(
                ops::mat_mul(input.into(), w.output().clone(), scope)?.into(),
                b.output().clone(),
                scope,
            )?
            .into(),
            scope,
        )?,
    ))
}
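For context, here is roughly how I chain layer() for my 32 -> 20 -> 12 -> 4 network (just a sketch; the tanh activation and the variable names are my own choices, not from the examples):

```rust
// Sketch: chaining layer() to build 32 -> 20 -> 12 -> 4,
// keeping the Weight/Bias handles so I can modify them later.
let (w1, b1, h1) = layer(
    input.clone(),
    32,
    20,
    &|x, scope| Ok(ops::tanh(x, scope)?.into()),
    scope,
)?;
let (w2, b2, h2) = layer(h1, 20, 12, &|x, scope| Ok(ops::tanh(x, scope)?.into()), scope)?;
let (w3, b3, output) = layer(h2, 12, 4, &|x, scope| Ok(ops::tanh(x, scope)?.into()), scope)?;
```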

I want to know: how can we update the variables "w" and "b"?

I've tried a few things, but they don't work.

My questions are:

Is it possible to do this operation manually?
Should this operation be done when we run a session?

Currently I'm trying something like this, but it doesn't work:

    fn update_variable(&mut self) -> Result<[f32; 4], Box<dyn Error>> {
        // Tensor meant to hold the new bias values.
        let mut input_tensor = Tensor::<f32>::new(&[120]);
        for index in 0..20 {
            input_tensor[index] = 0.0;
        }
        let mut run_args = SessionRunArgs::new();
        // Try to feed the new values in through the variable's initializer op.
        run_args.add_feed(&self.bias_layer_1.variable.initializer(), 1, &input_tensor);

        let result_token = run_args.request_fetch(&self.output.operation, self.output.index);

        self.session.run(&mut run_args)?;
        let result_tensor: Tensor<f32> = run_args.fetch::<f32>(result_token)?;
        Ok([
            result_tensor.get(&[0]),
            result_tensor.get(&[1]),
            result_tensor.get(&[2]),
            result_tensor.get(&[3]),
        ])
    }


What is wrong with my method?

Should I use a Placeholder to hold the bias and weight values in order to update the variables afterwards?
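For example, I wonder if the right pattern is to build an explicit Assign op fed by a Placeholder at graph-construction time. Just a sketch of what I have in mind (assuming the crate's generated ops::assign and the ops::Placeholder builder behave this way; new_b, assign_b, and new_bias are names I made up):

```rust
// At graph-construction time, next to the layer() calls:
// a placeholder that will receive the new bias values at run time,
// and an Assign op wired to the bias variable.
let new_b = ops::Placeholder::new()
    .dtype(DataType::Float)
    .build(&mut scope.with_op_name("new_b"))?;
let assign_b = ops::assign(b.output().clone(), new_b.clone(), scope)?;

// At run time: feed the placeholder, run the Assign op as a target.
let new_bias = Tensor::<f32>::new(&[20]); // first hidden layer has 20 neurons
let mut run_args = SessionRunArgs::new();
run_args.add_feed(&new_b, 0, &new_bias);
run_args.add_target(&assign_b);
session.run(&mut run_args)?;
```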

Thanks a lot !

++

Maxime.





Maxime Girard

May 22, 2020, 2:20:38 PM
to Rust for TensorFlow
Hey, when I execute the code above, TensorFlow gives me this error:

---- machine_learning::tests::test_compute_nn_output stdout ----
Error: {inner:0x2ac66d7f480, OutOfRange: Node 'layer/b_2' (type: 'Assign', num of outputs: 1) does not have output 1}
thread 'machine_learning::tests::test_compute_nn_output' panicked at 'assertion failed: `(left == right)`
  left: `1`,
 right: `0`: the test returned a termination value with a non-zero status code (1) which indicates a failure', <::std::macros::panic macros>:5:6
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace



I'm too much of a beginner with TensorFlow to understand what it is saying.

As far as I can tell, I'm targeting the right bias variable (i.e. layer/b_2), and this layer contains 20 neurons.

For more information, my NN looks like this:

32 inputs -> 20 neurons in the first hidden layer -> 12 neurons in the second hidden layer -> 4 outputs

Can you explain to me what is wrong with my code that causes this error?

I'm just trying to update the bias values of the neurons in the first hidden layer.

++

Maxime.

