Adjoint Method


lucas sanches

Jul 24, 2024, 8:32:45 AM
to OpenVSP
  Hello, I am working with gradient-based aerodynamic shape optimization, and I would like to know if the adjoint method has been implemented in OpenVSP and, if so, whether it is available for the Python API.

Thank you. 
Lucas.   

lucas sanches

Jul 24, 2024, 8:35:31 AM
to OpenVSP
I have seen that it was being developed. Is it already done?

Rob McDonald

Jul 24, 2024, 12:38:45 PM
to OpenVSP
VSPAERO has an adjoint capability.  It has not been integrated into the OpenVSP GUI or the API.  To use it, you must run VSPAERO from the command line.

This capability is very much experimental.  In fact, it has all been ripped out and re-done.  The updated version will be released at the end of the year, or early next year.

Rob

lucas sanches

Jul 30, 2024, 9:32:15 AM
to OpenVSP
Is there any tutorial to learn how to use the adjoint method via the command line? 

Lucas S.

Rob McDonald

Jul 30, 2024, 2:33:11 PM
to ope...@googlegroups.com
Unfortunately, there is not.  That feature is entirely experimental and there will be substantial change before it is documented, supported, and ready for general use.

Rob



Brian Guenter

Aug 15, 2024, 12:08:59 PM
to OpenVSP
Rob,

I have expertise in automatic differentiation and am happy to help if I can (see https://github.com/brianguenter/FastDifferentiation.jl for my open source automatic differentiation project, or here https://www.researchgate.net/profile/Brian-Guenter for a couple of papers about AD algorithms and applications, among many other things).

-brian

Rob McDonald

Aug 16, 2024, 12:54:03 PM
to OpenVSP
Thanks for the offer.

Dave (the VSPAERO author) has things well under control.  His first pass at adding a derivative capability to VSPAERO (what is released now) was based on an AD tool (Adept 2).  This works, but the performance was unsatisfactory.

Consequently, he went through and re-wrote the whole thing from scratch by hand.  The new version (not yet released) is _much_ faster -- it really is incredible.  There are a lot of other changes that will come with this update, so there is no point in me updating OpenVSP to support the current adjoint version of VSPAERO because everything is going to change soon anyway.

We may look at adding a derivative capability to OpenVSP in the future.  I don't know if you've ever taken a close look at OpenVSP's architecture and how things generally work, but I would appreciate any thoughts you might have about setting up OpenVSP for differentiation.

Rob

Brian Guenter

Aug 16, 2024, 6:15:35 PM
to OpenVSP
I haven't taken a close look at the OpenVSP architecture but I did look at the OpenVSP github repo. There didn't seem to be much in the way of developer documentation. Are there white papers or other documentation I could take a look at?

I'm biased toward my own algorithm, FastDifferentiation, but I'll be happy to talk about other, more conventional AD algorithms if FastDifferentiation doesn't work for you.
 
The FastDifferentiation algorithm is quite different from conventional AD. First the good:

- It computes a true symbolic derivative.  This can be handy for gaining intuition about the derivative expressions, and it gives you an expression you can do further symbolic analysis and simplification on.

- It compiles the symbolic derivative expression into a very efficient executable.  I've benchmarked FastDifferentiation against 4 or 5 other AD solutions in the Julia ecosystem and it is as fast as, or faster than, all of them.  When it is faster it can be by orders of magnitude (see the docs on the FastDifferentiation repo for benchmarks).

- It generates very efficient sparse or dense Jacobians, gradients, and Hessians.  There is no reason to avoid computing the full Jacobian.

- You can easily generate any sparse subset of partials you need, and it will generate efficient code for evaluating them.

- It computes J*v and transpose(J)*v.

- It can compute derivatives of arbitrary order, although performance is not great beyond the 4th derivative or so.

- It is easy for end users to figure out how to compute a derivative; none of this forward-over-reverse magic.

Now the less good:

- It's a compiler: if you are only going to evaluate the derivative once, then the amortized execution speed, including the compilation time, is slow.  You need a problem where the derivative will be evaluated enough times to make the compilation time not matter.

- It does not currently support conditionals, although I am writing the code now to add this feature.

- Because it is a compiler, there are limits on the size of the expressions you can differentiate.  This is primarily an LLVM limitation, since compilation time gets very long when the source gets big.  If the expression graph representing your derivative is smaller than 10^5 (or maybe as much as 10^6) operations, it should take at most a minute or two to compile the derivative executable function, and correspondingly less time as the expression gets smaller.  Evaluating the compiled derivative function is extremely fast.

- It's written in Julia.  It would have to be rewritten in C++, or else you'd have to call the Julia code from C++, which would add a dependency on FastDifferentiation.

- It dynamically compiles and executes derivative code.  In Julia this is trivial.  I don't know how hard this is in C++.  I've written a similar, less efficient, algorithm in C# so I'm sure there is a way to get dynamic compilation working in C++.


Here's a simple example:

```julia
julia> @variables x y # comment: declare variables
y

julia> f = [cos(x)*sin(y)-log(x),cos(x/y)]
2-element Vector{FastDifferentiation.Node}:
 ((cos(x) * sin(y)) - log(x))
                 cos((x / y))

julia> jac = jacobian(f,[x,y]) # comment: note true symbolic form of Jacobian
2×2 Matrix{FastDifferentiation.Node}:
 ((sin(y) * -(sin(x))) + -((1 / x)))                     (cos(x) * cos(y))
         (-(sin((x / y))) * (1 / y))  (-(sin((x / y))) * -(((x / y) / y)))

julia> exe = make_function(jac,[x,y]) # comment: the text below is the source of the runtime generated function which will be compiled the first time exe is called
RuntimeGeneratedFunction(#=in FastDifferentiation=#, #=using FastDifferentiation=#, :((input_variables,)->begin
                      result_element_type = promote_type(Float64, eltype(input_variables))
                      result = Array{result_element_type}(undef, (2, 2))
                      var"##234" = sin(input_variables[2])
                      var"##236" = sin(input_variables[1])
                      var"##235" = -var"##236"
                      var"##233" = var"##234" * var"##235"
                      var"##238" = 1 / input_variables[1]
                      var"##237" = -var"##238"
                      var"##232" = var"##233" + var"##237"
                      result[CartesianIndex(1, 1)] = var"##232"
                      var"##242" = input_variables[1] / input_variables[2]
                      var"##241" = sin(var"##242")
                      var"##240" = -var"##241"
                      var"##243" = 1 / input_variables[2]
                      var"##239" = var"##240" * var"##243"
                      result[CartesianIndex(2, 1)] = var"##239"
                      var"##245" = cos(input_variables[1])
                      var"##246" = cos(input_variables[2])
                      var"##244" = var"##245" * var"##246"
                      result[CartesianIndex(1, 2)] = var"##244"
                      var"##249" = var"##242" / input_variables[2]
                      var"##248" = -var"##249"
                      var"##247" = var"##240" * var"##248"
                      result[CartesianIndex(2, 2)] = var"##247"
                      return result
                  end
              end
      end))

julia> exe([1.0, 1.0]) # comment: evaluate the compiled jacobian at x = 1.0, y = 1.0
2×2 Matrix{Float64}:
 -1.70807   0.291927
 -0.841471  0.841471
```

Brian Guenter

Aug 16, 2024, 6:23:05 PM
to OpenVSP
Just realized that you may not need dynamic compilation, unless you are allowing arbitrary user-defined functions you have to differentiate through. If you have a small set of predefined API functions you need to differentiate, then you can run the analysis/compilation phase offline and generate C++ source which you can compile into your project. I've done exactly this in the past.
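
For instance, the statically generated source for the 2x2 Jacobian in the example above might look roughly like this in C++ (hand-written here for illustration, not actual FastDifferentiation output):

```cpp
// Hand-written illustration of offline-generated derivative source:
// the 2x2 Jacobian of f = [cos(x)*sin(y) - log(x), cos(x/y)].
#include <cmath>

void jacobian_f( double x, double y, double J[2][2] )
{
    const double s_xy = std::sin( x / y );                 // shared subexpression

    J[0][0] = std::sin( y ) * -std::sin( x ) - 1.0 / x;
    J[0][1] = std::cos( x ) * std::cos( y );
    J[1][0] = -s_xy * ( 1.0 / y );
    J[1][1] = -s_xy * -( ( x / y ) / y );
}
```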

Rob McDonald

Aug 16, 2024, 8:33:44 PM
to OpenVSP
Thanks for the in-depth reply, it'll take me a few reads to fully digest...

I have no particular opinions guiding my preference for one AD tool over another.  I'm happy to be educated as to the benefits of your approach.

There are some slides on the general code layout here

A video recording of an older version of that talk is online here 
https://www.youtube.com/watch?v=7LJkIhVVY9A

These are probably best used together -- watch the video and click through the updated slides to get an approximation of an updated experience...



OpenVSP is a large event driven GUI application.  Including libraries that are 'ours', it entails about 300,000 lines of C++.  Since it is event driven, there is no simple way to describe the flow of execution.

Most numerical codes that one would consider using AD tools have relatively simple control flow.
1) Set up the problem (set variables or read a text file)
2) Do the algorithm
3) Dump the results to a file.

This is clearly not the case for OpenVSP.  In the application, control flow can go in any direction at any time -- it isn't useful to think of differentiating that.


OpenVSP's code is roughly divided into three chunks:

1) Geometry core
2) GUI
3) 3D Graphics stuff (OpenGL)

It is possible to compile just the geometry core into a stand-alone executable, or a library that can be linked into other programs.

I think we would not want to do any AD on parts 2) and 3).  There really is no point and they are hopelessly complex and irrelevant to the differentiation workflow that matters to us.

Ideally, we don't need to touch much of 1) (only parts of it).  Unfortunately, some of our core libraries will undoubtedly get pulled into the mix.


The relevant use case is based around optimization.  Basically speaking, the non-differentiated code path is something like this...

1) Build a model up, or read in a *.vsp3 file.
2) Update() the model -- this generates the Piecewise Bezier Surfaces that are the true geometry, and then also generates a wireframe from that to be displayed on screen.
3) Do some analysis starting from the Bezier surfaces or the Wireframe, or export data based on those to some file format.


A Bezier surface is a parametric surface.  Consequently (x,y,z) = f(u,v) -- the x,y,z coordinates of a point on the surface are a function of u,v.  The Bezier surface is defined by a finite number of control points (Cp_i,j), each a point in x,y,z.  If we enumerate the x,y,z components as k, then we have C_p,i,j,k for the control points.  The surface evaluation could more properly be written as X_k = f(C_pk, u, v) -- where C_pk is a matrix of control points across i,j for a given surface.
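
For a single bicubic patch, that evaluation is just a tensor-product Bernstein sum over the 4x4 control points.  A rough sketch in my own notation (not the actual Code-Eli implementation):

```cpp
// Rough sketch, my notation only: evaluate X_k = f(C_pk, u, v) for one
// bicubic patch -- a 4x4 grid of control points, each an (x,y,z) triple.
#include <array>

using Vec3  = std::array<double, 3>;
using Patch = std::array<std::array<Vec3, 4>, 4>;   // Cp[i][j]

// Cubic Bernstein basis B_i(t), i = 0..3.
static double Bern( int i, double t )
{
    const double c[4] = { 1.0, 3.0, 3.0, 1.0 };
    double r = c[i];
    for ( int n = 0; n < i; ++n )     r *= t;
    for ( int n = 0; n < 3 - i; ++n ) r *= ( 1.0 - t );
    return r;
}

// X(u,v) = sum_i sum_j B_i(u) * B_j(v) * Cp_ij  -- linear in the control points.
Vec3 EvalPatch( const Patch &Cp, double u, double v )
{
    Vec3 X = { 0.0, 0.0, 0.0 };
    for ( int i = 0; i < 4; ++i )
        for ( int j = 0; j < 4; ++j )
        {
            const double w = Bern( i, u ) * Bern( j, v );
            for ( int k = 0; k < 3; ++k )
                X[k] += w * Cp[i][j][k];
        }
    return X;
}
```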


So, the derivative codepath will be something like this...

1) Build a model up, or read in a *.vsp3 file.

2) Identify one or more Parms (OpenVSP variables are called parameters) to take derivatives with respect to (call them P_a).  All Parms in a model have a unique ID.  At the time OpenVSP is compiled, we would not know which Parms will be selected.  If it made a huge difference, we could theoretically know which Parms at compilation time, but this means that ordinary users would need to be able to easily re-compile OpenVSP on the fly.

3) Update() the model -- this generates the Piecewise Bezier Surfaces 
4) UpdateDerivative() -- calculate the derivative of the control points with respect to the identified Parms, i.e. d C_p,i,j,k / d P_a.  It should be obvious that the product ni*nj*nk is going to be much larger than na, so traditionally forward mode would be preferred.  Also, if we serialized i,j,k to, say, index b, I would expect dCpb/dPa to be very sparse.
5) Evaluate the derivative of the Bezier surface at a set of user-supplied u,v points:  d Xk / d P_a = f( d C_pb / d P_a, u, v).  (A sketch of this follows below.)
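
Because that Bernstein sum is linear in the control points, step 5 is just the same evaluation applied to the per-patch control point derivatives (again, sketch notation only):

```cpp
// Step 5 sketch: because EvalPatch() above is linear in its control points,
//   d X_k / d P_a = sum_i sum_j B_i(u) * B_j(v) * d Cp_ijk / d P_a,
// so feeding the per-patch dCp/dPa array through the same evaluation gives
// the surface derivative at (u,v).
Vec3 EvalPatchDeriv( const Patch &dCp_dPa, double u, double v )
{
    return EvalPatch( dCp_dPa, u, v );
}
```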


That is probably where we will stop for an initial proof of concept.  Taking the derivative of all the possible downstream analyses is intractable.  We'll have to handle them one at a time as demand requires.

We would want an API call to do 5) repeatedly, or at least for a large vector of u,v points.  Likewise, we would want to be able to do 4,5 for different Parms, or at least a moderate size vector Pa.


I can almost convince myself that we only ever need to calculate derivatives from the API -- never from an interactive session.  Any interactive session needs could dump a *.vsp3 file and call the library version.  It would be faster to use the same model that already exists in memory, but the simplification may be worth it.

I believe that use cases that mix normal operation with occasional derivative operation will either incur some overhead -- duplicate computation and holding the model in memory twice -- or require going to elaborate lengths to prevent such things.  But that overhead may be unavoidable.

One idea would involve wrapping the Geometry core in a namespace at compile time.  That way we could have normal::foo() and derivative::foo() at the same time.  We would still have to duplicate things in memory and re-compute, but it could help to keep the code touched by AD contained.
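
Something like the following toy sketch (made-up names, certainly not a worked-out design) is what I have in mind -- the same core source gets compiled twice, once per scalar type:

```cpp
// Toy sketch of the namespace idea.  The same core source is compiled twice by
// binding a Scalar alias before each inclusion; here the shared "core" is a
// macro for brevity, but in practice it would be an #include of the same file.
#include <complex>

#define GEOM_CORE_BODY \
    inline Scalar ChordScale( const Scalar &chord, const Scalar &x ) \
    { return chord * x; }

namespace normal     { using Scalar = double;               GEOM_CORE_BODY }  // normal::ChordScale
namespace derivative { using Scalar = std::complex<double>; GEOM_CORE_BODY }  // derivative::ChordScale

// The derivative build could bind Scalar to a complex-step type (as here), a
// dual number, or an operator-overloading AD tool's type.
```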


This is the sort of thing I'm most interested in your thoughts -- how do you approach modifying a huge complex program for automatic differentiation when vast swaths of code do not need to be involved in AD?

Rob

Brian Guenter

Aug 17, 2024, 1:09:41 PM
to OpenVSP
Rob,

I'll take a look at the docs you sent pointers to.

Here's how you'd use a FastDifferentiation based system with your code (you may know some or all of this but let's put it in writing so everybody has a common baseline of understanding):

1. Make your functions generic enough to accept a new kind of number that contains the information necessary to compute derivatives.  For FastDifferentiation it's a Node struct.  For example, using Julia syntax, if your original function was bezier(points::Vector{Float64}), where points is a vector of floating point numbers, you'd change that to be something generic enough to accept both Node and Float64 arguments.  In Julia this would be bezier(points::Vector{<:Real}).  That one change to the type declaration of the argument is all that's necessary in Julia.  I think you can do something similar in C++ but I'm not a C++ programmer.

2. Override all the standard math operators to accept and return this new kind of number.  For example you'd have

    function Base.:+(a::Node, b::Node)
        return Node(+, a, b)
    end

    and a bazillion more like this for the other standard operations.  In Julia this boilerplate code is easy to generate using a macro.  I assume something similar would work in C++.

3. Call your functions with Node struct arguments instead of Float values.  For example, if your original function returned a single floating point number, that function called with Node arguments will create and return a graph of Node structs.  This expression graph is a symbolic representation of your function (see the C++ sketch after this list).

4. Apply the FastDifferentiation differentiation algorithm to compute derivatives of this expression graph.  This returns a new expression graph of Node structs.

5. Convert the new expression graph into source code.  This should be easy even in C++, maybe 100-200 lines of code (famous last words -- at least it's less than 200 lines in Julia).

6. If you only need a fixed set of derivative functions then statically compile the generated source into a .cpp file and add that to your project.  If the derivative functions change at run time then you need to dynamically compile, link, and load the generated source.
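
As a rough C++ analogue of steps 2 and 3 (toy types and names, not FastDifferentiation's actual implementation), the overloaded operators build an expression graph instead of doing arithmetic:

```cpp
// Toy sketch: operators on NodePtr build a symbolic expression graph.
#include <memory>
#include <string>
#include <vector>

struct Node
{
    std::string op;                                 // "w", "h", "+", "*", ...
    std::vector<std::shared_ptr<Node>> children;    // operands
};
using NodePtr = std::shared_ptr<Node>;

inline NodePtr Variable( const std::string &name ) { return std::make_shared<Node>( Node{ name, {} } ); }
inline NodePtr operator+( const NodePtr &a, const NodePtr &b ) { return std::make_shared<Node>( Node{ "+", { a, b } } ); }
inline NodePtr operator*( const NodePtr &a, const NodePtr &b ) { return std::make_shared<Node>( Node{ "*", { a, b } } ); }

// Step 3: a templated function called with NodePtr arguments returns a
// symbolic expression graph rather than a number.
template <typename T>
T Area( const T &w, const T &h ) { return w * h; }

// Area( Variable("w"), Variable("h") ) yields the graph (* w h), which the
// differentiation pass (step 4) can then transform into derivative graphs.
```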

Now to answer your question about how to change a large code base to support AD. Only apply step 1 to functions that you want to AD enable. Obviously all functions called by the functions you AD enable also have to be AD enabled. All the new code for steps 2-6 can be kept separate from the existing code base.
 
I should mention that once you've got the code to create expression graphs of Node structs then it is easy, trivial actually, to do the equivalent of either forward or reverse AD on those expression graphs. The result will be a new symbolic expression representing the derivative of the original function, which you can then compile statically or dynamically. 

These symbolic forms of forward and reverse will generate derivative code that is at least as fast as any conventional forward or reverse AD. For f:𝐑¹->𝐑ᵐ  the forward method will be close to optimal; for f:𝐑ⁿ->𝐑¹ the reverse method will be close to optimal.

This would be the easiest way for you to start. Do steps 1-3 and instead of using the FastDifferentiation differentiation algorithm in step 4 use symbolic forward or reverse AD, either of which is much simpler than FastDifferentiation. Then do steps 5 and 6.

If you're not getting the speed you need then you could implement the FastDifferentiation differentiation algorithm and swap it in for step 4.

Here, for example, is all the code you need to do reverse AD on Node expression graphs (I don't expect you to understand all the Julia syntax; just look at the total length of the code).  Forward is equally easy:

```julia
function reverse_AD(a::DerivativeGraph, variable_order::AbstractVector{<:Node})
    @assert length(roots(a)) == 1 #only works for Rⁿ->R¹ functions

    let visited = Dict{Int64,Tuple{Int64,Node}}()
        all_vars = Vector{Node}(undef, length(variables(a)))

        function _reverseAD(a::DerivativeGraph, curr_deriv::Node, curr_node::Int64, all_vars, visited)
            if (tmp = get(visited, curr_node, nothing)) === nothing
                visited[curr_node] = (1, curr_deriv)
            else
                visit_count, val = visited[curr_node]
                visited[curr_node] = (visit_count + 1, val + curr_deriv)
            end

            visit_count, val = visited[curr_node]

            if visit_count < length(parent_edges(a, curr_node))
                return
            else
                for c_edge in child_edges(a, curr_node)
                    _reverseAD(a, val * value(c_edge), bott_vertex(c_edge), all_vars, visited)
                end

                if is_variable(node(a, curr_node))
                    all_vars[variable_postorder_to_index(a, curr_node)] = val
                end
            end
        end


        _reverseAD(a, one(Node), root_index_to_postorder_number(a, 1), all_vars, visited)

        result = Vector{Node}(undef, length(variable_order))

        #now map variable values to variable_order
        for (i, node) in pairs(variable_order)
            if (tmp = variable_node_to_index(a, node)) === nothing
                result[i] = zero(Node)
            else
                result[i] = all_vars[tmp]
            end
        end

        return result
    end
end
export reverse_AD
```

Brian Guenter

Aug 17, 2024, 1:38:52 PM
to OpenVSP
If you decide you want to try this, I can explain in detail how the symbolic forms of forward and reverse work. They are easy to understand and easy to implement.

Brian Guenter

Aug 17, 2024, 2:05:01 PM
to OpenVSP
One other consideration is whether your code calls functions defined in some library you don't control. For example, assume the function you want to AD enable calls SVD from the Lapack library. You can't modify the SVD program to accept new number types. Instead you will have to write a wrapper around the SVD call that runs custom derivative code. This is something you'd have to do for all types of AD, not just FastDifferentiation-like algorithms.

In Julia a lot of this work has been done for you in ChainRules.jl https://github.com/JuliaDiff/ChainRules.jl/tree/main. Presumably there is something similar in the C++ world.
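
As a rough C++ sketch of the wrapper idea with a forward-mode dual type (all names hypothetical; the library routine is just a stand-in):

```cpp
// Sketch of wrapping a black-box library call with a hand-supplied derivative
// rule.  The AD type never enters the library; the wrapper applies the chain
// rule using a separately known derivative.
#include <cmath>

struct Dual { double val; double der; };            // value and d(value)/d(input)

// Stand-in for a routine from a library we cannot modify; a real body is
// given here only so the sketch is self-contained.
double ext_kernel( double x )       { return std::sin( x ); }
double ext_kernel_deriv( double x ) { return std::cos( x ); }   // hand-coded rule

inline Dual ext_kernel( const Dual &x )
{
    // chain rule: d/dP ext_kernel(x(P)) = ext_kernel'(x) * dx/dP
    return { ext_kernel( x.val ), ext_kernel_deriv( x.val ) * x.der };
}
```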

Rob McDonald

Aug 17, 2024, 6:58:12 PM
to ope...@googlegroups.com
Thanks again for the information.

Right now, I'm just trying to better understand how automatic differentiation would work in a giant code base like ours -- without making everything absolutely hideous.  Any actual development effort would be at least months away.  

I've done minimalistic autodiff (complex variable approach) in the past.  I understand how an operator overloading approach would work.  I appreciate the additional description of how your approach works.

Unfortunately, OpenVSP is not written as a function with a clearly defined entry / exit point.  Real life is far more messy than that.

I agree that most operator overloading approaches should be a drop-in replacement for one another (including a complex variable approach) and I can likely proceed by just trying to get one of those to work.  From a testing point of view, I will want to implement at least a complex variable approach too.

Thanks again,

Rob




Brian Guenter

Aug 18, 2024, 12:13:22 PM
to OpenVSP
Rob,

You mention that "OpenVSP is not written as a function with a clearly defined entry / exit point". By that do you mean that you use instance functions which destructively modify the contents of the instances? I poked around a bit in the code base and this seems to be the case. Looking at a random bit of code for the Airfoil class:

```cpp
void Airfoil::UpdateCurve( bool updateParms )
{
    m_OrigCurve = m_Curve;

    Matrix4d mat;
    mat.scale( m_Chord() );

    m_Curve.Transform( mat );
```

The call of m_Curve.Transform( mat ) appears to modify the m_Curve field in place rather than having a nice functional form.

Thinking aloud: to AD enable your code to support forwarddiff-style AD, using a DualNumber or complex number instead of float, you'd not only have to modify your functions to accept DualNumbers, you'd also have to change every class declaration. The easiest way to do this would be to parameterize your classes and instantiate them with DualNumbers, for example as in this modified snippet of code from VspCurve.h:

```cpp
template<class T> class VspCurve
{
public:

    T FindDistant(T &u, const vec3d &pt, const T &d, const T &u0) const;
```

Apologies if the C++ syntax is not quite correct. If you are using external matrix or math classes that are themselves parameterized, then they should support DualNumbers easily. If the external classes are not parameterized then that would be much harder, maybe too hard to be practical.
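
To make the "new kind of number" concrete, here is a minimal forward-mode dual-number sketch (assumed names, not OpenVSP code or any particular AD package):

```cpp
// Minimal dual number (illustrative only).  Each value carries its derivative
// with respect to the chosen Parm; the overloaded operators propagate both.
#include <cmath>

struct DualNumber
{
    double val;   // ordinary value
    double der;   // d(val)/d(Parm)
};

inline DualNumber operator+( const DualNumber &a, const DualNumber &b )
{ return { a.val + b.val, a.der + b.der }; }

inline DualNumber operator*( const DualNumber &a, const DualNumber &b )
{ return { a.val * b.val, a.der * b.val + a.val * b.der }; }      // product rule

inline DualNumber sin( const DualNumber &a )
{ return { std::sin( a.val ), std::cos( a.val ) * a.der }; }      // chain rule

// Seeding the Parm of interest with der = 1.0 (and everything else with 0.0)
// makes any templated geometry code instantiated with DualNumber return both
// the value and its derivative:
//   DualNumber chord{ 4.0, 1.0 }, x{ 0.25, 0.0 };
//   DualNumber y = chord * x;   // y.val = 1.0, y.der = d(chord*x)/d(chord) = 0.25
```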

You could do the same thing with a FastDifferentiation style AD. Your new number type would have a conventional float field and a Node field. Your overloaded operators would do both the conventional float operation and also the Node operation. When you want to compute a derivative you'd look to see if the graph analysis had already been done on the Node field, which means you'd need a flag field in the Node object to tell you this. If it hadn't been computed then you'd run the graph analysis and generate and run the exe. 

Not as simple as a forwarddiff style, for sure.

However, forwarddiff style is not as simple as it first seems. The ForwardDiff.jl package, for example, mentions a problem called perturbation confusion (https://juliadiff.org/ForwardDiff.jl/dev/user/advanced/#Custom-tags-and-tag-checking) that arises when you have nested differentiation. This appears to be an issue with most (all?) forwarddiff implementations.

Here is a paper which describes the problem https://www.bcl.hamilton.ie/~barak/papers/ifl2005.pdf. Here's a long discussion of how the ForwardDiff.jl authors eventually (incompletely) addressed the problem https://github.com/JuliaDiff/ForwardDiff.jl/issues/83. I don't understand the details but somehow they assign a tag associating operations with functions so nested differentiation operations mostly work correctly.

Maybe perturbation confusion wouldn't be a problem for the way you intend to use forwarddiff in OpenVSP. But it would be worth thinking through; this kind of bug would be maddeningly difficult to fix if you didn't understand the basic issue.

The FastDifferentiation style does not suffer from perturbation confusion.

Rob McDonald

Aug 18, 2024, 6:27:22 PM
to OpenVSP
Yes, we destructively modify the contents of instances frequently.  However, at the start of an evaluation, instances are always re-initialized to a fixed state, so we shouldn't have a problem carrying derivative information forward from the previous evaluation.

The description of the perturbation confusion problem indicates that it applies to functional programming and to situations where the functions use the differentiation operator itself.  OpenVSP does not currently use a functional style of programming -- and we do not currently use a differentiation capability anywhere, so there is nothing for it to nest with.  I certainly see how, using a perturbation-based approach, you are limited to taking a single derivative at a time.

OpenVSP's execution is very dynamic.  Not only would we need to be able to select the differentiation variable dynamically, but there are many ways that the user can dramatically change the dependence of variables -- from simple things like attachment in a model to more sophisticated things like linking and advanced linking.

OpenVSP has many library dependencies.  Fortunately, most of them are related to graphics or analysis capabilities.  Only a few are used in direct computation of bodies.

For NACA 6-series airfoils, we use an old NASA Fortran code that has been converted to C with f2c.  We would need to go through and modify that.

We use Eigen, a large and sophisticated linear algebra tool.  It is fully templated and is known to be operator overloading AD friendly.

We develop our own curve and surface library called Code-Eli.  It is mostly templated and will be our responsibility to modify.

The biggest challenge will be our built-in scripting tool, AngelScript.  Users have the ability to write scripts in AngelScript that are used in Advanced Links and also in Custom Components.

Of course these scripts are not known at compile time, so using a compile time approach would require the end-user be capable of compiling OpenVSP -- which is not a simple process.  If we require that using AD can be done by an end user without access to a compiler, then I don't see how a compile-time approach could work for us.

Rob

Brian Guenter

Aug 19, 2024, 10:07:27 AM
to OpenVSP
Not being able to count on a user having a compiler definitely eliminates the FastDifferentiation approach, at least if you want to give users the ability to differentiate through arbitrary script code. You still have a choice between forward and reverse, both of which can be implemented without having to compile code. 

In principle reverse is more efficient for f:𝐑ⁿ->𝐑¹. However, my experience with the few reverse AD packages I've used has been that the relative simplicity and low overhead of forward makes it faster or nearly as fast until your function domain dimension, n, is pretty large. I found this table

    Dimension of input, n      1       8       29      50
    Relative runtime           1.13    0.81    0.45    0.26

Table 4: Relative runtime to compute a gradient with reverse-mode, when compared to forward-mode. The table summarizes results from an experiment conducted by (Baydin et al., 2018). The runtimes are measured by differentiating f : Rn → R. Note forward-mode AD is more efficient when n = 1, but the result flips as we increase n.

in this paper: https://arxiv.org/pdf/1811.05031. You can see that reverse does prove more efficient, but at n=50 it's not 50x as fast as forward, as you might expect; it's just 4x faster, because of the extra overhead of creating an operation stack to be unrolled in the backward pass of reverse.

If your domain size, n, is in the low hundreds, then the extra overhead of forward could be acceptable. If n gets into the thousands, then forward could be really slow.

Do you imagine that the most common use case will be function optimization, in which case you will be differentiating f:𝐑ⁿ->𝐑¹? How big would n be typically? Hundreds, thousands, bigger? If you think n might be in the thousands then you should at least consider reverse, despite the extra complexity of implementation.

Rob McDonald

Aug 19, 2024, 12:39:30 PM
to OpenVSP
I am aiming for an optimization workflow, but OpenVSP is an intermediate step in the whole process.  So, we aren't talking about a scalar output, but instead a relatively large vector output.  After OpenVSP, there is some sort of mesher and then a solver.  The solver's results will be boiled down to a small number of scalars (objective and constraints).

n (inputs) is expected to be O(10), with a likely maximum of about 100.
m (outputs) can easily run into the 1000's.  Every bi-cubic Bezier patch will have 16*3 = 48 outputs, and there will be potentially hundreds of patches in a model.

So, this seems like an obvious case for forward mode.

Rob

Brian Guenter

Aug 20, 2024, 12:31:35 PM
to OpenVSP
Rob,

It sounds like you have a good understanding of the changes you need to make to AD enable OpenVSP. If you run into problems at any point, don't hesitate to ping me.