GSoC Project idea "Efficient Equation of Motion Generation with Python" discussion.


Shiksha Rawat

unread,
Mar 10, 2019, 4:14:32 AM3/10/19
to sympy
Hello,

I am Shiksha, a second-year undergrad from India. I have been contributing to SymPy for more than a month now. While going through the GSoC Ideas page, I found "Efficient Equation of Motion Generation with Python" interesting. I had a course on engineering mechanics in college, and I would be pleased to get a chance to work on it.

After going through the documentation, I observed that functions are implemented to find kinetic energy and potential energy, but there is no function for total energy. Also, kinetic energy is returned as the sum of translational and rotational kinetic energy; what if we only want the translational or the rotational part on its own?
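For reference, here is a short sketch of how the existing energy functions compose; note that `total_energy` below is a hypothetical helper written for illustration, not an existing SymPy function:

```python
from sympy import symbols, simplify
from sympy.physics.mechanics import (ReferenceFrame, Point, Particle,
                                     dynamicsymbols, kinetic_energy,
                                     potential_energy)

m, g, h = symbols('m g h')
v = dynamicsymbols('v')

N = ReferenceFrame('N')
O = Point('O')
O.set_vel(N, v * N.x)            # point moving along N.x with speed v
P = Particle('P', O, m)
P.potential_energy = m * g * h   # potential energy is user-assigned

ke = kinetic_energy(N, P)        # m*v**2/2 (only translational for a particle)
pe = potential_energy(P)

def total_energy(frame, *bodies):
    """Hypothetical helper: sum of kinetic and potential energies."""
    return kinetic_energy(frame, *bodies) + potential_energy(*bodies)

E = total_energy(N, P)
```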

Since the status of that idea says no work has been done so far, I am not sure where I should start.

I would love to hear from Jason Moore as he is more familiar with the topic.

Links to the issues which I have solved (though not related to the current idea):

Thanks.




Shiksha Rawat

unread,
Mar 11, 2019, 2:44:53 PM3/11/19
to sympy
Hello,

I am Shiksha, a second-year undergrad from India. I have been contributing to SymPy for more than a month now. While going through the GSoC Ideas page, I found "Efficient Equation of Motion Generation with Python" interesting. I had a course on engineering mechanics in college, and I would be pleased to get a chance to work on it.

Following are some tasks I want to work on:
Cleaning up the codebase: Going through files like vector.py, particle.py, and frame.py, on which lagrange.py depends, I found a number of ways to speed up their computations; for example, enumeration is used in places where it is not required.

Profiling to find slow functions: Matrix operations take a large share of the computation time, so I think they can be replaced in feasible places with other data structures.

I am still going through the work done in sympy.mechanics and will draft a refined proposal based on this and the suggestions I receive.

Since the status of that idea says no work has been done so far, I am not sure where I should start. I would love to hear from the mentors, as they are more familiar with the topic.

Jason Moore

unread,
Mar 11, 2019, 3:56:08 PM3/11/19
to sy...@googlegroups.com
On Mon, Mar 11, 2019 at 11:44 AM Shiksha Rawat <shiksha...@gmail.com> wrote:
Hello,

I am Shiksha, a second-year undergrad from India. I have been contributing to SymPy for more than a month now. While going through the GSoC Ideas page, I found "Efficient Equation of Motion Generation with Python" interesting. I had a course on engineering mechanics in college, and I would be pleased to get a chance to work on it.

Following are some tasks I want to work on:
Cleaning up the codebase: Going through files like vector.py, particle.py, and frame.py, on which lagrange.py depends, I found a number of ways to speed up their computations; for example, enumeration is used in places where it is not required.

Yes this is fine.
 

Profiling to find slow functions: Matrix operations take a large share of the computation time, so I think they can be replaced in feasible places with other data structures.

Matrix operations are likely the best you will get, but we can use more efficient matrix calculations if we know the structure and type of the matrices. Many matrices in EoM derivation are always positive definite, symmetric, positive semi-definite, etc.

You can see a couple of mechanics benchmarks here: http://www.moorepants.info/misc/sympy-asv/

Extensive profiling needs to be done on a variety of mechanics problems (big ones preferably) and many speed ups can be made to core algorithms in SymPy that will affect mechanics (and other modules too).
 

I am still going through the work done in sympy.mechanics and will draft a refined proposal based on this and the suggestions I receive.

Since the status of that idea says no work has been done so far, I am not sure where I should start. I would love to hear from the mentors, as they are more familiar with the topic.

Links to the issues which I have solved (though not related to the current idea):

Thanks.

--
You received this message because you are subscribed to the Google Groups "sympy" group.
To unsubscribe from this group and stop receiving emails from it, send an email to sympy+un...@googlegroups.com.
To post to this group, send email to sy...@googlegroups.com.
Visit this group at https://groups.google.com/group/sympy.
To view this discussion on the web visit https://groups.google.com/d/msgid/sympy/CAKVsmS4c_LKAxJ2OoZ%2BjKKcYLyGL45A2s1DfP8g7BT1c-1B%2Bgg%40mail.gmail.com.
For more options, visit https://groups.google.com/d/optout.

Vishesh Mangla

unread,
Mar 11, 2019, 4:00:32 PM3/11/19
to sy...@googlegroups.com

Well, representation theory is the part of mathematics that maps a matrix to a smaller representation that is more easily computable than the big matrix itself. This is also related to block diagonalization. Otherwise, Karatsuba multiplication can reduce the complexity of multiplying polynomials from n^2 to about n^1.585, and Strassen's algorithm can reduce matrix multiplication from n^3 to about n^2.807.

 

Sent from Mail for Windows 10

Shiksha Rawat

unread,
Mar 12, 2019, 11:55:20 AM3/12/19
to sympy
Thanks for the info, Jason Moore and Vishesh Mangla.

I analyzed the mechanics benchmarks. Many of the commits that increase the computation time are related to ode, printing, and matrices.

I tried to find substitutes for them. I think LU decomposition is used in lagrange.py, whose time complexity is O(n^3), but it could in principle be replaced by an asymptotically better algorithm like the Coppersmith-Winograd algorithm (time complexity O(n^2.376)).

Also, as suggested by Vishesh Mangla, Karatsuba multiplication and Strassen's algorithm can reduce the complexity of multiplication (from n^2 to about n^1.585 and from n^3 to about n^2.807, respectively).

Please correct me if I am wrong. I am still analyzing the benchmarks and trying to find substitutes for other algorithms.




Oscar Gustafsson

unread,
Mar 14, 2019, 5:08:14 AM3/14/19
to sy...@googlegroups.com
I am personally not convinced that Karatsuba, Coppersmith-Winograd and Strassen will provide much help here, basically because the size of the problem is only rarely the main issue. These algorithms show excellent asymptotic behaviour, but they also carry overhead, which means quite large problems are needed before you actually see a speedup in practice. I can imagine Karatsuba being useful for (non-sparse) polynomial multiplication, but most likely not the other ones, at least for the typical problem sizes I imagine people work with symbolically. In addition, one should investigate the complexity difference between a multiplication and an addition in SymPy. My guess is that the overhead from everything except the actual computation is much higher than the computation itself, and these algorithms (primarily) reduce the number of multiplications at the expense of additions.

(For the record: Karatsuba decreases multiplication from n^2 to about n^1.585, Strassen decreases matrix multiplication from n^3 to about n^2.807, and Coppersmith-Winograd decreases matrix multiplication from n^3 to about n^2.376. For Coppersmith-Winograd I quote from Wikipedia: "However, unlike the Strassen algorithm, it is not used in practice because it only provides an advantage for matrices so large that they cannot be processed by modern hardware." And that is for numbers, not symbolic computation, which requires even more resources.)
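To make the overhead point concrete, here is a standalone sketch (plain Python, not SymPy internals) of Karatsuba polynomial multiplication over coefficient lists; the cutoff constant is illustrative and would need tuning before the asymptotic win shows up:

```python
def school_mul(a, b):
    """O(n^2) schoolbook polynomial multiplication on coefficient lists."""
    res = [0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            res[i + j] += x * y
    return res

def poly_add(a, b):
    n = max(len(a), len(b))
    a = a + [0] * (n - len(a))
    b = b + [0] * (n - len(b))
    return [x + y for x, y in zip(a, b)]

def poly_sub(a, b):
    return poly_add(a, [-x for x in b])

def karatsuba(a, b):
    """~O(n^1.585) multiplication via three half-size products."""
    n = max(len(a), len(b))
    if n <= 16:                     # below this, overhead beats asymptotics
        return school_mul(a, b)
    m = n // 2
    a0, a1 = a[:m], a[m:]
    b0, b1 = b[:m], b[m:]
    z0 = karatsuba(a0, b0)
    z2 = karatsuba(a1, b1)
    z1 = poly_sub(karatsuba(poly_add(a0, a1), poly_add(b0, b1)),
                  poly_add(z0, z2))
    res = [0] * (len(a) + len(b) - 1)
    for i, c in enumerate(z0):
        res[i] += c
    for i, c in enumerate(z1):
        res[i + m] += c
    for i, c in enumerate(z2):
        res[i + 2 * m] += c
    return res
```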

BR Oscar

Shiksha Rawat

unread,
Mar 14, 2019, 5:30:47 AM3/14/19
to sympy
Will importing sympy and using it for computations be helpful here?

Shiksha Rawat

unread,
Mar 14, 2019, 5:31:17 AM3/14/19
to sympy
I mean numpy.

abhinav....@vitstudent.ac.in

unread,
Mar 14, 2019, 5:37:51 AM3/14/19
to sympy
I think you are confusing numpy and scipy. I don't think the algorithm you mentioned is much help for this cause. Could you please look into some more algorithms for the same?

Vishesh Mangla

unread,
Mar 14, 2019, 6:18:32 AM3/14/19
to sy...@googlegroups.com
Well, if you look, these algorithms are not for general-purpose matrices but for cases where high accuracy is required. I would rather suggest using concepts from group theory and representation theory (I do not have a lot of knowledge about this, but I'm studying it), which can reduce matrices to lower dimensions. If you or your friends are from mathematical backgrounds, they might be able to tell you better whether this can make what you want to do easier.

On Thu, Mar 14, 2019, 15:07 <abhinav....@vitstudent.ac.in> wrote:
I think you are confusing numpy and scipy. I don't think the algorithm you mentioned is much help for this cause. Could you please look into some more algorithms for the same?


Shiksha Rawat

unread,
Mar 14, 2019, 9:19:11 AM3/14/19
to sympy
Yes, I have studied group theory in my college curriculum.
I tried to find ways in which group theory can be used to simplify matrix multiplication and came across https://web.wpi.edu/Pubs/ETD/Available/etd-012318-234642/unrestricted/zli.pdf

The approaches suggested there can be used even when the dimensions of the matrix are not very large.

Can this be of any help?

Vishesh Mangla

unread,
Mar 14, 2019, 9:26:14 AM3/14/19
to sy...@googlegroups.com

Well, I can't say much because I'm not a maths student and just study maths because I like doing so.

In that case, you would know it better than I do.

 

Sent from Mail for Windows 10

 

From: Shiksha Rawat
Sent: 14 March 2019 18:49
To: sympy
Subject: Re: [sympy] GSoC Project idea "Efficient Equation of Motion Generation with Python" discussion.

 

Yes, I have studied group theory in my college curriculum.

Shiksha Rawat

unread,
Mar 14, 2019, 11:04:05 AM3/14/19
to sympy
Can Jason Moore or Oscar suggest anything, please?



Shiksha Rawat

unread,
Mar 14, 2019, 12:31:32 PM3/14/19
to sympy
In https://web.wpi.edu/Pubs/ETD/Available/etd-012318-234642/unrestricted/zli.pdf, I think the description of "Embedding Matrix Multiplication in a Group Algebra" on page 10 could be helpful.



Jason Moore

unread,
Mar 14, 2019, 1:26:57 PM3/14/19
to sy...@googlegroups.com
Work to speed up matrix algorithms given assumptions on matrices would help.

Jason

Vishesh Mangla

unread,
Mar 14, 2019, 2:04:48 PM3/14/19
to sy...@googlegroups.com

Give me 2 days, since I am currently having my mid-sems. I will respond ASAP once I have read it.

 

Sent from Mail for Windows 10

 

Aaron Meurer

unread,
Mar 14, 2019, 3:50:44 PM3/14/19
to sy...@googlegroups.com
For matrices in sympy, I suspect that in most cases the best speedups would come from removing overhead from the calculations, rather than from algorithmic improvements. Many of the algorithms mentioned here are only theoretically faster, or only faster asymptotically. In some cases, they would only be faster for matrices larger than anything sympy could reasonably handle.

Benchmarking and profiling are very important if you are looking to improve performance. Also take a look at the benchmarking idea on the GSoC ideas page. 

Aaron Meurer

Oscar Benjamin

unread,
Mar 14, 2019, 4:21:56 PM3/14/19
to sympy
I haven't looked at SymPy's specific code for generating equations of
motion but I have used SymPy for generating equations of motion and
other mechanics related problems.

The example here:
https://github.com/sympy/sympy/issues/16207
comes from a mechanics problem and was slow because of slow matrix
calculations where the matrix has symbolic coefficients.

Note that the matrix in question is 13x13, far smaller than the sizes
at which anyone would consider these large matrix multiplication
algorithms useful. The slowness of symbolic calculations makes it
possible to consider entirely different optimisations from those used
in numeric libraries, though. In #16207 I can get a 50x speed up by
factoring out the block structure of the matrix. The speed difference
would be even bigger for larger matrices. For a numeric library, the
additional checks I performed to discover whether that optimisation
was possible would cause a noticeable slowdown in the cases where the
method doesn't apply, so that kind of thing wouldn't be considered.

Although #16207 is about eigenvalues the same principles can apply to
matrix multiplication and to solving systems of equations etc.
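A toy sketch of the kind of structure exploitation described here (not the actual fix from #16207): for a block-diagonal matrix, the eigenvalues are just the union of the blocks' eigenvalues, so two small characteristic polynomials replace one large one.

```python
from sympy import Matrix, diag

B1 = Matrix([[1, 2], [3, 4]])
B2 = Matrix([[0, 1], [1, 0]])
M = diag(B1, B2)                 # 4x4 block-diagonal matrix

# eigenvalues of the blocks, with multiplicities merged
block_evs = {}
for B in (B1, B2):
    for val, mult in B.eigenvals().items():
        block_evs[val] = block_evs.get(val, 0) + mult

# same answer as solving the full 4x4 problem, at a fraction of the cost
assert block_evs == M.eigenvals()
```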

Vishesh Mangla

unread,
Mar 14, 2019, 4:22:16 PM3/14/19
to sy...@googlegroups.com
@asmeurer
Well, the group theory concept is quite different and could really be worth implementing, unlike Karatsuba, FFTs, Cook's algorithm, etc.

Oscar Benjamin

unread,
Mar 14, 2019, 4:52:53 PM3/14/19
to Alan Bromborsky, sympy
(Replying on-list)

On Thu, 14 Mar 2019 at 20:37, Alan Bromborsky <abrom...@gmail.com> wrote:
>
> Since most PCs these days have multiple cores and threads, why not use
> parallel algorithms. In honesty I must state I have a vested interest,
> since I have a PC with a Threadripper CPU with 16 cores and 32 threads.

Parallel algorithms can offer improvement. Your 16 cores might amount
to a 10x speed up if used well for this kind of thing. The
hyper-threading probably can't be exploited in CPython.

However I think that many of the things that SymPy is slow for have
*really* bad asymptotic performance: think O(N!) rather than O(N^2).
Many orders of magnitude improvements can be made by spotting these
where more efficient methods are possible. It's not hard in a CAS to
accidentally generate enormous expressions and end up simplifying them
down again. This leads to many situations where it would be vastly
more efficient to somehow take a more direct route.

Aaron Meurer

unread,
Mar 14, 2019, 5:19:39 PM3/14/19
to sy...@googlegroups.com, Alan Bromborsky
I agree. The biggest challenge with symbolic matrices is expression
blow up. In some cases it is unavoidable, for instance, symbolic
eigenvalues/eigenvectors use the symbolic solutions to polynomials,
which are complicated in the general case for n > 2.

One thing I meant by "overhead" is that if the type of a matrix's
entries is known to all be rational numbers, for instance, we can
operate directly on those numbers, ideally using fast number types
like gmpy.mpq. If they are all rational functions, we can use
polynomial algorithms that operate on rational functions. These always
keep rational functions in canonical form, and the zero equivalence
testing becomes literally "expr == 0" (no simplification required).
These can be more efficient than general symbolic manipulation.

This is how the polys module is structured. See
https://docs.sympy.org/latest/modules/polys/internals.html. It would
be nice to have a similar structure in the matrices, where a matrix
can have a ground domain (or type) associated with its underlying
data.
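A tiny example of the canonical-form point using the existing polys machinery (the matrix-with-ground-domain part is the piece that does not exist yet):

```python
from sympy import Poly, simplify, symbols

x = symbols('x')

# As Expr objects, zero testing may need a simplification step:
e = (x**2 - 1) - (x + 1)*(x - 1)
# e == 0 is False structurally; simplify(e) == 0 is True, but
# simplify() had to do the work.

# As Poly objects over a ground domain, arithmetic keeps a canonical
# form, so zero testing is a direct comparison:
p = Poly(x**2 - 1, x) - Poly(x + 1, x) * Poly(x - 1, x)
# p.is_zero is True with no simplification step needed.
```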

Aaron Meurer

Jason Moore

unread,
Mar 15, 2019, 11:16:10 AM3/15/19
to sy...@googlegroups.com, Alan Bromborsky
The mechanics speedup idea is really just a narrow version of the profiling and benchmarking idea (it focuses on just a couple of packages). Maybe a proposal that focuses on figuring out the main bottlenecks for sympy, creating benchmarks for them, and then improving performance would ultimately help all the packages. I'm happy to support and mentor that idea if someone wants to submit.

Jason

Shiksha Rawat

unread,
Mar 15, 2019, 12:19:17 PM3/15/19
to sympy
I am really interested in taking up that idea. Can you suggest where or how I should start? Up till now I have just been focusing on the physics module and the benchmarks related to it.
I am still trying to work out how we could optimize matrix operations.


Shiksha Rawat

unread,
Mar 19, 2019, 2:23:37 PM3/19/19
to sympy
I did some further digging on the idea mentioned by Jason Moore.

Figuring out the main bottlenecks for sympy: The best way to figure out these bottlenecks would be to design a typical problem for each module (for example, a mass-spring-damper for physics) and measure the time sympy takes to produce the output. If it is greater than expected, or above a predefined threshold, we would then analyze the codebase of that module for possible changes to decrease the computation time. The results of the predefined benchmarks could also be used.
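As an illustration, the mass-spring-damper problem mentioned above can be set up with KanesMethod and timed; this is a sketch of a standard textbook formulation, not a finished benchmark:

```python
import time

from sympy import Matrix, simplify, symbols
from sympy.physics.mechanics import (ReferenceFrame, Point, Particle,
                                     dynamicsymbols, KanesMethod)

m, c, k, g = symbols('m c k g')
x, v = dynamicsymbols('x v')

N = ReferenceFrame('N')
O = Point('O')
O.set_vel(N, 0)
P = O.locatenew('P', x * N.x)   # mass displaced along N.x
P.set_vel(N, v * N.x)
pa = Particle('pa', P, m)

kd = [v - x.diff()]              # kinematic differential equation
forces = [(P, (m * g - k * x - c * v) * N.x)]  # gravity, spring, damper

start = time.perf_counter()
km = KanesMethod(N, q_ind=[x], u_ind=[v], kd_eqs=kd)
fr, frstar = km.kanes_equations([pa], forces)
elapsed = time.perf_counter() - start  # one data point for a benchmark
```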

I think this documentation could come in handy for creating the benchmarks. Whether a particular benchmark is needed could be decided on the basis of the bottlenecks we figure out.

Improving performance: I think the best way to improve performance would be to clean up the codebase first and then change the algorithms used according to the requirements.

Future scope: Figuring out a method by which each PR also reports the time the modules related to that PR take to solve the problems associated with them (those mentioned under figuring out the bottlenecks above).

I might be wrong about the ideas mentioned above, so I would like suggestions from the mentors.

Thanks.

Shiksha Rawat

unread,
Mar 27, 2019, 5:56:07 AM3/27/19
to sympy
https://github.com/sympy/sympy/wiki/GSoC-2019-Application-SHIKSHA-RAWAT-:-Benchmarks-and-performance

I have drafted a proposal for the Benchmarks and Performance idea, though it is not complete yet.

Can Jason Moore, Aaron, and Oscar please review it and suggest changes?

Oscar Benjamin

unread,
Mar 27, 2019, 7:11:17 PM3/27/19
to sympy
This looks like good work to do. I don't know how these applications
are evaluated, but my thought, if I were reviewing this, would be that
it seems quite vague. It would probably be a more enticing proposal if
it had some specific suggestions of changes that would speed things
up.

I can tell you now what is slow in the ODE module: currently even for
the simplest ODEs all matching code is run for all the possible
methods even after a suitable method has been found. It would be much
better to identify the most immediately usable solver and then use
that without matching all the others. This needs a refactor of the
module though and a redesign of the basic approach used by dsolve. I
want that to happen as an ultimate goal but I would like to see better
test coverage first.
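To illustrate the matching cost, even a trivial ODE matches several solver hints, and all of that matching currently runs up front (a small sketch):

```python
from sympy import Function, symbols, classify_ode

x = symbols('x')
f = Function('f')

# classify_ode runs the matching code for every solver and returns all
# the hints that apply; dsolve then tries them in this order.
hints = classify_ode(f(x).diff(x) - f(x), f(x))
print(hints)   # several hints match even for f' = f
```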

Aaron Meurer

unread,
Mar 27, 2019, 8:17:32 PM3/27/19
to sy...@googlegroups.com
I agree with Oscar. I would also add that it's usually not trivial to
determine where the bottlenecks are in SymPy. So I would write more
about how you intend to profile the code.

Perhaps it would be useful to take an existing thing that is slow in
SymPy (you can use the performance issue label as a guide, or find
something yourself,
https://github.com/sympy/sympy/issues?q=is%3Aopen+is%3Aissue+label%3APerformance),
and try to fix the performance, documenting how you went about finding
the bottleneck and fixing it. This can be used as a case study in your
application.

Also I would note that currently the benchmarking infrastructure for
SymPy is quite bad (basically nonexistent). See
https://github.com/sympy/sympy/wiki/GSoC-2019-Ideas#benchmarks-and-performance.
It's fine if you do not want to work on that specifically, but you
should note that you will be running the benchmarks on your own
computer to find performance regressions. Not all performance issues
are regressions either (some things have always been slow), so you
should consider absolute numbers as well as relative numbers.

Aaron Meurer


Jason Moore

unread,
Mar 28, 2019, 12:08:45 PM3/28/19
to sympy
We have a benchmark repository that is run periodically: https://github.com/sympy/sympy_benchmarks

I recommend starting there. You can find a number of regressions that can be investigated.

Jason

Shiksha Rawat

unread,
Mar 29, 2019, 2:07:36 PM3/29/19
to sympy
Thank you for the replies. 

As suggested by Aaron, I figured out ways to fix the performance issue in https://github.com/sympy/sympy/issues/16249.
One easy option is to disable _find_localzeros.
The function creates a set of non-minimal (non-maximal) numbers, and to identify these it compares every possible pair of numbers.
But these non-minimal values could instead be found by sorting the values in ascending or descending order,
using an O(n log n) algorithm such as merge sort, which is much better than the O(n^2) pairwise comparison currently used.
So the second option is to use a better algorithm.

Which one should I use to fix this issue? Please suggest.

I am also trying to find regressions to work on in https://github.com/sympy/sympy_benchmarks.

Aaron Meurer

unread,
Mar 29, 2019, 2:36:47 PM3/29/19
to sympy
On Fri, Mar 29, 2019 at 12:07 PM Shiksha Rawat <shiksha...@gmail.com> wrote:
>
> Thank you for the replies.
>
> As suggested by Aaron , I figured out ways to fix the performance of https://github.com/sympy/sympy/issues/16249.
> One of the easy way is to disable _find_localzeros
> The function is creating a set of non-minimal(non-maximal) numbers and to identify these it is making comparison between every two possible combination of numbers.
> But these non-minimal values can be computed by sorting the values in ascending or descending order.
> This sorting can be done by using algorithms like merge-sort with complexity (nlogn) which is much better than currently used n**2.
> So second option is to use better algorithms.
>
> Which would be better or which one should i use to fix this issue?
> Please suggest.

It depends on what the performance is like, and what the tradeoffs
are. Often when trying to make something faster you may think that
something will improve performance, but after implementing it you'll
find that it doesn't change it at all, or it even makes it worse. So
you always have to try it out and profile it.
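A sketch of what "try it out and profile it" can look like in practice; `profile_call` is an ad-hoc helper written for this example, not a SymPy API:

```python
import cProfile
import io
import pstats

def profile_call(func, *args, top=5, **kwargs):
    """Run func under cProfile and print the `top` hottest entries."""
    prof = cProfile.Profile()
    prof.enable()
    result = func(*args, **kwargs)
    prof.disable()
    out = io.StringIO()
    pstats.Stats(prof, stream=out).sort_stats('cumulative').print_stats(top)
    print(out.getvalue())
    return result

# example: see where the time goes in a symbolic Max over many arguments
from sympy import Max, Rational
vals = [Rational(i, 7) for i in range(200)]
result = profile_call(Max, *vals)
```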

It would be better to move the discussion of this specific issue to
the issue itself.

Aaron Meurer

Shiksha Rawat

unread,
Mar 29, 2019, 4:00:08 PM3/29/19
to sympy
Okay, I have continued the discussion on the issue itself.

Shiksha Rawat

unread,
Apr 1, 2019, 10:25:23 AM4/1/19
to sympy
I am currently trying to improve performance in the PR https://github.com/sympy/sympy/pull/16509.
To complete my GSoC proposal, should I write up how I am trying to improve the performance and how I plan to proceed?

Because the benchmarking and performance idea mainly involves trying to find a suitable substitute for each bottleneck.

Shiksha Rawat

unread,
Apr 4, 2019, 2:03:55 AM4/4/19
to sympy
I have added a case study for the performance issue I am working on.

Please review the proposal and suggest changes.
I have not completed the implementation plans, but I will add that part too by tonight.

Vishesh Mangla

unread,
Apr 4, 2019, 4:53:03 AM4/4/19
to sy...@googlegroups.com

About sorting: Python's built-in sort (Timsort) automatically adapts its strategy according to the list's size and existing order.
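For reference, Timsort (CPython's built-in sort) exploits already-sorted runs in the input; a quick way to see the adaptivity (timings will vary by machine):

```python
import random
import timeit

random.seed(1)
data = [random.random() for _ in range(10_000)]
nearly_sorted = sorted(data)
nearly_sorted[0], nearly_sorted[-1] = nearly_sorted[-1], nearly_sorted[0]

t_random = timeit.timeit(lambda: sorted(data), number=100)
t_nearly = timeit.timeit(lambda: sorted(nearly_sorted), number=100)

# Timsort detects the long ascending run, so the nearly-sorted input is
# typically sorted noticeably faster than the shuffled one.
print(f'random: {t_random:.3f}s  nearly sorted: {t_nearly:.3f}s')
```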

 

Sent from Mail for Windows 10

 

From: Shiksha Rawat
Sent: 30 March 2019 01:30
To: sympy
Subject: Re: [sympy] GSoC Project idea "Efficient Equation of Motion Generation with Python" discussion.

 

Okay, I have continued the discussion on the issue itself.

Oscar Benjamin

unread,
May 14, 2019, 8:56:01 AM5/14/19
to sympy
On Thu, 14 Mar 2019 at 21:19, Aaron Meurer <asme...@gmail.com> wrote:
>
> I agree. The biggest challenge with symbolic matrices is expression
> blow up. In some cases it is unavoidable, for instance, symbolic
> eigenvalues/eigenvectors use the symbolic solutions to polynomials,
> which are complicated in the general case for n > 2.
>
> One thing I meant by "overhead" is that if the type of a matrix's
> entries is known to all be rational numbers, for instance, we can
> operate directly on those numbers, ideally using fast number types
> like gmpy.mpq. If they are all rational functions, we can use
> polynomial algorithms that operate on rational functions. These always
> keep rational functions in canonical form, and the zero equivalence
> testing becomes literally "expr == 0" (no simplification required).
> These can be more efficient than general symbolic manipulation.
>
> This is how the polys module is structured. See
> https://docs.sympy.org/latest/modules/polys/internals.html. It would
> be nice to have a similar structure in the matrices, where a matrix
> can have a ground domain (or type) associated with its underlying
> data.

There is an example of this here:
https://github.com/sympy/sympy/issues/16823

The matrix is all numbers of the form q1 + I*q2 for rational q1 and
q2, and the expressions blow up, leading to terrible asymptotic
performance. It could probably be made a lot faster with judicious use
of expand, but actually having a fast complex-number-only matrix
routine would speed it up massively.
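A small sketch of that entry blow-up and the effect of expand on a matrix whose entries have the form q1 + I*q2:

```python
from sympy import I, Matrix, Rational, expand

M = Matrix([[Rational(1, 2) + I*Rational(1, 3), 1],
            [I, Rational(2, 5)]])

P = M * M
# Without normalisation, entries pile up as unexpanded sums of products,
# e.g. P[0, 0] stays as (1/2 + I/3)**2 + I; expand() brings each entry
# back to the flat q1 + I*q2 form and keeps it small.
P_flat = P.applyfunc(expand)
```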

--
Oscar

S.Y. Lee

unread,
Jan 30, 2020, 4:12:45 PM1/30/20
to sympy
This is an old topic, but I stumbled across the Coppersmith-Winograd algorithm, so I'm going to reply here.
It was quite difficult to understand the paper, but I'd suspect that the Coppersmith algorithm is about 'approximating' the matrix product rather than computing the exact values.
If that is the case, it won't be an interesting topic outside of numeric computations.

I wonder if anyone familiar with the topic can clarify whether the algorithm is approximate.





Oscar Benjamin

unread,
Jan 30, 2020, 5:06:24 PM1/30/20
to sympy
I don't see any connection between the original post and your reply but...

My understanding is that the Coppersmith Winograd algorithm is not
really used anywhere:
https://en.wikipedia.org/wiki/Galactic_algorithm

I'm not sure if that is the same algorithm. The paper you cite looks
like an earlier work.

Jason Moore

unread,
Jan 30, 2020, 5:17:13 PM1/30/20
to sympy
The idea behind this topic would be to profile the physics.vector and physics.mechanics codebases using non-trivial problems, then implement more efficient algorithms where needed. No new dynamics algorithms are likely needed, and most speed-ups would probably come from work on the matrix module. The skills needed are really just programming with sympy and profiling. You may not even need to know the dynamics stuff.


They've slowed down some over time. I think that could be reversed and the code could be made faster.

Jason




