[GSoC 2026] Proposal Inquiry: Extending physics.continuum_mechanics with Arbitrary Symbolic Loads


Chidroopa Kanaparthy

Mar 21, 2026, 2:49:54 AM
to sympy

Hi SymPy Community,

I'm Chidroopa, and I'm working on a proposal to improve the physics.continuum_mechanics.beam module for GSoC 2026. My goal is to provide a high-level interface for arbitrary symbolic load distributions.

Problem: The existing Beam module is optimised for Macaulay's method using SingularityFunction. Although this is ideal for point loads and constant distributed loads, applying a transcendental or otherwise non-polynomial symbolic load, such as q(x) = sin(x) or e^x, requires the user to manually create intricate singularity representations, which is counterintuitive and restricts the module's applicability for complex engineering problems.
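For reference, this is roughly how loads are applied today: a (value, start, order) triple maps onto a singularity power, which is exactly what breaks down for transcendental loads. A minimal sketch (the numbers are arbitrary):

```python
import sympy as sp
from sympy import SingularityFunction
from sympy.physics.continuum_mechanics.beam import Beam

E, I = sp.symbols('E I')
b = Beam(4, E, I)        # beam of length 4

b.apply_load(-3, 0, -1)  # point load of magnitude 3 at x = 0 (order -1)
b.apply_load(2, 2, 0)    # uniform load of 2 starting at x = 2 (order 0)

# Internally the load is a sum of SingularityFunction terms, but no
# (value, start, order) triple can express q(x) = sin(x).
```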

My Proposed Solution: I am exploring a Symbolic-to-Singularity Transformer.

1. For polynomial loads, I plan to map them directly to singularity powers.

2. To handle non-polynomial equations, I'm considering adding a Taylor Series expansion fallback to the apply_load method. This would approximate complex loads as high-order polynomials that the current integration engine can handle without a core change.
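A rough sketch of what that fallback could look like; the term-by-term mapping shown here is hypothetical, not existing SymPy API:

```python
import sympy as sp
from sympy import SingularityFunction

x = sp.symbols('x')
a = 0  # load start; chosen as 0 so it matches the expansion point

# Hypothetical fallback: truncate the load to a Taylor polynomial, then
# map each coefficient of x**k onto the singularity term <x - a>^k.
poly = sp.series(sp.sin(x), x, a, 6).removeO()
coeffs = sp.Poly(poly, x).all_coeffs()[::-1]       # ascending order
approx = sum(c * SingularityFunction(x, a, k) for k, c in enumerate(coeffs))
```

The obvious cost, as discussed below in the thread, is that the result is only an approximation and the number of terms grows with the desired accuracy.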

My Question for Mentors:

Before finalising my approach, I wanted to ask:

  • Is a more straightforward integration path for arbitrary functions within the SingularityFunction class itself preferred by the community, or is this Taylor-expansion approximation a better approach for complex symbolic loads?
  • Are there any edge cases in the present beam solver that I should know about when using interval-based symbolic expressions?

I've begun getting acquainted with the codebase through recent PRs (#29411, #29449, #29339, #29332), and I would appreciate any feedback to make sure my idea aligns with the module's long-term goals.

Best regards,

Chidroopa Kanaparthy

Tom van Woudenberg

Mar 23, 2026, 2:33:55 AM
to sympy
Hi Chidroopa,

I like your idea to support arbitrary loads. However, I'm not sure about the Taylor expansion: it's not an exact representation, and the expression becomes cluttered with many terms, while a big benefit of the singularity function (for polynomial loads) is the compact notation. I think I'd prefer using a singularity function as an 'activation' function with a <x-a>^0 multiplier. However, that isn't perfect either: after integration/differentiation the number of terms increases.

Tom

On Saturday, March 21, 2026 at 07:49:54 UTC+1, chidro...@gmail.com wrote:

Chidroopa Kanaparthy

Mar 23, 2026, 10:48:52 AM
to sympy
Hi Tom,

Thanks for the feedback on the series-expansion idea. As you said, the term inflation would make the output unreadable for complex loads.
I have worked out a few alternatives based on your suggestion, one of which is to use <x-a>^0 as an 'activation' multiplier as you proposed. I think we can make that work well with a lazy-rewrite approach: keep the load stored in the compact form f(x) · <x-a>^0 so the user-facing notation stays clean and follows the Macaulay style engineers expect. Only when we actually need to calculate the shear force or bending moment would the internal methods (like _get_shear_force) rewrite those terms as Piecewise or Heaviside expressions before calling integrate(). This lets us use the existing integration engine to get exact results (e.g., keeping a sin(x) load as a cos(x) result) without the user ever seeing a wall of piecewise logic in the initial load definition.
To maintain the diagrams' continuity, I plan to handle the integration constants by zeroing the antiderivative at the start point a. This should prevent any jumps at the load boundaries.
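A minimal sketch of the lazy-rewrite idea, with a concrete load sin(x) starting at a = 2; the hook into _get_shear_force is hypothetical, only the final subtraction step is shown:

```python
import sympy as sp
from sympy import SingularityFunction, Heaviside

x = sp.symbols('x')
a = 2  # load start point

# User-facing compact form: f(x) * <x - a>^0 stays in Macaulay notation
load = sp.sin(x) * SingularityFunction(x, a, 0)

# Only at solve time: integrate the load exactly and zero the
# antiderivative at the start point so the shear force is continuous
F = sp.integrate(sp.sin(x), x)                 # -cos(x)
V = (F - F.subs(x, a)) * Heaviside(x - a)      # (cos(2) - cos(x)) * H(x - 2)
```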

Does this Lazy-Rewrite direction sound like a solid path forward to you? I'd love to get your take on this before I finalize the implementation details for my GSoC proposal.

Best Regards,
Chidroopa

Chidroopa Kanaparthy

Mar 23, 2026, 11:24:37 AM
to sympy
I’ve just worked through a few test cases (including trigonometric and partial distributed loads) to validate this logic. The resulting plots are significantly more precise than with the Taylor-series method, which tended to drift or require too many terms for a clean plot.

Tom van Woudenberg

Mar 24, 2026, 3:08:36 AM
to sympy
>  the internal methods (like _get_shear_force) would rewrite those specific terms as Piecewise or Heaviside expressions before calling integrate(). This lets us use the existing integration engine to get exact results (like keeping a sin(x) load as a cos(x) result) without the user ever having to see a wall of piecewise logic in the initial load definition.

It sounds like a core issue is that the integration engine cannot handle your new loading type. It would be better to solve that in the integration functions themselves than to patch it in the beam module.

On Monday, March 23, 2026 at 16:24:37 UTC+1, chidro...@gmail.com wrote:

Chidroopa Kanaparthy

Mar 24, 2026, 8:17:43 AM
to sympy
I spent some time brainstorming different ways to handle this based on your suggestion. I initially considered a specialized structural dispatcher within the physics module, or adding a manual _eval_rewrite step for the user, but both felt like surface fixes that didn't solve the underlying math problem in the core. I am now considering a core enhancement of the SingularityFunction class (specifically in sympy/functions/special/singularity_functions.py).

The plan is to extend the internal _eval_integral method so it can natively handle products of arbitrary expressions and singularity functions. I am still considering implementing the Piecewise-folding logic and boundary constants (F(x) - F(a)) directly in the core; the Beam module could then naturally support these loads without any extra workaround. This keeps the physics module clean and also benefits other parts of SymPy that use singularity functions. Since the resulting Piecewise expressions can become very long, I am also planning to add a simplification pass.

I'm revising my GSoC proposal to reflect this core-first framework. I'd love to hear your thoughts on this decision.

Tom van Woudenberg

Mar 24, 2026, 11:39:45 AM
to sympy
>  The plan is to extend the internal _eval_integral method so it can natively handle products of arbitrary expressions and singularity functions

Sounds good

> I am still considering implementing the Piecewise-folding logic and boundary constants (F(x) - F(a)) directly in the core,

I'm not sure what you mean by this.

On Tuesday, March 24, 2026 at 13:17:43 UTC+1, chidro...@gmail.com wrote:

Chidroopa Kanaparthy

Mar 24, 2026, 1:17:45 PM
to sympy
To clarify:

When we integrate a load q(x) that starts at a, the resulting shear force V(x) must be zero for x < a. If we simply return an indefinite integral F(x) · H(x - a), we get a mathematical jump at x = a unless F(a) = 0.

My plan is to implement the definite-style subtraction directly in the core:

∫ q(x) · <x-a>^0 dx = [F(x) - F(a)] · Heaviside(x - a)

This ensures the constant of integration is 'reset' exactly at the singularity point, providing physical C^0 continuity natively. 

Since these integrations often result in nested Piecewise objects (especially with multiple loads), I want the core _eval_integral to automatically call piecewise_fold() on its own output. This ensures that the Beam module receives a single, flat mathematical expression rather than a deeply nested logical tree which could cause the solver or the plotter to crash.
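To illustrate what piecewise_fold does with such sums (a toy example, not beam code):

```python
import sympy as sp

x = sp.symbols('x')

# Two overlapping piecewise loads, as would arise from two partial loads
p1 = sp.Piecewise((sp.sin(x), x >= 1), (0, True))
p2 = sp.Piecewise((x, x >= 2), (0, True))

nested = p1 + p2                   # a sum of two separate Piecewise objects
flat = sp.piecewise_fold(nested)   # one flat Piecewise over all regions
```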



Tom van Woudenberg

Mar 25, 2026, 3:09:15 AM
to sympy
Isn't the definite-style subtraction you describe already implemented? See https://github.com/sympy/sympy/blob/master/sympy/functions/special/singularity_functions.py#L27C5-L29C57

>  which could cause the solver or the plotter to crash.
If that causes problems, this might be a solution. I don't know the details of piecewise_fold() well enough to judge whether it would be a fail-proof method.

On Tuesday, March 24, 2026 at 18:17:45 UTC+1, chidro...@gmail.com wrote:

Chidroopa Kanaparthy

Mar 25, 2026, 6:11:08 AM
to sympy
The `_eval_rewrite_as_Piecewise` and `_eval_rewrite_as_Heaviside` methods there handle the standard Macaulay cases for polynomials perfectly. However, the issue I'm targeting is that these are representational rewrites, not integration logic. When we move beyond polynomials to arbitrary symbolic loads q(x) (like e^x, sin(x), etc.), relying on the current rewrite leads to a loss of physical C^0 continuity.

For example, if we integrate q(x) = e^x starting at a = 2:
- Current core behavior: it rewrites to e^x · H(x-2), whose naive antiderivative is e^x · H(x-2) + C. At the boundary x = 2, the value jumps from 0 to e^2 ≈ 7.39. In beam mechanics, this implies a massive phantom point load/moment at the start of the distributed load.
- My proposal: by implementing a dedicated `_eval_integral` in the core, we can natively return [F(x) - F(a)] · H(x - a). For e^x, we get (e^x - e^2) · H(x - 2), which is 0 at the boundary. Any symbolic expression is then guaranteed to be physically valid without the user having to manually shift the integral.
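A small worked check of this boundary behavior (plain SymPy, not the proposed core change):

```python
import sympy as sp

x = sp.symbols('x')
a = 2
F = sp.exp(x)                                       # antiderivative of q(x) = e^x

naive = F * sp.Heaviside(x - a)                     # jumps by e^2 at x = a
shifted = (F - F.subs(x, a)) * sp.Heaviside(x - a)  # zero at the boundary
```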

I understand your caution about `piecewise_fold()` being a catch-all. My reasoning for including it in the core `_eval_integral` (or a simplification pass) is specifically to solve the nested-Piecewise problem. Currently, when the Beam module handles multiple loads, it creates deeply nested logical trees, which often lead to `RecursionError` or crashes during plotting/solving. By flattening these expressions inside the core's integration step, we ensure the Beam module (and any other module using SingularityFunction) receives a single, clean mathematical expression rather than a recursive tree.

I'd be happy to look into a more compact version of folding if a full `piecewise_fold` feels too heavy for the core, but I believe some form of logic flattening is essential for the stability of the Beam solver.

Chidroopa Kanaparthy

Mar 26, 2026, 12:32:16 PM
to sympy
Hi Tom and possible mentors, 

I have drafted my GSoC proposal, "Refactoring the Singularity Engine to Enable Arbitrary Symbolic Load Integration", and would love your review and feedback. Before finalizing the proposal, I want to ensure it aligns with the community's expectations and requirements.

Looking forward to your feedback.

Regards,
Chidroopa

Tom van Woudenberg

Mar 30, 2026, 6:23:56 AM
to sympy
Hi Chidroopa,

A few comments:
  • https://github.com/Chidroopakanaparthy/Beam-Demo is a dead link
  • You propose to use a Piecewise fallback. It would be nice to have a demo of that, since you're proposing such a big change in the logic used in the beam module.
  • You name some bugs in the current beam module which don't seem related to the polynomial assumptions (for example 'loads applied at L/2' and 'Nested or complex expressions'). It's not clear to me whether you plan to fix those with your piecewise fallback or with improvements to the current logic.
  • It's not clear to me what the problem was with, and what you're fixing in, 'Define the correct symbolic behavior using F(x) - F(a)'.
  • 'Normalize Piecewise expressions using piecewise_fold and simplify to reduce nested conditions and ensure consistent structure'. Although I see a need for that, I'm wondering whether we won't be including too 'general'/'heuristic' approaches here.
  • I cannot assess all further compatibility requirements of your proposed changes (you mention `lambdify`, but aren't there many more?). I feel like your overview in 'Risk Analysis & Mitigation' is incomplete, but I wouldn't be able to help you with this.
  • I wouldn't see tests as a separate step/phase but integrated in all the other ones. Adding examples to the docs might be more appropriate as a separate phase, if you want to do so.
  • It's not fully clear to me which of your ideas link to SymPy's core functionality and which to the beam module

On Thursday, March 26, 2026 at 17:32:16 UTC+1, chidro...@gmail.com wrote:

Chidroopa Kanaparthy

Mar 30, 2026, 8:27:50 AM
to sy...@googlegroups.com
Hi Tom,

First of all, thank you for the detailed feedback.

- Repository Visibility & Piecewise Demo
I apologize for the broken link; the Beam-Demo repository is now public (I had left it private by mistake). I'm adding more comprehensive tests demonstrating the proposed changes as I write this message.

- The Bug Fixes
You correctly noted that bugs like "loads at L/2" are not strictly polynomial issues. My plan separates these into two workstreams:
  • Structural Logic (The L/2 & Nesting Bugs): These stem from how `Beam._assemble_loads` and the boundary condition solver handle symbolic coordinates. I will refactor the internal dictionary management to ensure symbolic expressions are simplified before they are passed to the solver.

  • Functional Logic (The Piecewise Fallback): This addresses the type of math that can be handled. By fixing the structural logic first, the Piecewise fallback will automatically benefit from more robust boundary condition handling.

- The F(x) - F(a) Approach

The issue I’m addressing is integration-constant drift in singularity functions. Currently, integrating <x-a>^n can lead to expressions that aren't zero for x < a without manual intervention. Defining the behavior as ∫ from a to x of f(t) dt = F(x) - F(a) ensures:

  1. Shear Force (V) is zero at the very start of a partial load.

  2. Continuity is maintained across the entire beam length.
I will implement this by ensuring the Beam module wraps every integration step in a subtraction of the lower-bound evaluation.

- Heuristics and Normalization
I understand the concern regarding "general" heuristics like `piecewise_fold`. To mitigate this, I will implement Targeted Simplification. Instead of a global fold, I will use a specific visitor pattern that only simplifies Piecewise expressions if they share the same condition boundaries (e.g., x > a), preventing the expression tree from exploding in complexity.
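A sketch of such a targeted pass; merge_same_breaks is a hypothetical helper, not existing SymPy API:

```python
import sympy as sp

x = sp.symbols('x')

def merge_same_breaks(expr):
    """Fold Piecewise subexpressions only when they all share the same
    condition boundaries; otherwise leave the expression untouched."""
    pieces = list(expr.atoms(sp.Piecewise))
    conds = {tuple(cond for _, cond in p.args) for p in pieces}
    if len(conds) == 1:            # identical breakpoints everywhere
        return sp.piecewise_fold(expr)
    return expr

p1 = sp.Piecewise((x, x > 1), (0, True))
p2 = sp.Piecewise((1, x > 1), (0, True))   # same boundary x > 1
merged = merge_same_breaks(p1 + p2)        # safe to fold into one Piecewise
```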

- Risk Analysis & Compatibility
I understand that `lambdify` is only the tip of the iceberg. I have also identified the following risks:

  1. Plotting: The Piecewise loads must be compatible with `sympy.plotting.plot`. I will ensure the `Beam.plot()` method can handle the transition between singularity-based and piecewise-based diagrams.

  2. Evalf: Numerical evaluation of complex nested Piecewise structures can be slow. I will include performance benchmarks as part of my milestones.

  3. Core Impact: I plan to keep the heavy lifting inside the `continuum_mechanics` module to minimize the risk of breaking SingularityFunction behavior in other areas of SymPy.

- Integrated Testing & Documentation
I believe adding tests with every PR will make the feature much more robust. I am updating my timeline so that tests are no longer a separate phase. Every PR will include:

  • Unit tests for the specific math fix.

  • Integration tests comparing the output against known Euler-Bernoulli analytical solutions.

  • Docstring examples to keep the Beam module's documentation the primary reference.

- Core vs. Module Scope

To clarify what my idea is:

  • SymPy Core: Any improvements to the SingularityFunction class itself or its integration/differentiation rules.

  • Beam Module: The application of these functions, the Piecewise fallback logic, and the boundary condition solvers.

 I will explicitly tag each task in the timeline as either [Core] or [Module] for better clarity while reviewing.

Thank you again for the guidance. It’s helping me shape this into a much more meaningful project.

regards,
Chidroopa



Tom van Woudenberg

Mar 30, 2026, 9:16:25 AM
to sympy
It has been mentioned before, but your answer feels like an LLM reply.

On Monday, March 30, 2026 at 14:27:50 UTC+2, chidro...@gmail.com wrote:

Chidroopa Kanaparthy

Mar 30, 2026, 10:16:26 AM
to sympy
I tried making my answer look detailed and easy to read; if it still feels like an LLM reply, I'll make sure to format it differently in the future. Are there any particular markers that make my writing look AI-generated?

Tom van Woudenberg

Mar 30, 2026, 10:21:39 AM
to sympy
It's the phrasing of the sentences and the heavy use of capitalized generic terms. If you don't use LLMs for your writing, please ignore me.

On Monday, March 30, 2026 at 16:16:26 UTC+2, chidro...@gmail.com wrote:

Chidroopa Kanaparthy

Mar 30, 2026, 11:06:07 AM
to sympy
Fair point, I'll keep it more direct from now on. Thanks for the heads-up.
Also, I have made changes to the beam demo; it is almost ready.

Chidroopa Kanaparthy

Mar 30, 2026, 3:12:22 PM
to sympy
After a long brainstorming session on how to improve my proposal, I have decided to structure it around the following pillars:
1. Automated Matrix Assembly
To handle the integration constants (C^n) arising from arbitrary Piecewise loads without expression bloat, I am proposing a global matrix solver (A · C = B). The Beam module will automatically assemble a coefficient matrix A based on the beam's segments. For a beam split into n segments, the solver will enforce two matching conditions at each of the n - 1 junctions x_j:
  - Deflection continuity (C^0): y_i(x_j) - y_{i+1}(x_j) = 0
  - Slope continuity (C^1): theta_i(x_j) - theta_{i+1}(x_j) = 0
The results of the arbitrary load integrals (using the F(x) - F(a) method) form the result vector B. Keeping the symbolic complexity of the load confined to that vector ensures the system remains a stable linear solve no matter how complicated the load function is.
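A toy version of this assembly with made-up deflection shapes and boundary conditions (none of this is existing Beam code):

```python
import sympy as sp

x = sp.symbols('x')
C1, C2, C3, C4 = sp.symbols('C1:5')
xj = 1  # junction between the two segments

# Hypothetical deflections on either side of the junction
y1 = x**2 + C1*x + C2
y2 = sp.sin(x) + C3*x + C4

eqs = [
    sp.Eq(y1.subs(x, xj), y2.subs(x, xj)),                          # C^0 continuity
    sp.Eq(sp.diff(y1, x).subs(x, xj), sp.diff(y2, x).subs(x, xj)),  # C^1 continuity
    sp.Eq(y1.subs(x, 0), 0),  # example boundary conditions to close the system
    sp.Eq(C1, 0),
]

# Assemble A . C = B and solve the linear system for the constants
A, B = sp.linear_eq_to_matrix(eqs, [C1, C2, C3, C4])
consts = A.LUsolve(B)
```

The transcendental parts of the load (here sin(x)) end up only in B, so the solve itself stays a plain linear system.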

2. Making the logic usable across the physics modules
I am planning to implement this Transcendental-to-Piecewise transformer as a standalone utility in sympy.utilities. This allows the exact-integration logic developed here to be leveraged by other modules (e.g., physics.mechanics or control) that might need to integrate non-polynomial singularities. While the utility would reside in sympy.utilities, its design is strictly dictated by the requirements of Euler-Bernoulli beam theory (specifically the need for C^1 continuity in transcendental integration); it is a physics-driven enhancement to the SymPy core. The main point is to ensure this isn't localized code and can be tracked and verified easily by the maintainers.

3. Resolving the placement ambiguity and pipeline integration
To address issue #26639, I propose a structural pre-processor that separates beam geometry from the integration engine. Before any calculus starts, symbolic positions (such as L/2 or k*L) are normalised into a stable, dictionary-indexed state by a coordinate mapper. This changes the logic from symbolic comparisons to identity-based mapping, ensuring a clear separation of numerical and symbolic inputs.

This pass completes my Unified Symbolic Pipeline: Normalization (Placement) --> Transformation (Global Utility) --> Assembly (Matrix Solver).

I would be grateful if you could give this a final look while I refine both my proposal and my demo before submitting.
