A common problem in a computer laboratory is that of finding linear least squares solutions. These problems arise in a variety of areas and contexts. Linear least squares problems are particularly difficult to solve because they frequently involve large quantities of data and are ill-conditioned by their very nature. In this paper, we consider stable numerical methods for handling these problems. Our basic tool is a matrix decomposition based on orthogonal Householder transformations.
Reproduction in whole or in part is permitted for any purpose of the United States Government. This report was supported in part by Office of Naval Research Contract Nonr-225(37) (NR 044-11) at Stanford University.
Doesn't look too complicated, does it? Now to the weird thing: I believe that u = Tanh[x/Sqrt[2]] is a solution to Laplacian[u, x] + (1 - u^2)*u == 0 with the boundary conditions u[-Infinity] == -1 && u[Infinity] == 1. Mathematica doesn't seem to be able to deliver any solution, though. Well, I might be wrong about my guessed solution, so I just plugged it into the equation to see how far off I am.
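For what it's worth, the guess can be verified symbolically. Here is a sketch in Python with SymPy rather than Mathematica (a similar check should work in Mathematica with FullSimplify):

```python
import sympy as sp

x = sp.symbols('x', real=True)
u = sp.tanh(x / sp.sqrt(2))

# Residual of the 1-D equation u'' + (1 - u^2) u = 0 for the guess u = tanh(x/sqrt(2))
residual = sp.diff(u, x, 2) + (1 - u**2) * u
print(sp.simplify(residual))  # → 0
```

Since the residual simplifies to exactly 0, the guess really is an exact solution, and any nonzero values seen when evaluating the residual in floating point must be rounding noise.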
The plot looks noisy, and the values are so tiny that I suspect a numerical artifact here (and that tanh(x/sqrt(2)) actually is a solution). Is that possible? Why does it happen? Could that also be the reason why Mathematica is unable to solve for u with the stated boundary conditions?
Thanks for the replies. So is there no way Mathematica can solve this equation for the boundary conditions and give me tanh(x/sqrt(2))? Do I have to guess all my solutions for specific boundary problems? ;)
Even though I luckily guessed the solution for the 1-dimensional case, I was not so lucky with the 2- and 3-dimensional cases (where u^2 becomes u.u and the boundary condition at x^2+y^2+z^2 == Infinity is u == {x, y, z}/Sqrt[x^2 + y^2 + z^2]). Is there a way to solve this problem with Mathematica?
Yes, using spherical coordinates would certainly make sense, as the solution must be spherically symmetric. I just wasn't sure how to do it with Mathematica (I'm still a beginner). And to reformulate the boundary conditions: basically, vectors at r = Infinity should have length 1 and point away from the origin, while at the origin (r = 0) the vector length is 0. How could I formulate that for Mathematica? Though I'm still struggling with the 1D case (see my comment on your next post).
In 2- and 3-D, the equation $\frac{d^2u(x)}{dx^2}+(1-u(x)^2)\,u(x)=0$ has no obvious generalization for a vector-valued function $\vec{u}(\vec{x})$, because the Laplace operator $\Delta=\nabla\cdot\nabla$, the divergence of the gradient of a scalar function $u(\vec{x})$, does not apply directly. If one applies the gradient to a vector-valued function, a matrix-valued function (the Jacobian) appears ...
Hmm, I don't really understand what you are saying. This equation is the Ginzburg-Landau equation, which works in any dimension (so u yields a 3-vector for every given x, which is also a 3-vector). So in 3 dimensions it acts on a 3-D vector field. The Laplacian of a 3-D vector field is again a 3-D vector field (in Cartesian coordinates it is just the scalar Laplacian applied componentwise), and the second term is also a 3-D vector field to be added. I know from numerical experiments that there is a solution to it. So what's the problem here?
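For what it's worth, the spherically symmetric 3-D case can at least be attacked numerically. Assuming the hedgehog ansatz u(x) = f(r)·x/r (an assumption consistent with the boundary condition above, not stated in the thread), the vector equation reduces to a radial ODE that a boundary-value solver can handle. Here is a sketch in Python with SciPy rather than Mathematica; truncating infinity to r = 20 is an approximation:

```python
import numpy as np
from scipy.integrate import solve_bvp

# Hedgehog ansatz u(x) = f(r) * x/r reduces the 3-D vector equation to
# the radial ODE  f'' + (2/r) f' - 2 f / r^2 + (1 - f^2) f = 0,
# with f(0) = 0 and f(r) -> 1 as r -> infinity.
def rhs(r, y):
    f, fp = y
    return np.vstack([fp, -2 * fp / r + 2 * f / r**2 - (1 - f**2) * f])

def bc(ya, yb):
    # f ~ 0 at the inner edge, f ~ 1 at the truncated outer edge
    return np.array([ya[0], yb[0] - 1.0])

r = np.linspace(1e-3, 20.0, 400)
guess = np.vstack([np.tanh(r / np.sqrt(2)),               # 1-D profile as a start
                   1 / np.cosh(r / np.sqrt(2))**2 / np.sqrt(2)])
sol = solve_bvp(rhs, bc, r, guess, max_nodes=20000)
print(sol.success, sol.sol(10.0)[0])  # profile approaches 1 at large r
```

The 1-D tanh profile is only used as an initial guess here; the converged radial profile differs from it because of the extra (2/r) f' and 2f/r² terms.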
Ah okay, but that's somewhat cheating, as I have to know the solution in advance. ;) If I hadn't had that lucky guess of tanh(x/sqrt(2)) in the first place, there would have been no way to derive the solution, I guess?
A very common problem is that the solver complains about numerical problems, or returns other diagnostic codes indicating trouble solving the model. There are many possible causes, ranging from benign issues that trigger minor warnings in the solver to completely disastrous problems in the model.
Solvers work in finite-precision floating-point arithmetic, which means that most computations, all the way down to addition and subtraction, are only approximations. As the solver works, these small errors add up, and in some models with bad data this can lead to failure.
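A minimal Python illustration of this accumulation, using the same double-precision arithmetic most solvers rely on:

```python
# 0.1 has no exact binary representation, so each addition carries a tiny
# rounding error; after ten additions the errors are visible in the result.
total = sum(0.1 for _ in range(10))
print(total)         # 0.9999999999999999, not 1.0
print(total == 1.0)  # False
```

The individual errors are on the order of 1e-17 each, which is harmless here, but a solver performs billions of such operations per solve.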
So what is bad data? As the saying goes, you know it when you see it; there is no precise definition of bad data in a model. Very large and very small numbers are typically the root cause, in particular when the model contains both, since a scaling strategy then can be hard for the solver to apply. This leads to the question: what are small and large numbers? Once again, there is no strict definition. Roughly speaking, the larger the spread in the orders of magnitude among the coefficients, the worse. In other words, for non-zero numbers, the further away from 0 in an absolute logarithmic measure (i.e., the further the magnitude is from 1), the worse. You can typically start to expect issues when you go below \(10^{-6}\) or above \(10^{6}\) or so. Once again though, this is not a hard rule. In a lucky situation your solver might work very well with data on the order of \(10^{12}\), but then another day it fails on data with seemingly better coefficients. It is an intricate interplay between the solver algorithms, the data, the numerical libraries, floating-point quirks, and finally the properties of the feasible set and the optimal solutions.
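A quick Python sketch of why mixing magnitudes is dangerous: with roughly 16 significant digits available, a coefficient 18 orders of magnitude below another simply disappears.

```python
# Double precision holds ~16 significant digits, so a spread of 18
# orders of magnitude makes the small coefficient vanish entirely.
big, small = 1e12, 1e-6
print((big + small) - big)  # 0.0 -- the 1e-6 contribution is lost
```

A solver mixing such coefficients in one constraint is effectively working with a different model than the one you wrote down.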
A typical issue might be that you are working in the wrong units. You are planning a trip to Mars and measuring distance in meters instead of kilometers, or you are computing energies in atoms and expressing distances in meters instead of nanometers. In control theory, a common cause is badly conditioned state-space realizations. A way to obtain better data then is to perform a balanced realization before defining the model.
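A hypothetical illustration in Python of how a unit change affects conditioning (this is plain diagonal column scaling, not a balanced realization, and the matrix is made up):

```python
import numpy as np

# Made-up data: the second column is expressed in meters where
# nanometers would be natural, so its entries are ~1e9 too large.
A = np.array([[1.0, 2.0e9],
              [2.0, 3.0e9]])
print(np.linalg.cond(A))         # enormous condition number

# Rescale the offending column, i.e. switch its unit to nanometers.
A_scaled = A @ np.diag([1.0, 1e-9])
print(np.linalg.cond(A_scaled))  # small and well-behaved
```

The underlying model is unchanged; only the units differ, yet the conditioning improves by many orders of magnitude.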
To debug this issue, you have to take a good look at your data and the process generating it. To see where you have bad data in your model, you can display the model objects; they list the smallest and largest (absolute) coefficients.
A second category of issues arises in ill-posed problems. A simple example is minimizing \(x^{-1}\) on \(x\geq 0\). A solver might run into trouble as the iterates of \(x\) diverge to infinity. This is a very common situation in control theory, where optimal state-feedback solutions can involve controllers which force some poles to \(-\infty\), which requires some decision variables to grow arbitrarily large.
To debug this issue, simply add a large bound (but not so large that it causes issues with bad data) and study the solution to see whether any variable appears to approach infinity. If some variable ends up at the bound, no matter what you set the bound to, you have most likely found the issue.
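A toy illustration of this bounding trick, sketched in Python with SciPy on the \(x^{-1}\) example above (the bound values are arbitrary choices):

```python
from scipy.optimize import minimize_scalar

# Ill-posed toy problem: minimize 1/x over x >= 0.  The infimum 0 is
# only approached as x -> infinity, so the solution escapes upward.
for big in (1e3, 1e6):
    res = minimize_scalar(lambda x: 1.0 / x,
                          bounds=(1e-6, big), method='bounded')
    print(big, res.x)  # x lands at (essentially) whatever bound we set
```

No matter how large the bound is made, the optimizer parks the variable right at it, which is the telltale sign of an escape to infinity.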
Having found the issue, you should first try to understand why some variables tend to infinity, and whether it makes sense. Maybe you have missed some constraints. Although the solution is optimal, is it good? Coming back to the control example, the solution with some poles at \(-\infty\) might be optimal, but it is often very fragile and not practically relevant. Hence, you might want to add constraints on some variables, or add a suitably selected penalty on some or all variables, to avoid this problematic escape to infinity.
A common scenario is that you define a problem and then replace a non-strict constraint with a strict one by adding some small margin to the constraint. If the problem lacks a strictly feasible solution, and you have added a margin so small that it drowns in the general tolerances of the solver, the solver can easily run into numerical problems: it thinks it is very close to solving the problem but struggles on the last bit (naturally, as the problem is infeasible).
If you suspect you are experiencing an issue with a non-strictly feasible solution space, you can solve the problem with the strictness margin as a decision variable, and then try to maximize this. If it ends up at 0 (up to expected solver tolerances) you have probably identified the issue.
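As a sketch of this margin-maximization trick, here is a hypothetical toy model in Python with SciPy (the constraints \(x \geq 0\) and \(-x \geq 0\) admit only \(x = 0\), so no positive margin exists):

```python
from scipy.optimize import linprog

# Replace each non-strict constraint with a margined one (x >= t,
# -x >= t) and maximize the margin t.  Variables: [x, t].
# linprog minimizes, so we minimize -t.
res = linprog(
    c=[0.0, -1.0],
    A_ub=[[-1.0, 1.0],    # t - x <= 0   (i.e.  x >= t)
          [ 1.0, 1.0]],   # t + x <= 0   (i.e. -x >= t)
    b_ub=[0.0, 0.0],
    bounds=[(None, None), (None, None)],  # override linprog's default x >= 0
)
print(res.x[1])  # maximal margin is 0: no strictly feasible point exists
```

Since the best achievable margin is 0 (up to solver tolerances), the strict version of this model is infeasible, which is exactly the diagnosis described above.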
A very common scenario is a model where the theory uses strict inequalities, but since these are impossible to enforce in practice we relax them to non-strict inequalities, and we obtain various mysterious warnings and diagnostics from the solver. The root cause then can be that the model as a whole is feasible only for the zero solution, i.e. the original strict variant is completely infeasible.
Some solvers will return the feasible solution \(P=0\), which naturally solves the non-strict problem (but is completely useless), while some solvers might struggle since the feasible space is a singleton.
Adding any kind of de-homogenizing constraint on \(P\) to avoid the trivial solution will render the problem infeasible, thus revealing that the original strict problem is infeasible and there is no remedy.
Generally, it refers to the difference between solving problems mathematically, which gives you the exact answer, and computing approximate answers using techniques based on numeric approximation, which let you get close to the solution sooner or more easily.
For instance, since computers can only store numbers with a finite number of bits and bytes, many solutions you obtain are only approximations. You cannot store the number known as Pi exactly, for instance, but you can use an approximation of Pi with a large number of bits.
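For example, in Python the built-in constant is just the closest 64-bit double to Pi:

```python
import math

# math.pi is the nearest double to pi, correct to ~16 significant digits
print(math.pi)  # 3.141592653589793
# The stored value differs from the true pi by less than one part in 10^15
print(abs(math.pi - 3.14159265358979323846) < 1e-15)  # True
```

For most purposes 16 digits is far more than enough, but it is still an approximation, not the number itself.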
Another example is that certain classes of problems, like partial differential equations, are very difficult to solve exactly. However, you can use methods that are known to give you something close to the answer: a numerical approximation. Typically such methods combine approximations with computing power, so you never get the exact answer, but you get something close to it.
The tags you used are all related. Numerical integration is what I just described, applied to the problem of integration. Discrete mathematics can be used to approximate infinite or continuous quantities so that solutions can be obtained numerically. Numerical methods are the collection of such techniques for solving problems numerically.
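As a small sketch of numerical integration: the trapezoidal rule applied to the integral of x^2 from 0 to 1, whose exact value is 1/3, gets closer as you add points but is never exactly there (the function and step counts are arbitrary choices):

```python
# Trapezoidal rule: approximate the area under f on [a, b] with n strips.
def trapezoid(f, a, b, n):
    h = (b - a) / n
    return h * (f(a) / 2 + sum(f(a + i * h) for i in range(1, n)) + f(b) / 2)

# Exact answer is 1/3; the approximation improves as n grows.
for n in (10, 100, 1000):
    print(n, trapezoid(lambda x: x * x, 0.0, 1.0, n))
```

The error shrinks roughly like 1/n², which is the usual trade-off of numerical methods: more computation buys a better approximation, never the exact answer.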
For applications, you need the numerical value (you need to know the area in m² to purchase fabric; you don't care about the formula that gave it). But for insight into a problem, the formulas are more useful, as they give you a feeling for trends when you change parameters in the problem setting.