Sorry, what I meant was: when I use assume(abs(x) < 1) but still plug a value of x greater than 1 into the function. For example, f(1.5) runs fine even when I have assume(abs(x) < 1).
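A minimal sketch of what I mean, written in SymPy rather than Sage (SymPy guards the closed form with a Piecewise instead of a global assume(), so this is only a parallel, not the actual Sage/Maxima internals):

```python
from sympy import symbols, Sum, oo, Rational

x, n = symbols('x n')

# The geometric series sum(x^n, n, 0, oo) only has a closed form for |x| < 1;
# SymPy returns that closed form 1/(1 - x) guarded by the condition Abs(x) < 1.
s = Sum(x**n, (n, 0, oo)).doit()
print(s)

# Once the sum has been replaced by the closed form -1/(x - 1),
# substituting x = 3/2 is plain arithmetic -- nothing re-checks the assumption.
closed_form = -1/(x - 1)
print(closed_form.subs(x, Rational(3, 2)))  # -2
```

This matches what you describe: the assumption is used once, to pick the closed form, and after that the function is just -1/(x - 1), which happily accepts any input.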
Here is a CoCalc sheet with the code:
Yes, exactly! This is what was confusing me. When I run my original code with assume(abs(x) < 1) and then define my function, it seems to automatically define it as f(x) = -1/(x - 1). Originally I was just curious how it knew to do this substitution (though I assumed it was some built-in substitution/simplification). But I didn't understand why it let me plug in values of x greater than 1, when I had defined the function to take inputs less than 1. I don't think this is a bad thing; I just thought it was interesting and was curious about it.

But then I tried integrating over the divergence at 1, taking the integral from 0 to 2, and I expected it to tell me the integral was divergent, but the result was -I*pi. I was even more intrigued by this, because I assume it is substituting in some sort of analytic continuation, but I wasn't sure how exactly, since the function I originally defined as an infinite sum had obviously been replaced by f(x) = -1/(x - 1), which clearly has a divergent integral over these limits.

Why is it that when I integrate 1/(1 - x) with these limits it is divergent, but when I integrate the function I defined originally it is not? Why does the substitution happen in the second case and not the first, and how is this substitution happening?
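To make the contrast concrete, here is a small SymPy sketch of where I suspect the -I*pi could come from (this is my guess at the mechanism, not Sage's actual code path): the real integral genuinely diverges at the pole, but blindly evaluating the antiderivative -log(1 - x) at the endpoints 0 and 2 steps over the branch cut of the log, and log(-1) = I*pi is exactly the kind of analytic-continuation artifact you describe.

```python
from sympy import symbols, limit, log, oo, I, pi

x = symbols('x')
F = -log(1 - x)  # antiderivative of 1/(1 - x)

# On the real line the integral from 0 toward the pole at x = 1 diverges:
print(limit(F, x, 1, dir='-'))  # oo

# But naively evaluating F at the endpoints 0 and 2 lands on log(-1) = I*pi:
print(F.subs(x, 2) - F.subs(x, 0))  # -I*pi
```

So a CAS that convergence-checks the explicit 1/(1 - x) reports divergence, while one that works through the closed form of the sum (or just plugs endpoints into an antiderivative on some complex branch) can hand back a finite complex number like -I*pi.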
Thanks!