That's a known issue with dependent services that make up a whole system, microservices or not, when the tests live in separate repositories. And it isn't limited to testing: it can affect the app code itself when it's grouped and distributed across repos the same way.
A possible solution is dependency management: construct a central repository for (test) artifacts. How you do it is up to you and may also depend on your technology stack. You could have it as you say, with test code (POMs and test cases) stored in their respective microservice repos. On successful builds, the test artifacts (the source code if scripting-language based, or compiled binary assets such as JARs, Java class files, .NET DLLs, etc.) are archived to a central repository just like app code, as versioned releases. On test runs, the integration test repo, as part of the test pipeline, pulls the test artifacts it needs from the central asset repo. This can be version managed as well, if the integration tests need to run against specific versions of each microservice (otherwise, use the latest available for each).

If the test stack is Java, you could do this with Maven and internally hosted Maven repos. There are equivalents for other languages: NuGet for .NET, RubyGems for Ruby, pip and Python packages, npm for Node.js, CPAN for Perl. Repository managers such as Sonatype Nexus or JFrog Artifactory can host private repos for most of these ecosystems; even without one, you could still package the asset into the bundle format normally distributed via the public repo, host the files internally over HTTP, wget/curl them from the integration test pipeline, and install them with your language platform's package installer. In essence, treat your POMs as if they were distributable libraries for your programming language, only instead of using them to build apps or as utility libraries, you're using them to test apps.
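To make that publish/pull flow concrete, here's a minimal sketch using nothing but tar and a local directory standing in for the central artifact repo. The serviceA name, the 1.4.0 version, and all paths are invented for illustration:

```shell
#!/bin/sh
# Sketch: publish versioned test artifacts from a microservice build,
# then pull them in the integration-test pipeline. A plain directory
# stands in for the central artifact repo here.
set -e
rm -rf /tmp/artifact-demo && mkdir -p /tmp/artifact-demo && cd /tmp/artifact-demo

# --- microservice build side: on a successful build, archive the test code ---
mkdir -p serviceA-tests
echo "login POM" > serviceA-tests/login_pom.txt   # stand-in for a real POM/test file
mkdir -p artifact-repo
tar -czf artifact-repo/serviceA-tests-1.4.0.tgz serviceA-tests

# --- integration pipeline side: pull the version it needs and unpack it ---
mkdir -p e2e/serviceA
# real pipeline would instead do something like:
#   curl -sO http://internal-host/artifacts/serviceA-tests-1.4.0.tgz
tar -xzf artifact-repo/serviceA-tests-1.4.0.tgz -C e2e/serviceA --strip-components=1
```

Swapping the local tar read for curl/wget against an internal host (or for mvn/npm/pip against a repository manager) gives the real pipeline; the publish and consume halves stay the same shape.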
There would be no test code change in terms of code logic with this solution. The only change is how you structure the end-to-end test code's file tree: it becomes a master tree with placeholder directories that house each microservice's POMs and test cases. At test runtime, the pipeline pulls the missing or outdated files into those directories by installing the missing/dependent (test) packages into them. The directories don't even have to exist beforehand; they can be created on first use (say, for a newly created microservice). Only the end-to-end test cases need to be aware of the dependent POMs and test cases they call (in the file tree).
Think of this in Unix terms as symlinks from the end-to-end test repo to the microservice repo sources, or shortcuts on Windows. In fact it could be done that way for test code that is pure source (no compiling needed, though it also works for compilable tests if you compile at end-to-end test runtime), with source repo updates before running the end-to-end tests. Here's a sample illustration:
End-to-end test root
|-- end-to-end test suite(s)
|   |-- end-to-end test case directories
|       |-- end-to-end test cases
|-- microservice A
|   |-- POMs (via directory structure)
|   |-- microservice A test cases
|-- microservice B, and so on
In the clean state (as stored in source control), the end-to-end test repo's microservice directories are empty. On each run, the end-to-end test pipeline does this as a prerequisite:
1. update the local working copy of the end-to-end tests, pulling from source control as needed
2. go into each microservice directory
3. run git pull origin master (against that microservice's repo in source control) to fetch the latest changes, or git clone (microservice repo) if the directory is in its default clean, empty state. With other version control tools the process is similar (SVN, etc.)
When that's all done, the tests can execute; all the dependent microservice test code has been fetched. And if no updates were made or needed, git pull (or the equivalent in other tools) does nothing, so you don't waste time copying unchanged files.
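The prerequisite steps above can be sketched as a clone-or-pull loop. The services/ layout and the repos.txt manifest (one "&lt;dir&gt; &lt;git-url&gt;" pair per line) are made-up conventions for illustration, and the throwaway upstream repo just stands in for a real microservice repo in source control:

```shell
#!/bin/sh
# Sketch of the prerequisite sync step for the end-to-end test pipeline.
set -e

# demo setup: a throwaway local repo standing in for a microservice repo
rm -rf /tmp/e2e-sync && mkdir -p /tmp/e2e-sync/upstream && cd /tmp/e2e-sync/upstream
git init -q
echo "smoke test" > serviceA_test.txt
git add . && git -c user.email=ci@example.com -c user.name=ci commit -qm "test code"

cd /tmp/e2e-sync
echo "serviceA file:///tmp/e2e-sync/upstream" > repos.txt

# clone-or-pull each microservice's test code into its placeholder directory
mkdir -p services
while read -r name url; do
  if [ -d "services/$name/.git" ]; then
    git -C "services/$name" pull -q --ff-only   # no-op if nothing changed
  else
    git clone -q "$url" "services/$name"        # first run / clean empty state
  fi
done < repos.txt
```

Re-running the loop is cheap: on the second and later runs the pull side is a no-op unless the microservice's test code actually changed.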
FYI, version management with this type of solution is straightforward for packages and assets (GitHub releases, Maven, gems, Python packages, npm). For plain (git) source code, you can get the same effect by checking out a tag or a specific commit rather than the branch head. Either way you won't have the issue of promoting the microservice test changes up to integration too soon.
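Here's a minimal sketch of pinning source-level test code with a git tag; the v1.2.0 tag, the paths, and the file contents are all invented for illustration:

```shell
#!/bin/sh
# Sketch: pin a microservice's test code to a released version via a git tag.
set -e
rm -rf /tmp/e2e-pin && mkdir -p /tmp/e2e-pin/upstream && cd /tmp/e2e-pin/upstream
git init -q
echo "old test" > case.txt
git add . && git -c user.email=ci@example.com -c user.name=ci commit -qm "release"
git tag v1.2.0                                   # microservice build tags the release
echo "new test" > case.txt
git add . && git -c user.email=ci@example.com -c user.name=ci commit -qm "unreleased"

# integration pipeline: clone, then check out the pinned tag instead of HEAD
cd /tmp/e2e-pin
git clone -q upstream serviceA-tests
git -C serviceA-tests checkout -q v1.2.0
```

After the checkout, the pipeline sees the tagged test code ("old test" here), not the newer unreleased commit, which is exactly the "don't promote too soon" behavior.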
I guess in a nutshell you could say: treat your test code the same way you treat the app code in how you run the pieces separately and yet integrate them together. Use the same techniques. Your overall test framework is just another app composed of (test) microservices.