Hi Suhaas, welcome! By choosing to try to write sensible test cases and not focusing just on test coverage, I think you will be on the right path to building a test suite that helps you and your team improve your software.
When you say you want to generate test cases, what do you mean? Do you mean you want to pass some production source code files to a tool and have it emit files containing JUnit test case classes? Or are you looking for code libraries that help programmers write their test cases by generating example inputs and outputs? Or something else?
If it's the former (generating test code from scratch given production source files), I've never found a tool that could both a) create cases that made sense for my problem domain, and b) generate tests that were clear in their intent and easy to understand and maintain. I haven't tried that many, so perhaps this has changed. When I write tests I want them to be as clear as possible so I understand what they mean later, focused on specific cases so they don't break when unrelated logic changes, and, when they fail, pointing clearly to one problem. I'll admit I'm skeptical that a tool can do this well, but if one can, I'd love to hear about it! :)
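To make that concrete, here's a sketch of what "clear and focused" can look like. DiscountCalculator is a made-up example class, and the tests use a plain check helper instead of JUnit's assertEquals so the file runs standalone; real JUnit tests would look the same, just with @Test annotations.

```java
public class DiscountCalculatorTest {

    // Hypothetical unit under test: 10% off orders of 100.00 or more.
    static class DiscountCalculator {
        double apply(double total) {
            return total >= 100.0 ? total * 0.9 : total;
        }
    }

    // One behavior per test, named after the business rule it checks,
    // so a failure points at exactly one problem.
    static void ordersUnderTheThresholdAreNotDiscounted() {
        check(new DiscountCalculator().apply(99.99), 99.99);
    }

    static void ordersAtTheThresholdGetTenPercentOff() {
        check(new DiscountCalculator().apply(100.0), 90.0);
    }

    public static void main(String[] args) {
        ordersUnderTheThresholdAreNotDiscounted();
        ordersAtTheThresholdGetTenPercentOff();
        System.out.println("all tests passed");
    }

    static void check(double actual, double expected) {
        if (Math.abs(actual - expected) > 1e-9)
            throw new AssertionError("expected " + expected + " but got " + actual);
    }
}
```

Notice that each test touches exactly one rule -- if the threshold logic changes, only the tests about the threshold break.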
There's another type of testing library where the programmer writes specifications for a program / object / unit of production code and the testing library attempts to generate inputs that either prove or disprove those specifications. The most popular one I know of is QuickCheck for the Haskell programming language (https://wiki.haskell.org/Introduction_to_QuickCheck), which has been ported to many languages. In the JVM world, there's a Scala port called ScalaCheck (https://www.scalacheck.org/), which may work for your team if you can use Scala for your testing code. I'm not sure if there's an equivalent Java version.
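The core idea can be sketched without any library at all -- this is a hand-rolled illustration of what QuickCheck and ScalaCheck automate (including things like failure shrinking, which I leave out), not their actual API. It states two properties of Arrays.sort and throws many random inputs at them.

```java
import java.util.Arrays;
import java.util.Random;

public class PropertySketch {
    public static void main(String[] args) {
        Random rng = new Random(42); // fixed seed for reproducibility
        for (int trial = 0; trial < 1000; trial++) {
            // Generate a random input: up to 19 ints in [-1000, 1000).
            int[] input = rng.ints(rng.nextInt(20), -1000, 1000).toArray();
            int[] sorted = input.clone();
            Arrays.sort(sorted);

            // Property 1: every element is <= the one after it.
            for (int i = 1; i < sorted.length; i++) {
                if (sorted[i - 1] > sorted[i])
                    throw new AssertionError("not ordered: " + Arrays.toString(sorted));
            }

            // Property 2: sorting is idempotent.
            int[] again = sorted.clone();
            Arrays.sort(again);
            if (!Arrays.equals(sorted, again))
                throw new AssertionError("not idempotent: " + Arrays.toString(input));
        }
        System.out.println("1000 random cases passed");
    }
}
```

The libraries add a lot on top of this (shrinking a failing input to a minimal counterexample is the big one), but the shape is the same: a property plus a generator replaces a pile of hand-picked examples.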
You had mentioned you wanted to improve test code coverage from, say, 15% to 50%. One way I've found to start gaining confidence that my production code is covered by tests (and to make dramatic gains in code coverage) is to begin by writing a few end-to-end tests that exercise the happy paths of my system. These would interact via a web browser in the case of a web application, or through an HTTP endpoint in the case of a web service. These tests tend to cover large swaths of the code and start to give me confidence that I have a safety net to catch errors.
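Here's a sketch of what an HTTP-level happy-path test can look like, using only the JDK (java.net.http and the built-in com.sun.net.httpserver). The tiny in-process server and its /health endpoint are stand-ins for your real service; in a real test you'd point the client at a deployed instance.

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class HappyPathE2ETest {
    public static void main(String[] args) throws Exception {
        // Stand-in for the system under test: one endpoint, port 0 = any free port.
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        server.createContext("/health", exchange -> {
            byte[] body = "ok".getBytes();
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) { os.write(body); }
        });
        server.start();
        int port = server.getAddress().getPort();
        try {
            // The test itself: drive the system from the outside and
            // assert only on externally visible behavior.
            HttpClient client = HttpClient.newHttpClient();
            HttpResponse<String> resp = client.send(
                HttpRequest.newBuilder(URI.create("http://localhost:" + port + "/health")).build(),
                HttpResponse.BodyHandlers.ofString());
            if (resp.statusCode() != 200 || !resp.body().equals("ok"))
                throw new AssertionError("happy path failed: " + resp.statusCode());
            System.out.println("happy path passed");
        } finally {
            server.stop(0);
        }
    }
}
```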
The danger I've seen with this approach is that end-to-end tests are often very hard to write in a way that makes them consistent -- they tend to be flaky, and when one fails it's not always obvious which code was wrong. That's why I'd write just a few and then focus on writing more tests that work with lower-level objects in the system - and when something is hard to test, I look for ways to change the code that make it easier to test, which often also improves the design of the system by reducing coupling, introducing domain concepts, etc.
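A small sketch of what I mean by testing lower-level objects: ShippingPolicy is a hypothetical rule that might originally have lived inside a controller or servlet. Once it's extracted into a plain object, you can exercise it directly -- no browser, server, or database needed, so the test is fast and never flaky.

```java
public class LowerLevelTest {

    // Hypothetical domain rule, extracted so it's a pure function of
    // its inputs rather than buried in the HTTP layer.
    static class ShippingPolicy {
        boolean qualifiesForFreeShipping(double orderTotal, boolean isMember) {
            return isMember || orderTotal >= 50.0;
        }
    }

    public static void main(String[] args) {
        ShippingPolicy policy = new ShippingPolicy();
        check(policy.qualifiesForFreeShipping(50.0, false), "threshold order ships free");
        check(!policy.qualifiesForFreeShipping(49.99, false), "below threshold is paid");
        check(policy.qualifiesForFreeShipping(1.0, true), "members always ship free");
        System.out.println("all tests passed");
    }

    static void check(boolean condition, String description) {
        if (!condition) throw new AssertionError(description);
    }
}
```

Extracting the rule is exactly the kind of design improvement I mentioned: the code gets easier to test *and* the domain concept gets a name.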
Cheers,
Dan