Depends what you are trying to do. When I worked on a waterfall team and had to automate my test cases, I liked the idea of each test being independent. I would write the test case documentation so that there were preconditions (setup), but if you read the preconditions and the steps for any single test case, you could execute it using only the information provided. So when I automated it, each test would open the browser, do the setup, execute the test, close the browser, and tear down the setup. The nice thing about this was that when I ran the test cases, more of them actually got executed on any given run; one failure did not impact the other test cases.
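For example, here is a minimal sketch of that per-test setup/teardown style, assuming JUnit 4 and Selenium WebDriver (the class name and URLs are made up for illustration):

```java
import org.junit.After;
import org.junit.Before;
import org.junit.Test;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;
import static org.junit.Assert.assertTrue;

public class IndependentAccountTests {

    private WebDriver driver;

    @Before
    public void setUp() {
        // Open the browser and do the preconditions for *this* test only.
        driver = new FirefoxDriver();
        driver.get("https://example.com/login");   // hypothetical URL
    }

    @Test
    public void userCanViewAccountSummary() {
        driver.get("https://example.com/account"); // hypothetical URL
        assertTrue(driver.getTitle().contains("Account"));
    }

    @After
    public void tearDown() {
        // Close the browser and undo the setup so the next test starts clean.
        driver.quit();
    }
}
```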
As a manual tester, if I could not test everything fast enough I could hire another tester. They would take half the tests, I'd take the other half, and we'd get it done twice as fast. Translate this to automated testing: because I made the test cases independent when I documented/automated them, I could run them in parallel on two machines. If I needed things tested faster, I could hire more testers for manual testing, or, if the tests were automated, run them on more computers.
So in theory this seemed to be the way to go. There were also a lot of good practices from unit testing that I could apply to automated testing when I used xUnit frameworks.
But the reality was that management would not always hire 10 testers to execute things in parallel. Even if they did, there was a level of coordination required; at some point the overhead of managing many testers would make it non-viable. In theory it should scale, but in reality it just would not. The same is true of automating parallel execution. As a manual tester I might have a test plan where each test is written to be independent, but I would look over the entire test plan and group tests that I could execute together. So I might open the browser, run a dozen tests, then close the browser. In other cases I would open the browser, leave it open, execute all the test cases, then close the browser (use @BeforeClass and @AfterClass rather than @Before and @After when using an xUnit framework; see the sketch below). Since I was doing this when manual testing, why not do the same thing when automating?
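A minimal sketch of that "open the browser once, run the group, close it" style, again assuming JUnit 4 and Selenium WebDriver, with hypothetical URLs and class names:

```java
import org.junit.AfterClass;
import org.junit.BeforeClass;
import org.junit.Test;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;
import static org.junit.Assert.assertTrue;

public class GroupedReportTests {

    private static WebDriver driver;

    @BeforeClass
    public static void openBrowserOnce() {
        // One browser for the whole class, just as I would do when testing manually.
        driver = new FirefoxDriver();
        driver.get("https://example.com/login");            // hypothetical URL
    }

    @Test
    public void dailyReportLoads() {
        driver.get("https://example.com/reports/daily");    // hypothetical URL
        assertTrue(driver.getTitle().contains("Daily"));
    }

    @Test
    public void monthlyReportLoads() {
        driver.get("https://example.com/reports/monthly");  // hypothetical URL
        assertTrue(driver.getTitle().contains("Monthly"));
    }

    @AfterClass
    public static void closeBrowser() {
        // Close the browser once, after the whole group has run.
        driver.quit();
    }
}
```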
Now I have worked on projects where we realized that things were getting cached, or the order of execution after logging in made a difference. If we had been opening and closing the browser before certain tests, they would have behaved differently. So as a manual tester I knew that one group of tests could be run using "open browser once, run all the tests, close the browser," but other tests had to be run differently. When I automated the tests I would categorize them. In the documentation I would put a 'tag' or 'note' on the tests which could be run without opening/closing the browser. When I automated them I could (a) put them in a different package space or (b) add a @Category annotation to them. How I documented them didn't matter so long as the testers knew the convention. How I categorized the automation didn't matter so long as the automation framework knew the convention.
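A sketch of option (b), using JUnit 4 categories; the SharedBrowserSafe marker interface is just a hypothetical name for the convention:

```java
import org.junit.Test;
import org.junit.experimental.categories.Category;

// Marker interface used purely as a category label.
interface SharedBrowserSafe {}

public class SearchTests {

    @Category(SharedBrowserSafe.class)
    @Test
    public void searchByKeyword() {
        // Tagged: safe to run without reopening the browser between tests.
    }

    @Test
    public void searchAfterFreshLogin() {
        // Not tagged: this one needs a clean browser/session each run.
    }
}
```

A JUnit 4 Categories suite (@RunWith(Categories.class) with @IncludeCategory(SharedBrowserSafe.class)) can then pick up only the tagged tests, so the framework knows the convention just like the testers do.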
What goal are you trying to achieve, or what problem are you trying to solve? How can you do it manually? Is there an equivalent through automation? This is how I have grown my automation frameworks over the years.