Why do you need to call parallelize? That is something that really should be avoided, since it implies moving all of the data from the driver into the cluster -- and that, in turn, implies a very expensive consumption of network resources if you're working with any sizable amount of data. (And if you don't have a lot of data, then why are you using a data-oriented cluster framework in the first place?) The parallelize operation should really only be used in tests or small-scale explorations. Once you intend to use your Spark cluster in production and at scale, your code should rely neither on handling large amounts of data in the driver process nor on moving large amounts of data between the driver and the nodes.
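To make the contrast concrete, here is a minimal sketch of the round-trip anti-pattern next to the distributed alternative. It assumes a local `SparkContext` and a toy RDD of integers (both are just illustration, not anything from your code):

```scala
import org.apache.spark.{SparkConf, SparkContext}

object ParallelizeAntiPattern {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(
      new SparkConf().setAppName("demo").setMaster("local[*]"))

    // parallelize is fine here: a small, fixed test fixture.
    val rdd1 = sc.parallelize(1 to 1000000)

    // Anti-pattern: collect() pulls every element into the driver's memory,
    // and parallelize() then ships it all back out over the network.
    val roundTripped = sc.parallelize(rdd1.collect().map(_ * 2))

    // Better: the transformation runs on the workers, where the data already lives.
    val rdd2 = rdd1.map(_ * 2)

    sc.stop()
  }
}
```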
In other words, if your data are already distributed across the cluster within RDD1, then you need to find a way to transform RDD1 directly into RDD2, without moving the data back to the driver just so you can call parallelize and move it back out to the cluster again. And as Josh pointed out, creating new, nested driver processes/SparkContexts on the worker nodes is not an option. Ideally, your transformations of RDD1 will not only avoid the very expensive back-and-forth of data between the driver and the workers, but will also retain the existing partitioning of RDD1, so that no data has to move between worker nodes either (see the sketch below).
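A minimal sketch of what "retain the existing partitioning" means in practice, assuming a pair RDD that has already been hash-partitioned (the names and data here are hypothetical):

```scala
import org.apache.spark.HashPartitioner
import org.apache.spark.{SparkConf, SparkContext}

object PreservePartitioning {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(
      new SparkConf().setAppName("demo").setMaster("local[*]"))

    // A pair RDD explicitly hash-partitioned across the cluster.
    val rdd1 = sc.parallelize(Seq(("a", 1), ("b", 2), ("c", 3)))
      .partitionBy(new HashPartitioner(4))

    // mapValues leaves the keys untouched, so Spark keeps the existing
    // partitioner: later key-based operations need no shuffle.
    val rdd2 = rdd1.mapValues(_ * 10)
    println(rdd2.partitioner) // Some(HashPartitioner)

    // A plain map *could* change the keys, so Spark conservatively
    // drops the partitioner, even though this particular map doesn't.
    val rdd3 = rdd1.map { case (k, v) => (k, v * 10) }
    println(rdd3.partitioner) // None

    sc.stop()
  }
}
```

The same idea applies to mapPartitions, which takes a preservesPartitioning flag you can set to true when your function doesn't touch the keys.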
If the above doesn't make immediate sense to you, then you don't yet understand some of the fundamental concepts behind programming with Spark, and it would be worth revisiting them before going further.