Hey Prasanna, that is a great idea. I'm just getting started with TensorFlow and Magenta, but I have decades of experience in music, and I think part of what you're wishing for should be possible, while other parts won't be.
What should work is to get the separate tracks from the original recording and train the system with the mixed track as input and the separate tracks as training targets. If you don't have the tracks from Coldplay, you could also train it on tracks from smaller bands, where it might be easier to get hold of the files from the studio.
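Just to make that concrete, here's roughly the kind of thing I'd start from in plain TensorFlow/Keras (not Magenta code; the stem count, shapes and layer sizes are placeholders, and real data loading is left out): a network that takes the spectrogram of the mix and predicts one spectrogram per stem.

import numpy as np
import tensorflow as tf

N_STEMS = 4      # e.g. vocals, drums, bass, everything else (just a guess)
N_BINS = 513     # frequency bins of the mix spectrogram
N_FRAMES = 128   # time frames per training example

# Input: magnitude spectrogram of the full mix.
# Output: one magnitude spectrogram per stem (the "separate tracks").
inputs = tf.keras.Input(shape=(N_FRAMES, N_BINS))
x = tf.keras.layers.Dense(512, activation="relu")(inputs)
x = tf.keras.layers.Dense(512, activation="relu")(x)
stems = tf.keras.layers.Dense(N_STEMS * N_BINS, activation="relu")(x)
stems = tf.keras.layers.Reshape((N_FRAMES, N_STEMS, N_BINS))(stems)

model = tf.keras.Model(inputs, stems)
model.compile(optimizer="adam", loss="mae")

# Dummy arrays standing in for (mix, stems) pairs from real multitrack sessions.
mix = np.random.rand(8, N_FRAMES, N_BINS).astype("float32")
true_stems = np.random.rand(8, N_FRAMES, N_STEMS, N_BINS).astype("float32")
model.fit(mix, true_stems, epochs=1)

A real attempt would of course use something smarter (convolutions over time, masking the mix instead of predicting stems from scratch), but the input/output setup is the core of the idea.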
Then it depends on whether we want to extract the audio of the track or "just" the notes played. To get the audio, the system would have to take raw waveform sample data as input and output, which would be very interesting to see (I have no feel for how feasible that is). If you want to extract the notes (i.e. MIDI) from the track, that touches the more familiar realm of extracting note frequencies from a song.
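For the note-extraction direction, the most naive version (dominant frequency per frame, mapped to a MIDI note number) fits in a few lines of NumPy. This only works for simple monophonic material; real polyphonic transcription of a full Coldplay track is a much harder problem:

import numpy as np

def frame_to_midi(frame, sample_rate):
    # Find the strongest frequency in one short audio frame and convert it
    # to the nearest MIDI note number.
    spectrum = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sample_rate)
    peak = freqs[np.argmax(spectrum[1:]) + 1]   # skip the DC bin
    return int(round(69 + 12 * np.log2(peak / 440.0)))

# Example: a 440 Hz sine should come out as MIDI note 69 (A4).
sr = 44100
t = np.arange(2048) / sr
print(frame_to_midi(np.sin(2 * np.pi * 440 * t), sr))   # -> 69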
As for the audio FX and the mix: you will never be able to extract these. It's like baking a cake and then trying to get a raw egg back out of it: too much entropy, too much information "lost". But what should be possible is to extract the general type of effect (reverb, delay, chorus, distortion), which would already help. I suppose you would have to train the system on what certain effects sound like (or "look like" in the waveform).
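That part would basically be a small classifier trained on labelled examples of effected audio, something along these lines (features, sizes and labels are placeholders; you'd feed it real spectrograms of dry vs. effected takes):

import numpy as np
import tensorflow as tf

EFFECTS = ["reverb", "delay", "chorus", "distortion"]

model = tf.keras.Sequential([
    tf.keras.Input(shape=(128, 128)),        # e.g. a log-mel spectrogram patch
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dense(len(EFFECTS), activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

# Dummy data in place of real (spectrogram, effect label) pairs.
x = np.random.rand(16, 128, 128).astype("float32")
y = np.random.randint(0, len(EFFECTS), size=16)
model.fit(x, y, epochs=1)
print(EFFECTS[int(np.argmax(model.predict(x[:1])))])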
All in all very interesting. I'd be willing to try to produce waveform sample output from waveform sample input – if anyone wants to jump in.
Cheers,
Matthias