I completely agree with Konstantinos Mitsopoulos' answer.
In addition, among the available MOOCs, I believe Martha and Adam White's Coursera RL specialization has a very good reputation.
To play with RL algorithms as black boxes, probably the most widely used library is Stable Baselines3:
As the name implies, it is very useful if you want to compare your algorithm to the literature without having to tune the hyperparameters, but it may not be a good choice if you want to learn RL by coding the algorithms yourself.
Besides, if I may self-advertise, I have a YouTube channel on RL that gets good feedback from beginners (or maybe just from overly polite people :)).
A first playlist covers the basic concepts in the tabular case, close to the content of the 1998 edition of Sutton and Barto's book, but it continues up to DQN, DDPG and a quick look at TD3:
A second playlist takes more of a policy search and policy gradient view, covering policy gradient methods, A2C, TRPO, ACKTR, PPO, DDPG and TD3 again, SAC and TQC:
And recently I have made available the PyTorch-based library I use for teaching how to code deep RL, designed specifically for educational purposes, named BBRL:
In the README you will find a list of notebooks for gradually learning how to code the algorithms listed above. Being a recent project, it certainly needs improvement, but I hope it may help.
Don't hesitate to send feedback.