The first model (ProteinMPNN) appears to be more or less the reverse of AlphaFold, as John says. Given the 3D structure of a protein, it attempts to find a sequence of amino acids that would fold into that shape. Algorithms already existed in this domain, but deep learning improved the accuracy (sequence recovery) from 32.9% to 52.4%. I have no idea how "good enough" the latter number is, or whether there is more room for improvement with deep learning alone.
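As I understand it, those percentages are a sequence recovery rate: the fraction of positions where the designed sequence matches the native one. A minimal sketch of that metric (the sequences below are made up for illustration, not from the paper):

```python
# Toy illustration of the sequence recovery metric behind numbers like
# 32.9% and 52.4%: the fraction of aligned positions where the designed
# amino-acid sequence agrees with the native one.

def sequence_recovery(native: str, designed: str) -> float:
    """Fraction of aligned positions where the two sequences agree."""
    if len(native) != len(designed):
        raise ValueError("sequences must be the same length")
    matches = sum(1 for a, b in zip(native, designed) if a == b)
    return matches / len(native)

# Hypothetical native vs. designed sequence, differing at 2 of 10 positions.
native = "MKTAYIAKQR"
designed = "MKSAYIGKQR"
print(f"{sequence_recovery(native, designed):.1%}")  # prints "80.0%"
```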
Then the second paper is about a generative model that produces a protein given some high-level features (number of protomers and protomer length). My understanding is that this allows one to design a "key" that fits some specific "lock". Generative models are capable of learning the mapping from one feature space to another (for example, from images of apples to textual descriptions of apples, e.g. "big red apple") and then operating in reverse. Here they must have trained the model on known 3D structures and the associated features they are interested in, and can now "hallucinate" new molecules from sets of features that the model has never seen before, but that allow for some novel application.
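Just to make the interface concrete, here is a mock of the conditioning idea only (features in, protein out); this is not the paper's model, and everything in it (function name, random homo-oligomer output) is invented for illustration:

```python
# Mock of a conditional generator's interface: given the two high-level
# features mentioned above (number of protomers and protomer length),
# emit a candidate protein. A real model would sample a plausible design;
# here we just produce a random homo-oligomer to show the shape of the API.
import random

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"  # the 20 standard amino acids

def mock_generate(num_protomers: int, protomer_length: int, seed: int = 0) -> list[str]:
    """Return `num_protomers` identical chains of `protomer_length` residues."""
    rng = random.Random(seed)
    chain = "".join(rng.choice(AMINO_ACIDS) for _ in range(protomer_length))
    return [chain] * num_protomers  # identical chains, i.e. a homo-oligomer

chains = mock_generate(num_protomers=3, protomer_length=8)
print(len(chains), len(chains[0]))  # prints "3 8"
```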
Telmo