INP, by definition, needs an interaction to measure. PSI does a simple load of the page, without any interactions. Even if PSI simulated a basic interaction, it couldn't simulate every interaction on your page. And if it just clicked at random, there's no telling whether that random click is typical of your INP events.
This is why INP needs to be measured in the field from real users. And PSI does show INP at the top of the page based on field data.
The lower part, based on Lighthouse, which is what I presume you are asking about, does not include INP. Lighthouse can measure INP when run through DevTools
using timespan mode, which can be useful in CI to monitor key workflows, but even then it's a guess as to which interactions users will actually perform. Since that has to be set up on a site-by-site basis, with knowledge of your own flows, it's unlikely to be added to PSI.
Total Blocking Time (TBT) is the nearest you can get in a lab-based environment and
can be used as a basic proxy for INP. It measures how much the main thread was blocked during the Lighthouse run. It does
correlate with INP: the more blocking, the greater the likelihood of INP issues if a user interacts during that time. However, it is still not a perfect match. Users may not interact at all during that busy period, as they wait for the page to load, so their interactions may end up fast. Or there may be little blocking time during load, yet interactions are still slow because the event handlers themselves are slow.
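To make the "proxy" relationship concrete, here is a minimal sketch of how TBT is computed: it sums the portion of each long main-thread task beyond the 50 ms threshold (in the real metric, only tasks between First Contentful Paint and Time to Interactive count — the task durations below are illustrative numbers, not real measurements):

```python
# A task is "long" if it exceeds 50 ms; only the excess over 50 ms counts as blocking.
LONG_TASK_THRESHOLD_MS = 50

def total_blocking_time(task_durations_ms):
    """Sum the blocking portion (duration - 50 ms) of each long main-thread task."""
    return sum(
        duration - LONG_TASK_THRESHOLD_MS
        for duration in task_durations_ms
        if duration > LONG_TASK_THRESHOLD_MS
    )

# Example: a 30 ms task contributes nothing; 70 ms and 250 ms tasks
# contribute 20 ms and 200 ms respectively.
print(total_blocking_time([30, 70, 250]))  # → 220
```

This also shows why the correlation is imperfect: TBT only tells you the main thread was busy, not whether a user actually tried to interact during one of those long tasks.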
INP can show at URL level in the field data at the top, if the page gets sufficient samples in CrUX. Since not all users interact with a page, it's not unusual to see sufficient data for the other metrics (LCP, CLS) but N/A for INP. In that case, looking at origin-level data, or page-grouping-level data in Google Search Console, is the best you can do.
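If you want to check this fallback programmatically rather than through the PSI UI, the CrUX API accepts either a `url` or an `origin` in its query body. Below is a hedged sketch that just builds the request payload (the endpoint and the `interaction_to_next_paint` metric key are my understanding of the public CrUX API — verify against the current docs, and note you'd still need to POST this with your own API key):

```python
import json

# Public CrUX API endpoint (append ?key=YOUR_API_KEY when POSTing).
CRUX_ENDPOINT = "https://chromeuxreport.googleapis.com/v1/records:queryRecord"

def crux_request_body(page_url=None, origin=None):
    """Build the JSON body for a CrUX query.

    Use page_url for URL-level data; fall back to origin when the
    individual page lacks sufficient INP samples.
    """
    body = {"metrics": ["interaction_to_next_paint"]}
    if page_url:
        body["url"] = page_url
    elif origin:
        body["origin"] = origin
    else:
        raise ValueError("Provide either page_url or origin")
    return json.dumps(body)

# URL-level query first; if the response reports no INP data,
# retry with origin= instead.
print(crux_request_body(page_url="https://example.com/some-page"))
```

The same try-URL-then-origin pattern is effectively what you do manually when a page shows N/A for INP in PSI.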