In AI, "credit assignment" is the problem of distributing the overall credit (or blame) for a result among the steps involved. Back-prop in ANNs addresses a similar problem -- adjusting the weights along a path to get a desired overall result. I'm trying to use a simple example to show how it is handled in NARS.
Here is the situation: from <a --> b>, <b --> c>, and <c --> d>, the system derives <a --> d> (along with some other conclusions). If the system is now informed that <a --> d> is false, it will surely change its belief on that statement. The problem is: how much should it change its beliefs on <a --> b>, <b --> c>, and <c --> d>, and by what process?
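For concreteness, the forward derivation can be sketched with the standard NAL deduction truth-function (f = f1*f2, c = f1*f2*c1*c2), applying it twice along the chain. The function name and code layout below are my own illustration, using NARS's default input truth-value <1.0, 0.9>:

```python
# Sketch of the forward derivation, assuming the standard NAL
# deduction truth-function: f = f1*f2, c = f1*f2*c1*c2.
# The helper name is my own; inputs use the default <1.0, 0.9>.

def deduction(t1, t2):
    (f1, c1), (f2, c2) = t1, t2
    return (f1 * f2, f1 * f2 * c1 * c2)

ab = bc = cd = (1.0, 0.9)   # <a --> b>, <b --> c>, <c --> d>
ac = deduction(ab, bc)      # <a --> c>, confidence 0.81
ad = deduction(ac, cd)      # <a --> d>, confidence drops to about 0.729
```

Note how confidence decays along the chain while frequency stays at 1.0: each deduction step can only lose evidence, never gain it.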
In the attached text file, I worked out the example step by step, using the default truth-value for the inputs. In the attached spreadsheet, the whole process is coded, so you can change the input values (in green) to see how the other values change accordingly. In particular, you should try (1) giving different confidence values to <a --> b>, <b --> c>, and <c --> d>, and (2) giving a confirming observation on <a --> d>.
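As a rough sketch of the first step of the process, suppose the contradicting input arrives as <a --> d> <0.0, 0.9>; the revision rule then merges it with the derived <1.0, 0.729>. The code below is my own illustration using the standard NAL revision truth-function, not the exact layout of the attached spreadsheet:

```python
# Sketch of the revision step, assuming the standard NAL revision
# truth-function; the evidence-weight variable names are my own.

def revision(t1, t2):
    (f1, c1), (f2, c2) = t1, t2
    w1 = c1 * (1 - c2)   # weight of the derived belief
    w2 = c2 * (1 - c1)   # weight of the new (negative) observation
    f = (f1 * w1 + f2 * w2) / (w1 + w2)
    c = (w1 + w2) / (w1 + w2 + (1 - c1) * (1 - c2))
    return (f, c)

derived  = (1.0, 0.729)  # <a --> d> from the deduction chain
observed = (0.0, 0.9)    # "informed that <a --> d> is false"
revised = revision(derived, observed)  # roughly (0.23, 0.92)
```

The negative observation dominates because it carries more evidence, so the revised frequency drops well below 0.5 while the confidence rises above either input's. The remaining question, which the attachments work through, is how this change propagates back to the three premises.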
In the spreadsheet, there are two places where a conclusion can be derived along two different paths, and the truth-values may differ. I have listed both results; in the system, the choice rule will pick the one with the higher confidence.
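The choice rule itself can be sketched very simply: between two competing truth-values for the same statement, keep the one with the higher confidence. A minimal illustration (the values here are hypothetical, not taken from the spreadsheet):

```python
# Minimal sketch of the choice rule: when the same conclusion is
# reached along two paths, keep the truth-value with the higher
# confidence (the second element of the (frequency, confidence) pair).

def choice(t1, t2):
    return t1 if t1[1] >= t2[1] else t2

path1 = (1.0, 0.729)    # hypothetical result from one derivation path
path2 = (1.0, 0.6561)   # hypothetical result from the other path
best = choice(path1, path2)  # keeps the higher-confidence result
```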