interpreting intermediate solution


Simon Toubeau

Jun 26, 2024, 9:36:12 AM6/26/24
to qcaw...@googlegroups.com

Problem with interpreting findings from intermediate solution

I have just conducted an analysis of sufficiency for an outcome “hierarchical centralisation”.

Following convention, I have opted to examine the intermediate solution after setting certain directional expectations. I have also followed an enhanced standard analysis by removing contradictory simplifying assumptions and logically impossible remainders.

The result I obtain is below.

I am struggling with how to interpret the output produced by R.

The solutions are divided between C1P1, C1P2, C1P3 on the one hand, and C1P4, C1P5, C1P6 on the other hand.

Q1) Which one should I choose?

- Is this on the basis of the overall consistency and coverage scores of the different solutions?

- Or should it be on the basis of the prime implicants?

My understanding is that C1P1, C1P2, etc. refer to the solutions that are produced on the basis of the (6) different prime implicants.

Looking at the prime implicant chart below, it looks like all primitive expressions are covered by prime implicants 1, 2 and 3, so that 4, 5 and 6 are logically redundant.

Does that mean that I should only consider the solution produced by C1P1, C1P2, C1P3?

Q2) Is the analysis of which P.I are logically redundant something that is done manually by the analyst or something that can be set for R to compute?

Q3) Is it possible to set certain P.I as logically redundant before solving for the intermediate solution?

I can’t seem to make much progress with these questions with the textbooks I have.

Apologies if all these questions sound amateurish; it’s because I am an amateur.


[attachment: image.png (R solution output and prime implicant chart)]

Ingo Rohlfing

Jun 26, 2024, 4:09:45 PM6/26/24
to QCA with R

Dear Simon:

1) On your first question about what solution to choose for interpretation. Opinions differ on what to do in such a situation. All three models (see below) are equally valid logical minimizations of the truth table. From this standpoint, there is no choice to make because one should interpret them all. For practical purposes, you may pick the model that is theoretically or empirically most interesting, judged by whatever standard. I would strongly recommend reporting the other models too and explaining why you pick a subset of them for interpretation.

2) and 3): I think you are looking at the wrong prime implicant chart. This seems to be the one for the parsimonious solution. The information about the intermediate models can be accessed by typing inter1$i.sol$C1P1, where:

- inter1 would need to be replaced by the name that you assigned to the solution output;

- C1P1 is the information about the intermediate solution that is sandwiched between conservative solution 1 and parsimonious solution 1. The prime implicant chart for inter1$i.sol$C1P1 can be retrieved through inter1$i.sol$C1P1$PIchart.

When you type inter1$i.sol, you should see that there are six attributes of the R object on this level: C1P1, C1P2 and so on. Each has its own prime implicant chart. The issue here is that you have six parsimonious models, P1 to P6. The intermediate solution is always determined with regard to one conservative solution, which here is only C1, and one parsimonious model. Since you have six parsimonious models, there are six intermediate models at first. When the intermediate models for C1P1, C1P2 and C1P3 are identical, which they are in your analysis, they are collapsed into one model. Similarly, C1P4, C1P5 and C1P6 yield the same intermediate model and are collapsed.
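A minimal sketch of this navigation in R; the object name inter1 is a placeholder for whatever you assigned the minimize() output to, and the exact component names may vary across QCA package versions:

```r
# Sketch only: "inter1" is a placeholder for your own minimize() output.
library(QCA)

# List the intermediate-solution components. With one conservative
# solution C1 and six parsimonious models P1..P6, you should see
# six attributes: C1P1, C1P2, ..., C1P6.
names(inter1$i.sol)

# Prime implicant chart for the intermediate solution derived from
# conservative solution 1 and parsimonious model 1:
inter1$i.sol$C1P1$PIchart
```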

Now, you have a complicated situation at hand because you have two-fold ambiguity on the level of the intermediate solution. First, there are two sets of three model pairs (C1P1, C1P2, C1P3 versus C1P4, C1P5, C1P6) that yield different intermediate solutions. Second, there is model ambiguity within the first set because there are two intermediate models for the pairs C1P1, C1P2, C1P3 that summarize the truth table equally well. So, all seems in order; it only happens that you are dealing with what is, in my view, the most complicated constellation that one can have for intermediate solutions.

Besides, the question of what is logically redundant and what is not is decided by the algorithm and by the settings for row.dom and all.sol in the minimize() function. One should not interfere in this process manually.
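For illustration, a hedged sketch of where those settings live in a call to minimize(); the truth-table object and the directional expectations are placeholders, and the dir.exp format depends on your QCA package version:

```r
# Sketch only: "tt" is a placeholder truth table built with truthTable(),
# and the directional expectations are example values.
library(QCA)

inter1 <- minimize(
  tt,
  include = "?",          # include logical remainders
  dir.exp = "A, B, ~C",   # example directional expectations (version-dependent format)
  row.dom = TRUE,         # eliminate row-dominated prime implicants
  all.sol = FALSE,        # do not force exhaustive model enumeration
  details = TRUE
)
```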

I hope this helps

Ingo


--
Professor für Methoden der Empirischen Sozialforschung (Methods for Empirical Social Research)
phone: +49 851 5092720 | fax: +49 851 5092722
Sozial- und Bildungswissenschaftliche Fakultät, Innstr. 41, Universität Passau, D-94032 Passau

Adrian Dușa

Jun 27, 2024, 3:38:08 PM6/27/24
to Simon Toubeau, qcaw...@googlegroups.com, Ingo Rohlfing
Dear Simon,

Further to Ingo's excellent explanation, I should probably add that R by default gives you all possible models (that is, full model ambiguity).
This is deliberately different from Charles Ragin's fs/QCA software, where users obtain a (single) solution model only *after* selecting the prime implicants of interest, out of all surviving prime implicants.

This is something different from selecting out "redundant" prime implicants. In fact, there is no such thing as a redundant prime implicant: if it survived the minimization process, it is not redundant. Perhaps, as Ingo mentioned, activating the row.dom argument (row dominance) would help, but even that does not dramatically decrease the model ambiguity.

To deal with the two-fold ambiguity, my own advice would be to focus on the most theoretically relevant parsimonious model, and ignore the rest. Then repeat the same procedure for the intermediate models generated from that particular parsimonious model, hence selecting the most theoretically relevant intermediate model.

This way, you would mimic obtaining the (single) solution model out of Ragin's fs/QCA software, by selecting that model out of all possible R-generated models (and indeed, report the rest in an Annex).

I would also take a look at the function modelFit(), which intersects the generated model with a theoretically expected model; perhaps that would add some more insight into selecting the particular model that best fits your theoretical expectations.
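A minimal sketch of how such a call might look; the theory expression is a placeholder for your own theoretical expectation, and inter1 stands for the minimize() output discussed above:

```r
# Sketch only: the theory expression and "inter1" are placeholders.
library(QCA)

# Intersect each generated model with the theoretically expected model;
# the output shows which parts of each model are covered by theory.
fits <- modelFit(model = inter1, theory = "A*B + ~C")
fits
```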

I hope this helps,
Adrian

