Global Innovation Index: General Deconstruction and Critique


Prateek S

Mar 14, 2018, 2:15:05 PM3/14/18

In October 2017, I posted a brief report on the Global Innovation Index (GII) 2017 (10th edition). In this write-up, I attempt a more general review of the GII, basing most observations on recent reports (including the 2017 report). The objective is to acquire a better understanding of (a) the process behind such indices, (b) the subjectivity inherent in them, and (c) a few limitations of the GII.

The GII provides data that enables academics to understand innovation across the world, governments to reconsider policy choices, and businesses to identify the most suitable economies in which to make R&D investments. Further, please note that all data provided is absolute and can therefore be utilized for various purposes by the relevant actors. Only the rankings, obviously, are relative.

The latest report was jointly published by the Institut Européen d’Administration des Affaires (INSEAD), Cornell University and the World Intellectual Property Organization (WIPO). The first report, published in 2007, was an initiative of INSEAD and World Business. INSEAD has published the report annually ever since, while WIPO and Cornell have been associated with it since 2012 and 2013 respectively. Further, knowledge partners, such as the Confederation of Indian Industry, assist in preparing the report.

The GII 2017’s press release notes that the report covered 127 countries, using hard data, surveys and composite indicators (discussed later), with 90% confidence intervals (intervals constructed so that, under repeated estimation, 90% of them would contain the true value; this is not the same as a 90% probability that any given figure is accurate). Further, the Joint Research Centre of the European Commission conducted a statistical audit of the report (Annex 3; Pg. 59).

BASIC FUNCTIONING (Input and Output)

Let us briefly understand the process behind compiling the GII. (Annex I; Pg. 47)

Two kinds of data were taken into account: Input related and Output related.

Input (“Innovation Input Sub-Index”):

The data used to arrive at this figure involves variables that affect innovation, as opposed to those that measure innovation itself.

The broad areas examined were:

  1. Institutions
  2. Human Capital and Research
  3. Infrastructure
  4. Market Sophistication
  5. Business Sophistication

One might question the necessity of considering Inputs at all, when Output data is available to measure the actual innovation. Input data is relevant because measuring innovation holistically on Output data alone is complex and could possibly be inaccurate. To explicate further, where there exists a dearth of standards for determining the degree of innovativeness ascribed to an economy’s products, Input data could serve to fill the void.

Output (“Innovation Output Sub-Index”):

The “Innovation Output Sub-Index” was arrived at by using data that measures results of innovation (e.g. number of patent filings), rather than the factors that cause it.

The broad areas considered were:

  1. Knowledge And Technology Outputs
  2. Creative Outputs

Moving on, the GII rankings are based on a final figure arrived at by calculating the average of the “Innovation Input Sub-Index” and the “Innovation Output Sub-Index”.

The “Innovation Efficiency Ratio” (IER) is the ratio of the Output Sub-Index to the Input Sub-Index. Essentially, it measures the output generated with respect to the input. While assessing an economy’s performance on the basis of the IER, we also need to look at the Output Sub-Index itself, as an economy may have a favourable ratio merely because its inputs are low, rather than because its outputs are strong.
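To make the arithmetic concrete, here is a minimal sketch of the two calculations just described — the overall score as the average of the sub-indices, and the efficiency ratio as output over input. All numbers are hypothetical, not actual GII scores:

```python
def gii_score(input_sub_index: float, output_sub_index: float) -> float:
    """Overall GII score: simple average of the two sub-indices."""
    return (input_sub_index + output_sub_index) / 2

def efficiency_ratio(input_sub_index: float, output_sub_index: float) -> float:
    """Innovation Efficiency Ratio: output generated per unit of input."""
    return output_sub_index / input_sub_index

# Two hypothetical economies with the same ratio but very different outputs:
print(gii_score(60.0, 45.0))         # 52.5
print(efficiency_ratio(60.0, 45.0))  # 0.75
print(efficiency_ratio(20.0, 15.0))  # 0.75 -- same ratio, far lower output
```

The last two lines illustrate the caveat in the text: both economies show the same “favourable” ratio, but the second achieves it through low inputs rather than strong outputs.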

CLASSIFICATION ON THE BASIS OF THE DATA’S NATURE (Hard, Composite, Survey)

For arriving at the Input and Output Sub-Indices, 81 more specific indicators were considered (indicator explanations at Appendix 3; Pg. 403). The indicators can be classified into “composite indicators”, “surveys” and “hard data”. Let us have a look at each and attempt to identify the elements of human subjectivity.

“Hard data” refers to objective, quantitative data. It needs to be contrasted with “surveys” (qualitative, subjective data; “soft data”). Further, neither hard data nor soft data can be split into further components, unlike “composite indicators”, which are an average of other base indicators (soft, hard or a combination thereof). (Refer to Appendix IV; Pg. 419)

Hard Data (57 of 81 indicators)

Hard data is objective, quantitative data that isn’t derived from opinion. Examples include “Gross Expenditure on R&D (% of GDP)” and “Government Expenditure on Education per Pupil, Secondary (% of GDP per capita)”.

We needn’t delve too deep into hard data, as it is largely uncontentious. While there exist considerations related to scaling to account for differences in population, purchasing power parity and GDP amongst economies, there is little for the legal analyst to note here.

Composite Indicators (19 of 81 indicators)

As stated above, such indicators are the result of combining multiple base indicators (which could include both soft and hard data). In other words, they are an “index within an index”. Composite indicators are a slightly contentious kind of indicator, as including them could lead to the duplication of variables: the same variable might feature in multiple composite indicators, or even independently, and thus unsettle the balance. The report’s “Technical Notes” section underplays the problem by stating that very few such indicators (19) were used and that the inclusion of expert opinion counteracts the detrimental effects of composite indicators. Examples of composite indicators in the GII include “Political Stability and Safety”, “Government Effectiveness Index” and “Environmental Performance Index”.
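The duplication concern can be sketched in a few lines. The indicator names and values below are hypothetical (not taken from the report); the point is only that a composite indicator is an average of base indicators, so a base variable feeding two composites is effectively counted twice:

```python
def composite(base_values: dict) -> float:
    """Composite indicator: average of its base indicators."""
    return sum(base_values.values()) / len(base_values)

# Hypothetical base indicators shared across two hypothetical composites:
base = {
    "press_freedom": 70.0,
    "judicial_independence": 55.0,
    "corruption_control": 40.0,
}

political_stability = composite(
    {k: base[k] for k in ("press_freedom", "judicial_independence")})
govt_effectiveness = composite(
    {k: base[k] for k in ("judicial_independence", "corruption_control")})

# "judicial_independence" enters both composites, so its influence on any
# index built from them is doubled relative to the other base variables.
print(political_stability)  # 62.5
print(govt_effectiveness)   # 47.5
```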

If you are wondering about the means behind incorporating subjective expert opinion into a quantitative analysis, hold onto that thought as we will discuss it shortly.

Soft Data (5 of 81 indicators)

As stated earlier, soft data involves surveys and the quantification of subjective opinion. The quantification is carried out by ascribing a numeral to a particular opinion. By way of illustration, “intensity of local competition” was arrived at by averaging expert ratings of an economy’s state of competition, from 1 (“not intense at all”) to 7 (“extremely intense”). Other examples of such data include “ICTs and Business Model Creation” and “ICTs and Organizational Model Creation”.
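The quantification step amounts to a simple average over the 1–7 scale. A minimal sketch, with hypothetical ratings:

```python
def survey_indicator(ratings: list) -> float:
    """Average of expert ratings on the 1 ("not intense at all")
    to 7 ("extremely intense") scale."""
    return sum(ratings) / len(ratings)

# Hypothetical ratings, one per surveyed expert:
print(survey_indicator([5, 6, 4, 7, 5]))  # 5.4
```

Note that the subjectivity lies entirely in the individual ratings; the averaging itself is mechanical.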

Further, it is pertinent to note that certain factors, such as “Regulatory quality” and “Rule of law”, have been given lesser weightage (1/2) than others. This isn’t problematic by itself, but it is significant, as it further highlights the elements of subjectivity and the compromises in accuracy that such indices inevitably entail. For instance, one could argue that dividing factors, by weightage, into a bipartite classification overlooks the varying degrees of significance that different factors could have. A particular factor might deserve 1/4 the weightage, while another might deserve 3/4. Yet, in the current model, such factors are approximated and slotted into one of two categories (full weightage or half weightage), thereby detrimentally affecting the Index’s accuracy.
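The bipartite weighting scheme described above can be sketched as a weighted average. Indicator names, scores and the particular weights here are illustrative, not the GII’s actual values; the only feature taken from the report is that weights come in exactly two sizes, 1 and 1/2:

```python
def weighted_score(scores: dict, weights: dict) -> float:
    """Weighted average: sum(w_i * x_i) / sum(w_i)."""
    total_weight = sum(weights[k] for k in scores)
    return sum(scores[k] * weights[k] for k in scores) / total_weight

# Hypothetical indicator scores; "rule_of_law" and "regulatory_quality"
# carry half weight, as the report describes.
scores  = {"rule_of_law": 80.0, "regulatory_quality": 60.0, "tertiary_enrolment": 50.0}
weights = {"rule_of_law": 0.5,  "regulatory_quality": 0.5,  "tertiary_enrolment": 1.0}

print(weighted_score(scores, weights))  # (40 + 30 + 50) / 2.0 = 60.0
```

A finer-grained scheme (weights of 0.25, 0.75, and so on) would drop into the same formula unchanged, which is precisely the critique: restricting weights to two values is a modelling choice, not a mathematical necessity.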

CRITIQUE

The definition of “innovation” followed by the GII is that of the European Communities and the Organisation for Economic Co-operation and Development’s Oslo Manual:

An innovation is the implementation of a new or significantly improved product (good or service), a new process, a new marketing method, or a new organizational method in business practices, workplace organization, or external relations.

The 2017 report notes the development of the concept of innovation from being perceived solely as the result of radical, breakthrough technologies developed through industry-based R&D, to also being understood as the result of incremental innovation and “innovation without research” by actors outside industry. By “innovation without research”, the report refers to an economy’s capability to capitalize on combinations of existing technologies, as opposed to innovation due to breakthrough developments. The report accounts for such innovations by incorporating factors indicative of an ecosystem suited to incremental innovation, rather than focusing excessively on traditional markers of innovation (patent filings, et cetera).

Criticisms of the GII’s adopted definition of “innovation” focus on the report’s excessive emphasis on the element of newness in innovations, at the expense of due consideration of innovation’s benefit to society. In other words, critics argue that innovation should be judged by its beneficial impact on society, rather than by its novelty.

The definition notwithstanding, I submit that the GII, effectively, hasn’t overemphasized the element of novelty and in fact, has attempted to arrive at a rather holistic analysis through the methodology involving Input and Output factors outlined above.

Moving on, one of the stronger criticisms I have come across concerns the excessive significance attributed to factors that aren’t integral to innovation. For instance, “Ease of Paying Taxes”, “Electricity Output” (half weightage) and “Ease of Protecting Minority Investors” feature alongside “Ease of Getting Credit” and “Venture Capital Deals”. Hence, an argument could be made for reducing the scope of the indicators, so as to isolate those most relevant to innovation.

While it’s true that quite a few indicators used in the GII share an arguably tenuous relationship with innovation, analysts, policy-makers and other stakeholders are better served by a comprehensive trove of data than by a narrow analysis. Of course, the perceived loss of accuracy in an economy’s represented innovativeness could be viewed as a limitation. However, I feel that the benefit of data dissemination is a sufficiently countervailing value. Analysts should, therefore, take the rankings with the proverbial pinch of salt.

Lastly, the official press conferences (accessible here, here, here and here) also highlight a few more limitations. Innovation in the defence sector isn’t factored in directly, though its downstream applications in other sectors are. Also, a few jurisdictions allow for “secret patents”, which haven’t been accounted for.

TAKEAWAY

This write-up aims to help readers understand the elements of human subjectivity and inaccuracy that are inherent to indices in general and to the GII specifically. Having understood these limitations, we can better appreciate and utilize the data such indices provide.

Limitations notwithstanding, the GII continues to serve as a valuable source of data for all sorts of actors to base their decisions on, be it policy-makers attempting to identify good practices or academics seeking reliable data for research.

Please click here to view this post on SpicyIP and leave a comment.
