Hi Alexandre,
I've been applying for jobs and similar thoughts crossed my mind.
I guess one possible answer is that it would be good to be listed in CORE
if the rating were good. Conferences like IJCAI, NIPS, and CHI are all
rated A*. If someone cares a lot about ratings, they will surely take this
into account when deciding where to submit a paper...
So yeah: if we want the conference to get a good rating (or be
equivalent to one that does), then there are fairly clear guidelines
published (goo.gl/DrNxnE). To me ICCC sounds like it matches the
description of an A-grade conference:
"An A Conference may be the center of an ecosystem, including
workshops and tutorials."
"Reviews of papers are generally undertaken by people who have
published in the area of the submitted work, and provide detailed and
extended feedback."
So perhaps at this point the question becomes procedural: what does it
take to get noticed by the people who run CORE?
Regarding the h5 index ("h5-index is the h-index for articles published
in the last 5 complete years. It is the largest number h such that h
articles published in 2011-2015 have at least h citations each.")... I
would say, let's avoid doing anything whatsoever related to this metric
*directly*. There was a big scandal related to citation hacking a few
years ago; you're probably aware of that.
https://blogs.unimelb.edu.au/sciencecommunication/2013/08/31/scandal-yelling-and-mess-the-outcome-of-hyping-factors/
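Just to be concrete about the arithmetic in that definition, here is a
quick Python sketch (the citation counts are made-up numbers for
illustration, not real ICCC data):

# Minimal sketch of the h-index calculation underlying the h5-index.
def h_index(citations):
    """Largest h such that h of the given papers have >= h citations each."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, c in enumerate(counts, start=1):
        if c >= rank:
            h = rank
        else:
            break
    return h

# e.g., citation counts for papers published in 2011-2015:
print(h_index([48, 30, 12, 12, 9, 7, 5, 5, 3, 1]))  # -> 6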
I guess everyone is familiar with "Goodhart's law", even if not by that
name. Variously:
"When a measure becomes a target, it ceases to be a good measure."
"Any observed statistical regularity will tend to collapse once
pressure is placed upon it for control purposes."
@incollection{goodhart1984problems,
  title={Problems of monetary management: the {UK} experience},
  author={Goodhart, Charles A. E.},
  booktitle={Monetary Theory and Practice},
  pages={91--121},
  year={1984},
  publisher={Springer}
}
Naturally, we could do more for dissemination without creating a
scandal, but to my mind this is best approached from a holistic
standpoint.
Within computational creativity we can, and do, measure "quality" in
quite a few more interesting ways than bibliometrics allow. If I do say
so myself, I put forward several suggestions along these lines last
year:
@inproceedings{corneli2016institutional,
  author={Corneli, Joseph},
  title={An institutional approach to computational social creativity},
  editor={Cardoso, Amilcar and Pachet, Fran\c{c}ois and Corruble, Vincent and Ghedini, Fiammetta},
  booktitle={Proceedings of the Seventh International Conference on Computational Creativity, ICCC 2016},
  year={2016},
  url={http://www.computationalcreativity.net/iccc2016/wp-content/uploads/2016/06/paper_9.pdf},
  abstract={Modelling the creativity that takes place in social settings presents a range of theoretical challenges. Mel Rhodes's classic "4Ps" of creativity, the "Person, Process, Product, and Press," offer an initial typology. Here, Rhodes's ideas are connected with Elinor Ostrom's work on the analysis of economic governance to generate several "creativity design principles." These principles frame a survey of the shared concepts that structure the contexts that support creative work. The concepts are connected to the idea of computational "tests" to foreground the relationship with standard computing practice, and to draw out specific recommendations for the further development of computational creativity culture.}
}
Naturally, I'd be interested to see more discussion of this stuff!
Another point to emphasise here: we should be concerned about social
value, not just academic profiles. E.g., for a nice juicy metric,
consider how much money "superstar" AI programmers are making these days
in the field of machine learning...!
Joe