I started to write a blog/article on "Do not waste your time with ‘paper-ware’ models and algorithms."
My intention in posting this article here is this: I have often found students and researchers seeking answers for implementing paper-ware models and algorithms, and pointlessly spending their time and effort trying to implement them on simulators such as ns-2 or ns-3. Generally, that kind of effort spent on implementing a paper-ware work will just end up as another piece of paper-ware.
The intention of this article is to help researchers judge whether a ‘catchy idea/model’ described in a journal paper is paper-ware or not. Identifying this will need a lot of skill (we may discuss it at the end of this article, after getting some feedback from the community). For now, I try to present the knowledge that an AI system taught me.
The following is the present content of that article.
If students, researchers, and members of this ns-3-users forum have insights on paper-ware and techniques for identifying it, please share them here so that I may later incorporate them into this article (with citation).
Definitely, we are not talking about ‘paperware’ in the sense of disposable paper products (the kind pictured alongside this paragraph in the blog)!
Most published journal papers only talk about something imaginary. If the research uses a simulation tool such as ns-3 to implement a proof-of-concept model, that implementation is almost always questionable.
During a discussion on the ns-3 users group, one of the ns-3 developers, Tommaso Pecorella, called the software implementation of the algorithm described in such a paper “paper-ware”.
As far as I know, the origin of the word ‘paper-ware’ is this: I saw the word for the first time in Tommaso Pecorella’s message on the ns-3 users group. I believe that “paper-ware” is an excellent name for the kind of software/model discussed in most journal papers: it can live only in a journal paper, not in the real world.
In fact, I always had an inexpressible feeling about the ‘unimplementable, impractical ideas’ commonly found in journal papers, particularly those related to Computer Science, Electronics, and Communication Engineering, but I had no simple word to express it. ‘Paper-ware’ is an excellent single word that expresses everything. Thank you, Tommaso :-)
After learning the word ‘paper-ware’, I searched Google for articles on ‘paper-ware’. But what I mostly found was items made of paper, like the ‘paperware’ mentioned above. Even Wikipedia itself only knows about ‘paperware’ and knows nothing about ‘paper-ware’.
As a last hope, yesterday I decided to learn more about paper-ware by asking some questions of an AI system. Amazingly, that AI system taught me more about ‘paper-ware’. (Sorry: to avoid further academic offences related to plagiarism, I am not mentioning the name of the AI system or the questions that were asked of it.)
Most students, scholars, and researchers in general try to collect a lot of reference papers during the initial stage of their research. They may even select one paper with a so-called novel idea described in it and spend several months or even years on it without realizing that it is paper-ware.
If you are aware of ‘paper-ware’, a lot of it can be easily identified in published journal papers whose authors mention the use of simulators such as ns-2 and ns-3 for implementing their model or algorithm.
While publishing research is an important part of scientific careers, the focus should always be on producing high-quality, impactful research that advances the field and contributes to the broader scientific community’s understanding of a particular topic. Publishing paper-ware, on the other hand, is generally seen as a disingenuous and ultimately self-defeating practice.
Over-hyped research papers: Some research papers in AI and networking may present over-hyped claims or unrealistic results that are difficult to reproduce or validate. For instance, a paper might claim to have developed a new AI algorithm that significantly outperforms state-of-the-art methods, but when other researchers attempt to replicate the results, they find that the performance gains are not as significant as claimed.
Insignificant improvements: Some papers may propose small or incremental improvements to existing algorithms or techniques, without providing much insight or impact in the field. For instance, a paper might propose a slight modification to an existing AI model that improves its accuracy by only a few percentage points, but without offering any significant advantage over existing techniques.
Lack of practical application: Some research papers in AI and networking may focus on theoretical or abstract concepts that have little practical application or relevance. For example, a paper might describe a new neural network architecture that is highly complex and difficult to implement in practice, or a networking protocol that is not suitable for real-world applications.
Low-quality or incomplete research: Some papers may be rushed or poorly executed, lacking rigor or statistical analysis. For example, a paper might have a small sample size or insufficient experimental setup, making it difficult to draw meaningful conclusions from the results.
It’s important to note that not all papers that fall into these categories are necessarily paper-ware, as there may be valid reasons for proposing incremental improvements or focusing on theoretical concepts. However, when the primary goal is to simply pad a researcher’s publication record or CV, without providing significant value or impact in the field, then the research can be considered as paper-ware.
Nature-inspired optimization techniques have gained popularity in networking research, and as with any other field, paper-ware can be found in this area as well. Here are a few examples of what could be considered paper-ware in the domain of nature-inspired optimization techniques in networking:
Lack of comparison with existing methods: A paper may present a new optimization technique inspired by nature, but without comparing it to existing methods in the field. This makes it difficult to determine the relative performance of the proposed method, and whether it offers any significant advantage over existing techniques.
Overly simplistic evaluation: Some papers may propose a new optimization technique, but only evaluate it using a small, simple testbed or dataset. This may not be sufficient to show the practical relevance or scalability of the proposed method.
Lack of impact: Some papers may propose a new optimization technique, but fail to demonstrate its impact in real-world networking scenarios or applications. This can make it difficult to determine the practical relevance of the proposed method, and whether it offers any significant advantage over existing techniques.
Over-exaggerated results: Some papers may present unrealistic or over-exaggerated results for the proposed optimization technique. For example, a paper might claim that the proposed method significantly outperforms existing techniques in terms of performance, but the results may not be statistically significant or may not be reproducible (a small sketch of one basic significance check follows this list).
Lack of theoretical foundation: Some papers may propose new optimization techniques without providing a solid theoretical foundation or justification for the proposed approach. This can make it difficult to evaluate the validity and generalizability of the proposed method.
It’s important to note that not all papers that fall into these categories are necessarily paper-ware, as there may be valid reasons for the approach taken or for the limitations of the evaluation. However, when the primary goal is to simply pad a researcher’s publication record or CV, without providing significant value or impact in the field, then the research can be considered as paper-ware.
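To make the point about statistical significance concrete, here is a minimal, self-contained C++ sketch of one basic sanity check: comparing a baseline and a proposed scheme over repeated independent simulation runs (for example, the same ns-3 scenario replicated with different RngRun values) using Welch's t-test. The per-run throughput numbers below are invented purely for illustration, the file name is hypothetical, and the hard-coded 2.3 cut-off is only a rough stand-in for a proper t-table lookup.

// welch_check.cpp : rough check of whether a claimed throughput gain between
// a "baseline" and a "proposed" scheme survives run-to-run variance.
// Build (hypothetical file name): g++ -o welch_check welch_check.cpp
#include <cmath>
#include <cstdio>
#include <vector>

// Sample mean of a set of per-run results.
static double Mean(const std::vector<double>& x)
{
    double s = 0.0;
    for (double v : x) s += v;
    return s / x.size();
}

// Unbiased sample variance.
static double Variance(const std::vector<double>& x, double mean)
{
    double s = 0.0;
    for (double v : x) s += (v - mean) * (v - mean);
    return s / (x.size() - 1);
}

int main()
{
    // Hypothetical per-run throughput (Mbps) from independent replications,
    // e.g. the same ns-3 scenario repeated with different RngRun values.
    std::vector<double> baseline = {10.1, 9.7, 10.4, 9.9, 10.2, 10.0, 9.8, 10.3};
    std::vector<double> proposed = {10.6, 9.9, 10.8, 10.1, 10.5, 10.2, 10.0, 10.7};

    double m1 = Mean(baseline);
    double m2 = Mean(proposed);
    double v1 = Variance(baseline, m1);
    double v2 = Variance(proposed, m2);
    double n1 = baseline.size();
    double n2 = proposed.size();

    // Welch's t statistic and the Welch-Satterthwaite degrees of freedom.
    double se  = std::sqrt(v1 / n1 + v2 / n2);
    double t   = (m2 - m1) / se;
    double dof = std::pow(v1 / n1 + v2 / n2, 2) /
                 (std::pow(v1 / n1, 2) / (n1 - 1) + std::pow(v2 / n2, 2) / (n2 - 1));

    std::printf("baseline mean = %.3f, proposed mean = %.3f (gain = %.1f%%)\n",
                m1, m2, 100.0 * (m2 - m1) / m1);
    std::printf("Welch t = %.3f, approx. dof = %.1f\n", t, dof);

    // Rough 5%-level check: for the ~12-14 degrees of freedom produced here the
    // two-sided critical value is about 2.15-2.2; 2.3 is used as a deliberately
    // conservative cut-off instead of a proper t-table lookup.
    if (std::fabs(t) < 2.3)
        std::printf("The claimed gain is NOT clearly distinguishable from run-to-run noise.\n");
    else
        std::printf("The gain looks distinguishable from noise (still check effect size and reproducibility).\n");
    return 0;
}

With only a handful of runs and overlapping spreads, an improvement of ‘a few percentage points’ will often fail even this rough check, which is exactly the warning sign described above.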
"I think I did made up the term in that post"
Here are a few articles and papers that discuss paper-ware in academic publishing:
"The Conference Review Process: In Need of Reform" by Michael Mitzenmacher and Eli Upfal, which discusses the role of conferences in promoting paper-ware: https://people.csail.mit.edu/madry/papers/conferences.pdf
"The Prevalence of Bad Data and How to Avoid It" by Jeffrey Leek, which discusses the problem of paper-ware and offers advice for researchers to avoid it: https://simplystatistics.org/2017/10/23/the-prevalence-of-bad-data-and-how-to-avoid-it/
"In search of scientific integrity: how to combat scientific misconduct" by Mariëtte Le Roux, which discusses various forms of academic misconduct, including paper-ware: https://www.nature.com/articles/nbt0303-299
"When the Difference between “Novel” and “Old” Doesn’t Matter: An Analysis of Computer Science Papers and their Impact" by Emre Kıcıman and Aleksandrs Slivkins, which discusses the impact of paper-ware on the field of computer science: https://www.microsoft.com/en-us/research/publication/when-the-difference-between-novel-and-old-doesnt-matter-an-analysis-of-computer-science-papers-and-their-impact/
"The Perils of Publish or Perish" by Audrey Watters, which discusses the pressure to publish and the resulting problem of paper-ware: https://www.insidehighered.com/blogs/hack-higher-education/perils-publish-or-perish
These resources provide a good starting point for understanding paper-ware and its impact on academic publishing.
But amazingly, I could not find a single reference to any of the above articles anywhere on the internet.
Can anyone explain the reason?
If you find the source of any of the above articles, please post a link here or kindly forward it to my mail ID.
Dear Sirs and Madams,
1. The train of thought being traced in the paperware dialogue needs to explicitly define its aim. There is some drifting and obfuscation.
2. In current times, serious research is done only within an established ecosystem. Just look at the work done and the paperware coming out of Eindhoven, MIT.edu, CTTC, Beijing University, and Stanford.edu. Compare it with the Tandon Institute's NYUSim. NYUSim still constitutes a limited-area ecosystem, using Matlab as its backend, though they do have some excellent researchers and test-beds.
3. "Matlab", a system level simulator, constitutes an Ecosystem
for research, widely used by Institutes and Universities around
the world. Matlab is very resource rich and very versatile.. It
is grounded in Matrix Numerical analysis, Digital image
processing, Statistical optimization methods and more. It allows
mathematical equations to be solved and results plotted in a very
pleasing presentation.Talking of Numerical results, please
separate the Chaff from the Grain. What Information scientists do
in numerical analysis from what is done with Matlab based
Monte-Carlo Simulations, while proving their hypotheses.
4. The next ecosystem for research, mainly in telecom (though not limited to it), is NS3. Its authors set out to create one such ecosystem in open source, moving away from NS2 after finding it less extensible and in need of a major architectural rehash. TomH, TomP, PdpBarnes, NicolaB, and many others have succeeded in making it a very viable ecosystem for research, despite limitations of funding and manpower; many kudos to them. NS3 is an excellent open-source ecosystem for research, though it needs very many more contributions to extend its scope. Imagine if it were to incorporate direct access to open-source GNU Octave! It would then establish a very profound ecosystem, which would beat Matlab hands-down in performance and cover telecom, DSP, bionics research, and ML/AI, with every piece of code in both GNU Octave and NS3/CTTC-5G-NR being C++ (std=11 and above). I say this because I use GNU Octave and NS3/5G-NR in my work and also use an IPC mechanism between GNU Octave and NS-3 (a minimal sketch of one such coupling appears at the end of this message); the two seem to be made for each other. As I recall from TomH's WNS3 2022 talk, NS3 has established some access to test-beds in the USA.
5. If one is considering what constitutes original and reproducible research, it is perhaps best exemplified by the group of researchers led by Professor Emil Bjornson from Eindhoven in the area of 4G-6G wireless communications, albeit all their published papers and excellent books use the very rich Matlab system-level simulator. With very few changes, it is possible to use GNU Octave instead.
6. There are some excellent groups involved at the edge of technology research in many universities in the USA, often funded by defense (say, Wright-Patterson Air Force Base). They work under established leaders in the concerned areas of technology, often across many universities as well. This is the beauty of higher research in the USA.
7. Please do not judge the paperware produced during MS/MTech work. It is not original research, and one cannot expect that from a fresh graduate who has not yet done a deep dive into any domain. It is somewhat like the large amount of paperware coming from some Chinese universities and institutes these days, which merely extends the work of the guru professor. Even with the largest research funding being provided by the government in China, they have not publicly established any research ecosystem; most of the work uses Matlab.
8. To conclude, most real research work is done around an established ecosystem, ably led by subject-matter experts. Every submitted paper is reviewed by three referees picked from among subject-matter experts. Thus, if a paper "passes muster" while inadequate in its research content, it is the organizers who are to be blamed for choosing poor referees for their so-called "Peer-Reviewed, Scopus-Indexed, Flag-Ship Conferences" (please pardon the epithet used by all the mushrooming conferences).
The days of winning a Nobel prize working in the basement of the Cavendish Labs, U.K., or in the basement of the MIT labs are now over (after the successful work on the Apollo module, the basement lab of C.S. Draper became the Charles Stark Draper Laboratory of MIT; the ecosystem remained the MIT lab).
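For concreteness, here is a minimal, illustrative sketch of one very simple way to couple a C++ program (for instance, an NS3 scenario or its post-processing step) with GNU Octave. It merely shells out to octave-cli through popen(); this is a toy stand-in, not the actual IPC mechanism referred to above, and it assumes a POSIX system with octave-cli installed and on the PATH. The delay values and file name are invented for illustration.

// octave_ipc_sketch.cpp : toy example of driving GNU Octave from a C++
// program (for instance, post-processing delay samples produced by an NS3
// run). It shells out to octave-cli once per query; a real coupling would
// keep a persistent pipe or socket open to a single Octave process.
// Build (hypothetical file name): g++ -o octave_ipc_sketch octave_ipc_sketch.cpp
#include <cstdio>
#include <stdexcept>
#include <string>

// Run an Octave expression via octave-cli and return whatever it prints.
static std::string RunOctave(const std::string& expr)
{
    std::string cmd = "octave-cli --quiet --eval \"" + expr + "\"";
    FILE* pipe = popen(cmd.c_str(), "r");
    if (!pipe)
    {
        throw std::runtime_error("failed to start octave-cli");
    }
    std::string output;
    char buf[256];
    while (fgets(buf, sizeof(buf), pipe) != nullptr)
    {
        output += buf;
    }
    pclose(pipe);
    return output;
}

int main()
{
    // Pretend these per-packet delays (ms) came out of a simulation run; ask
    // Octave for their mean and 95th-percentile value.
    std::string delays = "[12.1 10.4 15.7 11.9 13.3 25.2 12.8 14.0]";
    std::string result = RunOctave(
        "d = " + delays + "; printf('mean=%.2f ms, p95=%.2f ms\\n', mean(d), quantile(d, 0.95))");
    std::printf("Octave says: %s", result.c_str());
    return 0;
}

The same pattern also works in the reverse direction, with an Octave script invoking a compiled NS3 program through Octave's system() function.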
With Regards,
Madan Pande
P.S. My views are mine alone and are not meant to upset anyone...
=====================
https://www.acm.org/publications/policies/artifact-review-and-badging-current
This year, for accepted papers to WNS3, I am forming a committee to evaluate all papers and check their eligibility for such badges. This will involve the committee inspecting whatever code and documentation the authors have provided via the links in their paper and attempting to reproduce the results and figures. This evaluation would occur in the spring timeframe (April-June).
Around five months after starting this thread on this group, I tried asking Google's Bard AI a question about paper-ware.
In fact, I asked Bard almost the same question that I had asked the other AI earlier: "What is a paper-ware on published journal paper?"
The following is the answer from Bard AI:
The term "paper-ware" is a derogatory term used to describe software or algorithms that are described in a published journal paper, but that are not actually implemented or tested in practice. Paper-ware algorithms are often theoretical in nature, and they may not be feasible to implement in real-world systems.
The term "paper-ware" was coined by Tommaso Pecorella, a developer of the ns-3 network simulator. Pecorella used the term to describe algorithms that are described in ns-3 papers, but that are not actually implemented in ns-3. He argued that these algorithms are often impractical and unrealistic, and that they do not provide a true representation of how the algorithms would perform in real-world systems.
The term "paper-ware" has been criticized by some researchers, who argue that it is too harsh and that it discourages researchers from publishing theoretical work. However, the term has also been defended by others, who argue that it is a useful way to identify and criticize unrealistic and impractical algorithms.
Here are some of the characteristics of paper-ware:
The algorithm is described in detail in the paper, but there is no implementation or testing of the algorithm in practice.
The algorithm is based on unrealistic assumptions, such as perfect knowledge of the system state.
The algorithm is too complex or inefficient to be implemented in a real-world system.
The algorithm has not been evaluated in terms of its performance or scalability.
If you are reading a published journal paper, you can look for the following signs to identify paper-ware:
The paper does not provide any implementation details or performance results.
The paper is based on unrealistic assumptions.
The paper is too theoretical or mathematical.
The paper is not well-written or organized.
If you find that a paper contains paper-ware, you should be cautious about using the results of the paper. The algorithm may not be feasible to implement in practice, and it may not perform as well as the paper claims.
So, Bard started to believe that Tommaso Pecorella is the inventor of the term 'paper-ware'.
I asked the same question on my mobile phone.
So, finally, Bard came to the conclusion that "the word 'paper-ware' is a neologism coined by Tommaso Pecorella".
Congratulations, Tommaso Pecorella! At last, you have really provided a new word for a future English dictionary.
...
But it raised another concern about AI here.
As far as we know, there are only a very few sites on the internet that talk about 'paper-ware'.
Even though Bard is providing a 'correct or accurate' answer in this case, it shows another dangerous possibility.
Our experience shows that, theoretically, it would be very easy to feed 'incorrect knowledge' to an AI system by intentionally putting some wrong information here and there on the internet.
What do the people of this group think about this possible danger of misleading an AI system?
Charles Pandian