Requesting your feedback for preparing an article: "Do not waste your time with ‘paper-ware’ models and algorithms"


Charles Pandian

Feb 17, 2023, 3:23:09 AM2/17/23
to ns-3-...@googlegroups.com

I started to write a blog/article on "Do not waste your time with ‘paper-ware’ models and algorithms."

My intention in posting this article here is this: I often find students and researchers seeking answers for implementing paper-ware models and algorithms, and meaninglessly spending their time and effort to implement them on simulators such as ns-2 or ns-3. Generally, that kind of effort spent implementing a paper-ware work will just end up as another paper-ware.

The intention of this article is to help researchers determine whether a ‘catchy idea/model’ described in a journal paper is ‘paper-ware’ or not. Identifying this will, however, need a lot of skill (later, we may discuss it at the end of this article, after getting some feedback from the community). But for now, I try to present the knowledge that an AI taught me.


The following is the present content of that article. 

If students, researchers, and members of this ns-3-users forum have some insights on paper-ware and techniques for identifying it, please share them here so that I may later incorporate them in this article (with citation).


Charles Pandian,

--------------------------------------------------------------------------------------------------------the content of that article ------------------------------------------------------------------------------------


Do not waste your time with ‘paper-ware’ models and algorithms.

Article Written by: Charles Pandian

 February 17, 2023 

A. Introduction.

Definitely, we are not talking about the ‘paperware’ that you see next to this paragraph!

Most published journal papers only talk about some imaginary thing. Even when the research uses a simulation tool such as ns-3 to implement a proof-of-concept model, that implementation is often questionable.

During a discussion on the ns-3 users group, one of the ns-3 developers, Tommaso Pecorella, called the software implementation of the algorithm described in such a paper “paper-ware”.

I do know the origin of the word ‘paper-ware’: I first saw it in Tommaso Pecorella’s message on the ns-3 users group. I believe that “paper-ware” is an excellent name for the kind of software/model discussed in most journal papers; such models can only live in a journal paper, not in the real world.

In fact, I have always had an inexpressible feeling about the ‘unimplementable, impractical ideas’ commonly found in journal papers, particularly in Computer Science and Electronics and Communication Engineering. But I had no simple way to express it. ‘Paper-ware’ is an excellent single word that expresses everything. Thank you, Tommaso 🙂.

After learning the word ‘paper-ware’, I searched Google for articles on it. But what I mostly found was items made of paper, similar to the one you see here: the ‘paperware’. Even Wikipedia only knows about ‘paperware’ and knows nothing about ‘paper-ware’.

As a last hope, yesterday, I decided to learn more about paper-ware by asking an AI system some questions. Amazingly, that AI system taught me a lot about ‘paper-ware’. (Sorry: to avoid a further academic crime related to plagiarism, I am not mentioning the name of the AI system or the questions I asked it.)

Most students, scholars, and researchers try to collect a LOT of reference papers during the initial stage of their research. They may even select one paper with a ‘so-called’ novel idea and spend several months, or even years, on it without realizing it is a ‘paper-ware’.

If you are aware of ‘paper-ware’, a lot of it can be easily identified in published journal papers whenever the authors mention the use of simulators such as ns-2 and ns-3 for implementing their model or algorithm. My intention in posting this article here is this: I often find students and researchers seeking answers for implementing a paper-ware and meaninglessly spending their time and effort on it. Generally, that kind of effort spent implementing a paper-ware work will just end up as another paper-ware.

The intention of this article is to help researchers determine whether a ‘catchy idea/model’ described in a journal paper is ‘paper-ware’ or not. Identifying this will, however, need a lot of skill (later, we may discuss it at the end of this article, after getting some feedback from the community). But for now, I try to present the knowledge that an AI taught me.

B. Paper-ware – Definition.

In the context of scientific publishing, “paper-ware” refers to the practice of publishing papers solely for the purpose of padding a researcher’s CV or publication record, rather than for the advancement of scientific knowledge. Paper-ware is generally considered to be low-quality research that is often rushed or poorly executed, and which may not contribute significantly to the broader scientific community’s understanding of a given field.
The term “paper-ware” can also refer to research papers that are of little practical use or relevance, such as those that are highly theoretical or abstract and which do not have clear implications for real-world problems or applications. In some cases, paper-ware may be published in order to fulfill a requirement for academic promotion or tenure, rather than because it represents a significant contribution to the scientific literature.
 

While publishing research is an important part of scientific careers, the focus should always be on producing high-quality, impactful research that advances the field and contributes to the broader scientific community’s understanding of a particular topic. Publishing paper-ware, on the other hand, is generally seen as a disingenuous and ultimately self-defeating practice.

a) Paper-ware in the Domain of AI and Networking.

In the domains of AI and networking, paper-ware can take various forms. Here are a few examples of what could be considered paper-ware:
  1. Over-hyped research papers: Some research papers in AI and networking may present over-hyped claims or unrealistic results that are difficult to reproduce or validate. For instance, a paper might claim to have developed a new AI algorithm that significantly outperforms state-of-the-art methods, but when other researchers attempt to replicate the results, they find that the performance gains are not as significant as claimed.

  2. Insignificant improvements: Some papers may propose small or incremental improvements to existing algorithms or techniques, without providing much insight or impact in the field. For instance, a paper might propose a slight modification to an existing AI model that improves its accuracy by only a few percentage points, but without offering any significant advantage over existing techniques.

  3. Lack of practical application: Some research papers in AI and networking may focus on theoretical or abstract concepts that have little practical application or relevance. For example, a paper might describe a new neural network architecture that is highly complex and difficult to implement in practice, or a networking protocol that is not suitable for real-world applications.

  4. Low-quality or incomplete research: Some papers may be rushed or poorly executed, lacking rigor or statistical analysis. For example, a paper might have a small sample size or insufficient experimental setup, making it difficult to draw meaningful conclusions from the results.

It’s important to note that not all papers that fall into these categories are necessarily paper-ware, as there may be valid reasons for proposing incremental improvements or focusing on theoretical concepts. However, when the primary goal is to simply pad a researcher’s publication record or CV, without providing significant value or impact in the field, then the research can be considered as paper-ware.

b) Paper-ware in the Domain of Optimization Techniques in Networking.

Nature-inspired optimization techniques have gained popularity in networking research, and as with any other field, paper-ware can be found in this area as well. Here are a few examples of what could be considered paper-ware in the domain of nature-inspired optimization techniques in networking:

  1. Lack of comparison with existing methods: A paper may present a new optimization technique inspired by nature, but without comparing it to existing methods in the field. This makes it difficult to determine the relative performance of the proposed method, and whether it offers any significant advantage over existing techniques.

  2. Overly simplistic evaluation: Some papers may propose a new optimization technique, but only evaluate it using a small, simple testbed or dataset. This may not be sufficient to show the practical relevance or scalability of the proposed method.

  3. Lack of impact: Some papers may propose a new optimization technique, but fail to demonstrate its impact in real-world networking scenarios or applications. This can make it difficult to determine the practical relevance of the proposed method, and whether it offers any significant advantage over existing techniques.

  4. Over-exaggerated results: Some papers may present unrealistic or over-exaggerated results for the proposed optimization technique. For example, a paper might claim that the proposed method significantly outperforms existing techniques in terms of performance, but the results may not be statistically significant or may not be reproducible.

  5. Lack of theoretical foundation: Some papers may propose new optimization techniques without providing a solid theoretical foundation or justification for the proposed approach. This can make it difficult to evaluate the validity and generalizability of the proposed method.

It’s important to note that not all papers that fall into these categories are necessarily paper-ware, as there may be valid reasons for the approach taken or for the limitations of the evaluation. However, when the primary goal is to simply pad a researcher’s publication record or CV, without providing significant value or impact in the field, then the research can be considered as paper-ware.

c) Paper-ware in Journal Papers with Software Implementation

Journal papers that include software implementation can also fall prey to paper-ware practices. Here are a few examples of what could be considered paper-ware in such papers:
  1. Poorly documented or inaccessible software: Some papers may include software implementation, but fail to provide sufficient documentation or make the code readily accessible to others. This can make it difficult for other researchers to evaluate, reproduce, or build on the work presented in the paper.

  2. Insufficient evaluation: Some papers may include software implementation, but only evaluate it using a small or limited dataset or testbed. This may not be sufficient to demonstrate the practical relevance or scalability of the proposed method, or to compare it against existing methods.

  3. Lack of novelty or impact: Some papers may present software implementation of an existing or well-known method, without demonstrating any significant novelty or impact in the field. This can make it difficult to justify the value of the work presented in the paper, or to distinguish it from existing research.

  4. Incomplete or non-functional software: Some papers may present software implementation that is incomplete, non-functional, or has limited functionality. This can make it difficult to evaluate the quality of the software or to use it in practical scenarios.

  5. Over-exaggerated results: Some papers may present unrealistic or over-exaggerated results for the software implementation. For example, a paper might claim that the proposed software significantly outperforms existing methods in terms of performance, but the results may not be statistically significant or may not be reproducible.

It’s important to note that not all papers that include software implementation that fall into these categories are necessarily paper-ware, as there may be valid reasons for the approach taken or for the limitations of the software. However, when the primary goal is to simply pad a researcher’s publication record or CV, without providing significant value or impact in the field, then the research can be considered as paper-ware.

d) Fake Research Paper

A fake research paper is a document that is intentionally created to deceive readers into thinking that it is a legitimate research paper. Fake research papers can take many forms, but they typically involve presenting false or fabricated data, results, or conclusions in order to make a point or support a particular agenda.

Fake research papers can be created for a variety of reasons, including to manipulate public opinion, to discredit legitimate research, or to promote a particular product or service. They can also be created as part of a deliberate effort to spread misinformation or propaganda.

It is important to note that fake research papers are considered unethical and can have serious consequences, both for the individuals involved in their creation and for the broader scientific community. In addition, they can cause harm to individuals or organizations that rely on the information contained in the paper for important decisions or actions.

Scholars are not the only ones who produce fake papers and paper-ware and publish them in reputed journals; some journals and their review mechanisms are themselves fake, so one can easily publish any ‘nonsense paper’ in such journals [3].

e) Identifying Fake Papers and Paper-ware

Identifying paper-ware in a published paper can be challenging, as it often involves looking beyond the paper’s content and evaluating the quality, relevance, and impact of the research presented. To identify whether a published paper is a “paper-ware” or “paper-only” study, that is, a study that lacks empirical data or significant findings, there are a few things you can look for.

Here are some tips that can help you identify paper-ware in a published paper:

  1. Look for the research question or hypothesis: A paper-ware study may lack a clear research question or hypothesis that is being tested.

  2. Check the research methodology: Look for details on the research methodology used in the paper, such as the sample size, data collection process, and statistical analysis. If the methodology is flawed, incomplete, or poorly explained, it could be a sign that the research is paper-ware. A paper-ware study may lack a detailed description of the methodology used to collect and analyze the data. The authors may also use vague or unclear language when describing the methodology.

  3. Look for the results: A paper-ware study may have little or no significant results. If the authors report significant findings, make sure that they are based on empirical data and not just speculation or theory.

  4. Look for the discussion: A paper-ware study may have a weak or inadequate discussion section that does not draw clear conclusions from the results or explain the implications of the findings. 

  5. Look for the references: A paper-ware study may have few or no references to other empirical studies or research in the field.

Keep in mind that it can be difficult to identify a paper-ware study based on the published paper alone, so it is important to consider the reputation of the journal and the authors, as well as any additional information about the study that is available.
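The checklist above can even be caricatured as a crude scoring heuristic. The sketch below is purely illustrative and is not from the original checklist; all field names are hypothetical, and a real assessment of a paper still requires human judgment:

```python
# Illustrative heuristic: score a paper against the five warning signs
# listed above. All field names are hypothetical; a high score only
# means "look more closely", not "this is paper-ware".

def paperware_score(paper: dict) -> int:
    """Count warning signs present in a paper's metadata; higher = more suspect."""
    checks = [
        not paper.get("has_clear_hypothesis", False),          # tip 1: no research question
        not paper.get("methodology_described", False),         # tip 2: vague methodology
        not paper.get("has_empirical_results", False),         # tip 3: no empirical results
        not paper.get("discussion_draws_conclusions", False),  # tip 4: weak discussion
        paper.get("empirical_references", 0) < 3,              # tip 5: few empirical references
    ]
    return sum(checks)

suspect = {
    "has_clear_hypothesis": False,
    "methodology_described": False,
    "has_empirical_results": True,
    "discussion_draws_conclusions": False,
    "empirical_references": 1,
}
print(paperware_score(suspect))  # 4 warning signs out of 5
```

Again, this is only a caricature: the hard part is judging each field honestly from the paper itself, and no script can do that for you.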

In fact, I personally follow some simple rules to identify and avoid fake papers and paper-ware. They are very crude and rude rules, so I cannot disclose them in an open forum now; I will polish them into a form fit for a public forum post and discuss them here later.

If an AI can offer this many ideas for identifying paper-ware, certainly humans also have some insights on this.
If students, researchers, and members of this forum have some insights on paper-ware and techniques for identifying it, please share them here so that I may later incorporate them in this article (with citation).

References :

  1. I cannot find a single reference on this topic other than Tommaso Pecorella’s post on the group
  2. Some answers from an undisclosed AI system.
  3. Fake Journals – A Humiliation to Scientific Research

Tommaso Pecorella

Feb 18, 2023, 9:30:10 PM2/18/23
to ns-3-users

Hi Charles,

I think I did make up the term in that post, but I totally agree on defining it, and even on using it. Feel free to poke me privately to discuss further.
Best way to do so... Zulip (https://ns-3.zulipchat.com)

Tom Henderson

Feb 19, 2023, 10:49:23 AM2/19/23
to ns-3-...@googlegroups.com
Charles,

I agree with the intent of your draft blog to try to steer people away from bad research practices.  However, using the word 'paper-ware' as a shorthand for this, and defining it along the lines of 'unimplementable... can only exist in a paper', discounts some potentially useful cases.   For instance, suppose that you devise some practical approach but want to explore how close it is to an idealized, unimplementable alternative.  For example, consider "instantaneous perfect feedback" for some kind of feedback-based adaptive algorithm.

There are also plenty of examples of what you describe in non-simulation contexts, including purely math-based models, and testbeds containing non-commercial-grade implementations.  Much of research literature uses purely math, at a high level of analytical abstraction, making approximations about the real world like independence of events, infinite buffers, etc.  Still, this kind of analysis is useful to understand performance bounds and trends in behavior.

I think the real problem to solve is poor research methodology, whether it is with math, simulations, or testbeds, which your post later gets into more broadly by discussing fake conferences and journals.  Maybe I would rephrase your specific complaint here with "Do not waste your time with simulation models of low relevance to your research"-- I think the problem is often that people just reach for whatever is readily available rather than think about what is a better fit and do the work required to build that better fit.

- Tom


Judith Samson

Feb 19, 2023, 1:46:13 PM2/19/23
to ns-3-users
Charles,

Thanks for posting the blog entry--it definitely corresponds to what I have found in my work as a PhD candidate in networking. My work involves taking the routing algorithms designed by my advisor and building ns-3 simulations--this is much harder than I originally thought :-) , but it has given me innumerable insights into what actually makes a viable routing algorithm, including our own.

Now that I have struggled with ns-3 a bit, I find that when I'm going over related work, a useful filter is whether or not the paper provides enough information to actually implement a simulation. If not, then the probability of it being paper-ware is high. Also, some algorithms that are respectable and correct mathematically are not practical to implement--just thinking about how to build the candidate algorithm into an ns-3 simulation will tell you that very quickly, even if the computational complexity is theoretically manageable. If you can't actually build it into a protocol then it is by definition paper-ware no matter how clever it is.

I have found that building the simulation is the bridge between the mathematical exercise of designing the algorithm and the engineering craftsmanship of building the actual protocol, which has turned out to be my favorite part of routing. In fact, I'm discovering mistakes that I made in earlier work and am now improving on them, which is always useful.

Judith

Tom Henderson

Feb 19, 2023, 2:27:13 PM2/19/23
to ns-3-...@googlegroups.com, Judith Samson
Judith, thanks for your insights. In reference to your comment below
that "... a useful filter is whether or not the paper provides enough
information to actually implement a simulation," I would like to
advertise something related to this that we are trying this year for WNS3.

I have long felt that ns-3 provides an excellent opportunity for people
to publish perfectly reproducible research results to allow future
readers to check the work and to extend it further. ACM has also
recognized the importance of this and has defined a badging system to
review artifacts for reproducibility:

https://www.acm.org/publications/policies/artifact-review-and-badging-current

This year, for accepted papers to WNS3, I am forming a committee to
evaluate all papers to check their eligibility for such badges. This
will involve the committee inspecting whatever code and documentation
that the authors have provided in the links in their paper and
attempting to reproduce the results and figures. This evaluation would
occur in the spring timeframe (April-June). If anyone on this list
would like to participate on this committee, please contact me.

- Tom
>> I started to write a blog/article on "*Do not waste your time with
>> ‘paper-ware’ models and algorithms.
>> <https://www.projectguideline.com/do-not-waste-your-time-with-paper-ware-models-and-algorithms/>"*
>>
>> My intention of posting this article here is: I often found
>> students and researchers seeking answers for implementing a
>> paper-ware models and algorithms and meaninglessly spending their
>> time and effort to implement them on simulators such as ns-2 or
>> ns-3. Generally, that kind of effort on implementing a paper-ware
>> work will just end as another paper-ware.
>>
>> The intention of this Article is to help researchers to identify a
>> ‘catching idea /model’ described in a journal paper, whether it is
>> a ‘paper-ware’ or not. But anyway, identifying it will need a lot
>> of skills (later, we may discuss it at the end of this article –
>> after getting some feedback from the community). But for now,
>> here, I try to present the knowledge that AI taught me.
>>
>> *
>> The following is the present content of that article. *
>>
>> *If students, researchers and the member of this ns-3-user forum
>> having some insights on paper-ware and techniques for identifying
>> them, then you can share it here so that I may latter incorporate
>> them in this article(with citation).
>> *
>>
>>
>> Charles Pandian,
>>
>> --------------------------------------------------------------------------------------------------------the content of that article ------------------------------------------------------------------------------------
>>
>>
>> *Do not waste your time with ‘paper-ware’ models and algorithms.
>> <https://www.projectguideline.com/do-not-waste-your-time-with-paper-ware-models-and-algorithms/>*
>>
>>
>> Article Written by: CharlesPandian
>>
>>  February 17, 2023
>>
>>
>> A. Introduction.
>>
>> Definitely, we are not talking about ‘paperware’ that you see near
>> to this paragraph!.
>>
>> Most of the published journal papers only talk about some
>> imaginary thing. If the research is using a simulation tool such
>> as ns-3 to implement a proof of concept model then always that
>> implementation is questionable.
>>
>> During a discussion on ns-3 user group, one of the ns-3 developer
>> ‘Tommaso Pecorella’ called the software implementation of the
>> algorithm explained in such paper as “paper-ware”.
>> <https://groups.google.com/g/ns-3-users/c/pfT0qoAeP78/m/sHxrbMT717kJ>
>>
>> I do know the origin of the word ‘paper-ware’ – For the first
>> time, I saw that word in Tommaso Pecorella’s message on ns-3 user
>> group.  I believe that “paper-ware” is an excellent name for such
>> software/model that is discussed in most of the journal papers –
>> *they can only live in a journal paper – not in the real world.*
>>
>> In fact, always I had an inexpressible feeling of understanding on
>> ‘unimplementable, impractical ideas’ commonly found in journal
>> papers – particularly in Computer Science,  Electronics and
>> Communication Engineering related ones. But I had no simple words
>> to express it with suitable words.  ‘/*Paper-ware’*/ is an
>> excellent single word that will express everything – (-: thank you
>> Tommaso 🙂 .
>>
>> After knowing about the word ‘paper-ware’, I searched ‘google’ to
>> read some articles on ‘paper-ware’. But, what I found mostly on
>> ‘google’ was: items made of paper similar to the one you see here
>> – the ‘/*paperware*/‘. Even Wikipedia itself only know about
>> ‘paperware’ and knows nothing about ‘paper-ware’.
>>
>> As a last hope, yesterday, I decided to learn more about
>> paper-ware by asking some questions with an AI system. Amazingly,
>> that AI system taught me more about ‘paper-ware’. (/Sorry. to
>> avoid further academic crime related to plagiarism, I am not
>> mentioning the name of the AI system and the questions that are
>> asked with it/)
>> 1.
>>
>> Over-hyped research papers: Some research papers in AI and
>> networking may present over-hyped claims or unrealistic
>> results that are difficult to reproduce or validate. For
>> instance, a paper might claim to have developed a new AI
>> algorithm that significantly outperforms state-of-the-art
>> methods, but when other researchers attempt to replicate the
>> results, they find that the performance gains are not as
>> significant as claimed.
>>
>> 2.
>>
>> Insignificant improvements: Some papers may propose small or
>> incremental improvements to existing algorithms or techniques,
>> without providing much insight or impact in the field. For
>> instance, a paper might propose a slight modification to an
>> existing AI model that improves its accuracy by only a few
>> percentage points, but without offering any significant
>> advantage over existing techniques.
>>
>> 3.
>>
>> Lack of practical application: Some research papers in AI and
>> networking may focus on theoretical or abstract concepts that
>> have little practical application or relevance. For example, a
>> paper might describe a new neural network architecture that is
>> highly complex and difficult to implement in practice, or a
>> networking protocol that is not suitable for real-world
>> applications.
>>
>> 4.
>>
>> Low-quality or incomplete research: Some papers may be rushed
>> or poorly executed, lacking rigor or statistical analysis. For
>> example, a paper might have a small sample size or
>> insufficient experimental setup, making it difficult to draw
>> meaningful conclusions from the results.
>>
>> It’s important to note that not all papers that fall into these
>> categories are necessarily paper-ware, as there may be valid
>> reasons for proposing incremental improvements or focusing on
>> theoretical concepts. However, when the primary goal is to simply
>> pad a researcher’s publication record or CV, without providing
>> significant value or impact in the field, then the research can be
>> considered as paper-ware.
>>
>>
>> b) Paper-ware in the Domain of Optimization Techniques in
>> Networking.
>>
>> Nature-inspired optimization techniques have gained popularity in
>> networking research, and as with any other field, paper-ware can
>> be found in this area as well. Here are a few examples of what
>> could be considered paper-ware in the domain of nature-inspired
>> optimization techniques in networking:
>>
>> 1.
>>
>> Lack of comparison with existing methods: A paper may present
>> a new optimization technique inspired by nature, but without
>> comparing it to existing methods in the field. This makes it
>> difficult to determine the relative performance of the
>> proposed method, and whether it offers any significant
>> advantage over existing techniques.
>>
>> 2.
>>
>> Overly simplistic evaluation: Some papers may propose a new
>> optimization technique, but only evaluate it using a small,
>> simple testbed or dataset. This may not be sufficient to show
>> the practical relevance or scalability of the proposed method.
>>
>> 3.
>>
>> Lack of impact: Some papers may propose a new optimization
>> technique, but fail to demonstrate its impact in real-world
>> networking scenarios or applications. This can make it
>> difficult to determine the practical relevance of the proposed
>> method, and whether it offers any significant advantage over
>> existing techniques.
>>
>> 4.
>>
>> Over-exaggerated results: Some papers may present unrealistic
>> or over-exaggerated results for the proposed optimization
>> technique. For example, a paper might claim that the proposed
>> method significantly outperforms existing techniques in terms
>> of performance, but the results may not be statistically
>> significant or may not be reproducible.
>>
>> 5.
>>
>> Lack of theoretical foundation: Some papers may propose new
>> optimization techniques without providing a solid theoretical
>> foundation or justification for the proposed approach. This
>> can make it difficult to evaluate the validity and
>> generalizability of the proposed method.
>>
>> It’s important to note that not all papers that fall into these
>> categories are necessarily paper-ware, as there may be valid
>> reasons for the approach taken or for the limitations of the
>> evaluation. However, when the primary goal is to simply pad a
>> researcher’s publication record or CV, without providing
>> significant value or impact in the field, then the research can be
>> considered as paper-ware.
>>
>>
>> c) Paper-ware in Journal Papers with Software Implementation
>>
>> Journal papers that include software implementation can also fall
>> prey to paper-ware practices. Here are a few examples of what
>> could be considered paper-ware in such papers:
>>
>> 1.
>>
>> Poorly documented or inaccessible software: Some papers may
>> include software implementation, but fail to provide
>> sufficient documentation or make the code readily accessible
>> to others. This can make it difficult for other researchers to
>> evaluate, reproduce, or build on the work presented in the paper.
>>
>> 2.
>>
>> Insufficient evaluation: Some papers may include software
>> implementation, but only evaluate it using a small or limited
>> dataset or testbed. This may not be sufficient to demonstrate
>> the practical relevance or scalability of the proposed method,
>> or to compare it against existing methods.
>>
>> 3.
>>
>> Lack of novelty or impact: Some papers may present software
>> implementation of an existing or well-known method, without
>> demonstrating any significant novelty or impact in the field.
>> This can make it difficult to justify the value of the work
>> presented in the paper, or to distinguish it from existing
>> research.
>>
>> 4.
>>
>> Incomplete or non-functional software: Some papers may present
>> software implementation that is incomplete, non-functional, or
>> has limited functionality. This can make it difficult to
>> evaluate the quality of the software or to use it in practical
>> scenarios.
>>
>> 5.
>> 1.
>>
>> Look for the research question or hypothesis: A paper-ware
>> study may lack a clear research question or hypothesis that is
>> being tested.
>>
>> 2.
>>
>> Check the research methodology: Look for details on the
>> research methodology used in the paper, such as the sample
>> size, data collection process, and statistical analysis. If
>> the methodology is flawed, incomplete, or poorly explained, it
>> could be a sign that the research is paper-ware. A paper-ware
>> study may lack a detailed description of the methodology used
>> to collect and analyze the data. The authors may also use
>> vague or unclear language when describing the methodology.
>>
>> 3.
>>
>> Look for the results: A paper-ware study may have little or no
>> significant results. If the authors report significant
>> findings, make sure that they are based on empirical data and
>> not just speculation or theory.
>>
>> 4.
>>
>> Look for the discussion: A paper-ware study may have a weak or
>> inadequate discussion section that does not draw clear
>> conclusions from the results or explain the implications of
>> the findings.
>>
>> 5.
>>
>> Look for the references: A paper-ware study may have few or no
>> references to other empirical studies or research in the field.
>>
>> Keep in mind that it can be difficult to identify a paper-ware
>> study based on the published paper alone, so it is important to
>> consider the reputation of the journal and the authors, as well as
>> any additional information about the study that is available.
>>
>> In fact, personally, I follow some simple rules to identify
>> and avoid fake papers and paper-ware. They are very crude & rude
>> rules – I cannot disclose them in an open forum now – I will
>> polish and dignify them for a public forum post and discuss
>> them here later.
>>
>> *If an AI can tell this many ideas to identify paper-ware,
>> certainly, humans also have some insights on this.*
>> *If students, researchers and members of this forum have some
>> insights on paper-ware and techniques for identifying them, then
>> you can share them here so that I may later incorporate them in
>> this article (with citation).*
>>
>>
>> References :
>>
>> 1. I cannot find a single reference on this topic other than
>> Tommaso Pecorella's post
>> <https://groups.google.com/g/ns-3-users/c/pfT0qoAeP78/m/sHxrbMT717kJ> on the group
>> 2. Some answers from an undisclosed AI system.
>> 3.
>>
>> Fake Journals – A Humiliation to Scientific Research
>> <https://www.projectguideline.com/fake-journals-a-humiliation-to-research/>
>>
>> --
>> Posting to this group should follow these guidelines
>> https://www.nsnam.org/wiki/Ns-3-users-guidelines-for-posting
>> <https://www.nsnam.org/wiki/Ns-3-users-guidelines-for-posting>
>> ---
>> You received this message because you are subscribed to the Google
>> Groups "ns-3-users" group.
>> To unsubscribe from this group and stop receiving emails from it,
>> send an email to ns-3-users+...@googlegroups.com.
>> To view this discussion on the web visit
>> https://groups.google.com/d/msgid/ns-3-users/CAHXeiM9GJPBGJB9A7wim%2B70BM4NY58iSKB%2BnJQ9VWayCGuJPfQ%40mail.gmail.com <https://groups.google.com/d/msgid/ns-3-users/CAHXeiM9GJPBGJB9A7wim%2B70BM4NY58iSKB%2BnJQ9VWayCGuJPfQ%40mail.gmail.com?utm_medium=email&utm_source=footer>.

ABDULJABBAR ALSHARIF

unread,
Feb 19, 2023, 11:38:28 PM2/19/23
to ns-3-...@googlegroups.com
hi charles,
By the way, your article rings true for some sides of research – clarity, satisfying requirements, reachability of results. I think that when there is a good state of the art in any related field, it creates good questions and leads to high-impact results. Most paper-ware hangs around a field without justifying its research requirements or its impact, so I sometimes struggle to find the gap between the science and the reality. The filter I use for any paper I read is to create a question and find out whether the paper answers it – and that isn't enough.
Do we have to create simple methods to get to the source of the data and avoid unnecessary knowledge? I am also wondering how to find and avoid paper-ware when high-impact research articles are published in high-impact journals – must a conference or journal accept and publish such a paper?

Jabbar
" if you want the rainbow you have deal with rains"




Tommaso Pecorella

unread,
Feb 20, 2023, 4:40:31 AM2/20/23
to ns-3-users
Hi Charles,

my 2 cents - and (again) feel free to contact me directly.

It's your call to use the term "paper-ware", but I feel that it must be contextualised in order to avoid its derogatory connotations. I'm notoriously over the line sometimes.

About the term itself, I feel that it must be tied not to the fact that one does simulations. Rather, it's a problem of HOW the simulations are made. ns-3 does a great job in providing a simulation framework, but then if you use it to build something that is impossible... well, that's another story.

Let's take an example. Suppose I want to build a perfect backoff technique for an ALOHA channel. Theory tells us that all I need to know is the number of other nodes that have their Tx buffer full. Can I know it in a real system? NO. Can I know it using a simulator? Yes.
Like Tom Henderson said, this might be useful too - to explore what's the best performance you could get with a perfect system. That's not paper-ware, it's a theoretical limit that is useful.

A different thing is if you bury this "all-knowing" feature in your system, and then claim that the whole system works. E.g., suppose that you can know how many packets are forwarded by a node in a mesh network (this is a common mistake, you can't know it), and then you build a wonderful system to spot the nodes that are dropping packets maliciously... without telling that it's a theoretical bound.
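The "all-knowing backoff" contrast above can be made concrete with a toy experiment. The sketch below is only my illustration (plain Python rather than ns-3, and the names `oracle` and `blind` and every parameter value are made up): the oracle policy reads the global backlog count, which is trivial inside a simulator but impossible to know exactly in a real network.

```python
import random

def simulate(num_nodes, arrival_prob, slots, policy, seed=1):
    """Toy slotted-ALOHA model.  `policy(k)` returns the per-node
    transmit probability given k backlogged nodes; an 'oracle'
    policy may use k, a realistic one cannot."""
    rng = random.Random(seed)
    backlog = [0] * num_nodes          # queued packets per node
    delivered = 0
    for _ in range(slots):
        for i in range(num_nodes):     # new packet arrivals this slot
            if rng.random() < arrival_prob:
                backlog[i] += 1
        busy = [i for i in range(num_nodes) if backlog[i] > 0]
        k = len(busy)
        if k == 0:
            continue
        p = policy(k)
        txers = [i for i in busy if rng.random() < p]
        if len(txers) == 1:            # exactly one sender -> success
            backlog[txers[0]] -= 1
            delivered += 1
    return delivered / slots           # throughput in packets per slot

# 'oracle' needs the global backlog count k -- trivial in a simulator,
# impossible to know exactly in a real network:
oracle = lambda k: 1.0 / k
# 'blind' uses a fixed probability -- implementable, but far from optimal:
blind = lambda k: 0.1
```

Under heavy load, `simulate(50, 0.05, 5000, oracle)` should land near the classic 1/e ≈ 0.37 packets/slot bound, while the blind policy collapses to a few percent. The oracle result is a useful theoretical limit only if the paper says so; presented as an implementable protocol, it is paper-ware.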

So, if you would like my (personal) classification of paper-ware, it is:
- Is it a theoretical limit (and did the authors clearly state that it is)?
- Could it be implemented in a real system?

If both answers are no, then I have a huge red flag.

Note that ns-3 tries its best to let the user do the 2nd, but (as with any simulated system) if the user wants to do something... they will.

Charles Pandian

unread,
Feb 20, 2023, 5:40:14 AM2/20/23
to ns-3-...@googlegroups.com
"I think I did made up the term in that post"

Dear  Tommaso,

Of course, you must be the inventor of the term 'paper-ware' – because I didn't see the word anywhere else on the internet.

But amazingly, that AI really knows TOO MUCH about 'paper-ware' – possibly, it got trained with some 'not publicly available' text sources.

Anyway, if someone has seen the term 'paper-ware' in any article on the internet, then kindly share a link here.

I am working on refining that article based on the comments and feedback from you and Tom.

Charles Pandian,




Charles Pandian

unread,
Feb 21, 2023, 12:59:06 AM2/21/23
to ns-3-...@googlegroups.com
Thank you all for your comments and suggestions. I will refine that article by incorporating all your suggestions.
But I was still searching for articles that discuss "paper-ware" and could not find a single one on the internet.

Again, as a last hope, I asked that same AI to give some reference articles on 'paper-ware' - the AI replied the following:

Here are a few articles and papers that discuss paper-ware in academic publishing:

  1. "The Conference Review Process: In Need of Reform" by Michael Mitzenmacher and Eli Upfal, which discusses the role of conferences in promoting paper-ware: https://people.csail.mit.edu/madry/papers/conferences.pdf

  2. "The Prevalence of Bad Data and How to Avoid It" by Jeffrey Leek, which discusses the problem of paper-ware and offers advice for researchers to avoid it: https://simplystatistics.org/2017/10/23/the-prevalence-of-bad-data-and-how-to-avoid-it/

  3. "In search of scientific integrity: how to combat scientific misconduct" by Mariëtte Le Roux, which discusses various forms of academic misconduct, including paper-ware: https://www.nature.com/articles/nbt0303-299

  4. "When the Difference between “Novel” and “Old” Doesn’t Matter: An Analysis of Computer Science Papers and their Impact" by Emre Kıcıman and Aleksandrs Slivkins, which discusses the impact of paper-ware on the field of computer science: https://www.microsoft.com/en-us/research/publication/when-the-difference-between-novel-and-old-doesnt-matter-an-analysis-of-computer-science-papers-and-their-impact/

  5. "The Perils of Publish or Perish" by Audrey Watters, which discusses the pressure to publish and the resulting problem of paper-ware: https://www.insidehighered.com/blogs/hack-higher-education/perils-publish-or-perish

These resources provide a good starting point for understanding paper-ware and its impact on academic publishing.


But amazingly, I could not find a single reference to any of the above articles anywhere on the internet.

Can anyone explain the reason?

If you find the source of any of the above articles, please post a link here or kindly forward to my mail ID


Charles Pandian,


igs...@gmail.com

unread,
Mar 5, 2023, 2:18:16 AM3/5/23
to ns-3-users
Judith and Tom, thank you for your feedback and suggestions – certainly I will clarify it in my original article. I was really amazed by your views, and I understand the tone that I missed in my article.

Tom : "
I agree with the intent of your draft blog to try to steer people away from bad research practices."


In one sense, you are correct. But it is about steering people away from bad/fake research papers – particularly 'simulation based' paper-ware. A paper with poor content will itself automatically steer the researcher away from it. But a paper-ware (one that claims the use of a simulator such as ns-3) will look like an 'excellent paper with an implementable idea' – that is what I am worried about.

So definitely I should stick to discussing only paper-ware that claims the use of a simulator such as ns-2 or ns-3 – and I should clarify that in the original blog article itself.


Tom : "There are also plenty of examples of what you describe in non-simulation contexts, including purely math-based models, and testbeds containing non-commercial-grade implementations.  Much of research literature uses purely math, at a high level of analytical abstraction, making approximations about the real world like independence of events, infinite buffers, etc.  Still, this kind of analysis is useful to understand performance bounds and trends in behavior."

Judith: "Also, some algorithms that are respectable and correct mathematically are not practical to implement--just thinking about how to build the candidate algorithm into an ns-3 simulation will tell you that very quickly, even if the computational complexity is theoretically manageable. If you can't actually build it into a protocol then it is by definition paper-ware no matter how clever it is."

Of course, I am not considering such a non-implementable, mathematical-model-based publication as "paper-ware".

For example, in previously published works, we may see a lot of 'location based MANET/WSN/VANET/FANET routing protocols'. In some of those papers, the algorithm 'magically' knows all its neighbors and their locations: the authors assume every node has GPS capability (and take the location information of ALL nodes directly from the simulator itself to simulate a GPS) and claim that their location based protocol is able to find the best route and so will give better performance than all other protocols – nothing wrong in that by itself. But what they 'hide', or fail to address, in such papers is: (1) how they are going to manage the actual increase in message overhead for sharing the location information of nodes with one another to find a 'global, ideal route'; (2) since node locations will change rapidly in a mobile scenario, how their algorithm can get location information periodically without increasing that (location sharing) message overhead. They publish hypothetical results by forgetting the above facts. At first look, this kind of paper will look good. So, if another researcher tries to repeat that work in a genuine way, then certainly they will never get the 'good' results of that previous 'reference' paper. That is an ideal example of 'paper-ware'.
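To see why point (1) above is fatal, a back-of-the-envelope sketch helps. The model below is entirely my own illustration (the function, its parameters, and the 'one rebroadcast per node' flooding cost are assumptions, not taken from any paper): if every node floods a position update each time it moves more than some tolerance, the network-wide control traffic grows with the square of the node count.

```python
def location_update_overhead(num_nodes, speed_mps, tolerance_m):
    """Estimated location-sharing control messages per second,
    network-wide, under a naive 'everyone floods to everyone' scheme.
    All parameters and the flooding-cost model are illustrative
    assumptions, not measurements."""
    updates_per_node_per_s = speed_mps / tolerance_m   # faster nodes update more often
    flood_cost = num_nodes                             # each flood: ~1 rebroadcast per node
    return num_nodes * updates_per_node_per_s * flood_cost

# 20 nodes moving at 10 m/s with a 50 m tolerance:
small = location_update_overhead(20, 10, 50)    # 20 * 0.2 * 20 = 80 msgs/s
# 200 nodes, same mobility: 100x the control traffic, not 10x:
large = location_update_overhead(200, 10, 50)   # 200 * 0.2 * 200 = 8000 msgs/s
```

Going from 20 to 200 nodes multiplies the control traffic by 100, not 10. A simulation that reads node positions straight out of the simulator never pays this cost – which is exactly what such papers hide.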

Similarly, we may find papers talking about implementing a full-fledged Deep Learning Neural Network inside a tiny sensor node, forgetting all about the actual capabilities of the sensor node. Of course, there are really good works on AI/ML related implementations addressing MAC layer issues – the original work will only address a simple prototype case with two or three nodes and present it as a 'proof of concept' – the original authors genuinely state its limitations in their paper – definitely this is not paper-ware. But, without realizing anything, another paper may claim that it is using the same AI/ML related MAC on 1000 nodes in a huge mobile ad hoc network and achieving improved performance – possibly that one is paper-ware.

Madan Kumar Pande

unread,
Mar 5, 2023, 5:17:22 AM3/5/23
to ns-3-...@googlegroups.com

Dear Sirs and Madams,

             The train of thought that is being traced in the paperware dialog needs to explicitly define its aim. There is some drifting and obfuscation.

2. In current times, serious research is done only in an established Ecosystem.  Just look at the work done and paperware coming out from Eindhoven, MIT.edu, CTTC, Beijing University, Stanford.edu. Compare it with Tandon Institute's NYUSim. NYUSim as yet constitutes a limited-area ecosystem, while using Matlab as its backend. They do have some excellent researchers and Test-Beds.

3. "Matlab", a system level simulator, constitutes an Ecosystem for research, widely used by Institutes and  Universities around the world. Matlab is very resource rich and very versatile.. It is  grounded in Matrix Numerical analysis, Digital image processing, Statistical optimization methods and more. It allows mathematical equations to be solved and results plotted in a very pleasing presentation.Talking of  Numerical results,  please separate the Chaff from the Grain. What Information scientists do in numerical analysis  from what is done with Matlab based Monte-Carlo Simulations, while proving their hypotheses.

4. The next Ecosystem for research, mainly in Telecom (though not limited to it), is NS3. Its authors set out to create one such ecosystem in open source, moving away from NS2, finding it less extensible and needing a major architectural rehash. TomH, TomP, PdpBarnes, NicolaB, and many others have succeeded in making it a very viable Ecosystem for research, despite limitations of funding and manpower. Many kudos to them. NS3 is an excellent open source ecosystem for research, though it needs very many more contributions to extend its scope. Imagine if it were to incorporate direct access to open source GNU Octave!!! It would then establish a very profound ecosystem, which would beat Matlab hands-down in performance, and cover Telecom, DSP, bionics research, ML/AI... every piece of code in both GNU Octave and NS3/CTTC-5G-NR being in C++ (std=11 and above). I say this because I use GNU Octave and NS3/5GNR in my work and also use an IPC between GNU Octave and NS-3. The two seem to be made for each other. As I recall from TomH's WNS3 2022 talk, NS3 has established some access to some test-beds in the USA.

5. If one is considering "What constitutes original and reproducible research", it is perhaps best exemplified by the group of researchers led by Professor Emil Bjornson from Eindhoven in the area of 4G-6G wireless communications, albeit all their published papers and excellent books use the very rich Matlab system level simulator. With very few changes, it is possible to use GNU Octave instead.

6. There are some excellent groups involved at the edge of technology research in many universities in the USA, often funded by Defense (say, Wright-Patterson Air Force Base). They work under established leaders in the concerned areas of technology, across many universities as well. This is the beauty of higher research in the USA.

 7. Please do not judge paperware produced during MS/MTech work. It is not original research and one cannot expect it from a fresh graduate; he has not yet done a deep dive into any domain. It is somewhat like the large amount of paperware coming from some Chinese universities and institutes these days, which just extends the work of the GURU professor. Even with the largest research funding done by the Government in China, they have not publicly established any research ecosystem. Most of the work uses Matlab.

8.  To conclude, most real research work is done around an established Ecosystem, ably led by subject-matter experts. Every attempted paper is reviewed by three referees picked from subject-matter experts. Thus, if a paper "passes muster" while inadequate in research content, it is the organizers who are to be blamed for choosing poor referees for their so-called "Peer-Reviewed, Scopus-Indexed, Flag-Ship Conferences" – please pardon the epithet used by all the mushrooming conferences.

    The days of winning a Nobel prize working in the basement of the Cavendish Labs, U.K. or in the basement of MIT Labs are now over (after the successful work on Apollo Module recovery, the basement lab of C.S. Draper became the Charles Stark Draper Lab of MIT); the Ecosystem remained the MIT Lab.

With Regards,

Madan Pande

P.S. My views are mine alone and are not meant to upset anyone...

=====================

Charles Pandian

unread,
Mar 6, 2023, 1:31:27 AM3/6/23
to ns-3-...@googlegroups.com
Dear Madan Kumar Pande,

Thank you for your excellent feedback and suggestions. They really give all of us the opportunity to better define what 'paper-ware' is – especially when a paper uses a 'network simulator' such as ns-3 for implementing the model and producing the results. Since we are discussing it in an ns-3 forum, mostly what we discuss here is about papers that use a network simulator such as ns-2 or ns-3 for implementing the model and producing the results.

What is paper-ware in 'Network simulation' based research publication?

Paper-ware is an 'idea of a software model that was claimed to be implemented under a network simulator such as ns-3' but is practically unimplementable, or implemented by 'wrong-doing', under a simulator such as ns-3. In other words, paper-ware is a 'software model' that can only live on that 'colorful' paper.


I understand that you are talking about simulation results based on Matlab or Octave like environments. Excellent points that will really help to define paper-ware in the field of network simulation.

For example, if someone proposes a genetic-algorithm based optimization algorithm 'X' to find the best route in a given map, then Matlab like tools are the best, since Matlab has a rich toolbox for optimization based research works. So, a publication based on a Matlab simulation of such a model might not be 'paper-ware'.

But, if someone publishes a paper and claims that they implemented an improved AODV MANET routing protocol based on previous work X under the simulator ns-3, and got better results than all other existing MANET routing protocols, then possibly this one will be 'paper-ware'. The reason is obvious: in the previous work X, they have a fixed map, so their idea will definitely work on it. But in a mobile network, that map changes rapidly at every instant of the simulation. The locations of the nodes will change so rapidly that no optimization algorithm will practically be able to find a best route – or at least it will not work better than standard AODV. This is what paper-ware is.

I believe that there are some aspects of networking that cannot be simulated under Matlab like tools. For that purpose only, network simulators exist. Of course, Matlab and Octave are good at rapid prototyping of most networking related concepts and algorithms. But certainly they are not suitable for simulating some fine "networking aspects" of a realistic networking scenario, which involves packet level, realistic communication happening at the different layers of the network protocol stack. Here I am mostly talking about 'paper-ware' from this perspective of simulation.


 7. Please do not judge paperware produced during MS/MTech work. It is not original research and one cannot expect it from a fresh graduate; he has not yet done a deep dive into any domain. It is somewhat like the large amount of paperware coming from some Chinese universities and institutes these days, which just extends the work of the GURU professor. Even with the largest research funding done by the Government in China, they have not publicly established any research ecosystem. Most of the work uses Matlab.

Irrespective of the 'personal title' of the authors, every published paper is a research paper for any new scholar who is trying to do his/her own research. So a new scholar should avoid the pitfall of selecting a paper-ware model as their main reference model. Of course, I understand that graduates, postgraduates and research scholars are publishing papers because of their academic needs. But if some of them are publishing paper-ware because they are 'fresh graduates', that will not justify it in any way. If someone publishes a paper-ware only after very hard work, that 'hard work' will never justify it as correct in any way – because it will definitely spoil some other new scholar's time and effort – so a new scholar should recognize such paper-ware models and avoid them.

😊To avoid paper-ware publications from graduate or postgraduate students, universities or their supervisors may consider making paper publication optional for such students while working on their course project work. In that case, we will get research quality publications from capable undergraduate students. (Or at least insist that the student put some kind of declaration in such published papers, to make others obviously understand that it is not a serious research work 😊)


2. In current times, serious research is done only in an established Ecosystem.  Just look at the work done and paperware coming out from Eindhoven, MIT.edu, CTTC, Beijing University, Stanford.edu.
8.  To conclude, most real research work is done around an established Ecosystem, ably led by subject-matter experts. Every attempted paper is reviewed by three referees picked from subject-matter experts. Thus, if a paper "passes muster" while inadequate in research content, it is the organizers who are to be blamed for choosing poor referees for their so-called "Peer-Reviewed, Scopus-Indexed, Flag-Ship Conferences" – please pardon the epithet used by all the mushrooming conferences.

Frankly speaking, as of now, established journal review mechanisms do little to really filter 'paper-ware'.
In his reply, Tom mentioned the below:

https://www.acm.org/publications/policies/artifact-review-and-badging-current

This year, for accepted papers to WNS3, I am forming a committee to
evaluate all papers to check their eligibility for such badges.  This
will involve the committee inspecting whatever code and documentation
that the authors have provided in the links in their paper and
attempting to reproduce the results and figures.  This evaluation would
occur in the spring timeframe (April-June).

Only this kind of 'badge' on a published paper will avoid 'paper-ware' in publications that rely on 'simulation software'. But every piece of 'complex, genuine' software that a scholar develops for his/her research will be considered a precious entity by him/her – so mostly they will not wish to share it on a public forum.

Learning to identify paper-ware is the only way to avoid it in serious research work.

Madan Kumar Pande, thanks once again for your excellent feedback and concerns on redefining 'paper-ware' from the perspective of Matlab like simulation tools.


Charles Pandian,


igs...@gmail.com

unread,
Jul 6, 2023, 3:00:56 AM7/6/23
to ns-3-users

Around five months after starting this thread on this group, I tried asking Google's Bard AI a question about paper-ware.

In fact, I asked 'Bard' almost the same question (that I asked earlier of another AI): "What is a paper-ware on published journal paper?"

The following is the answer from Bard AI :

The term "paper-ware" is a derogatory term used to describe software or algorithms that are described in a published journal paper, but that are not actually implemented or tested in practice. Paper-ware algorithms are often theoretical in nature, and they may not be feasible to implement in real-world systems.

The term "paper-ware" was coined by Tommaso Pecorella, a developer of the ns-3 network simulator. Pecorella used the term to describe algorithms that are described in ns-3 papers, but that are not actually implemented in ns-3. He argued that these algorithms are often impractical and unrealistic, and that they do not provide a true representation of how the algorithms would perform in real-world systems.

The term "paper-ware" has been criticized by some researchers, who argue that it is too harsh and that it discourages researchers from publishing theoretical work. However, the term has also been defended by others, who argue that it is a useful way to identify and criticize unrealistic and impractical algorithms.

Here are some of the characteristics of paper-ware:

- The algorithm is described in detail in the paper, but there is no implementation or testing of the algorithm in practice.
- The algorithm is based on unrealistic assumptions, such as perfect knowledge of the system state.
- The algorithm is too complex or inefficient to be implemented in a real-world system.
- The algorithm has not been evaluated in terms of its performance or scalability.

If you are reading a published journal paper, you can look for the following signs to identify paper-ware:

- The paper does not provide any implementation details or performance results.
- The paper is based on unrealistic assumptions.
- The paper is too theoretical or mathematical.
- The paper is not well-written or organized.

If you find that a paper contains paper-ware, you should be cautious about using the results of the paper. The algorithm may not be feasible to implement in practice, and it may not perform as well as the paper claims.

So, Bard started to believe that Tommaso Pecorella is the inventor of the term 'paper-ware'.

I asked the same question on my mobile phone.

 

So, finally, Bard came to the conclusion that "The word 'paper-ware' is a neologism coined by Tommaso Pecorella".

 

Congratulations, Tommaso Pecorella! At last, you have really provided a new word for a future English dictionary.

 ...

But it raised another concern on AI here.
As far as we know, there are only a very few sites on the internet that talk about 'paper-ware'.
Even though Bard provided a 'correct or accurate' answer, this shows another dangerous possibility.

Our experience shows that, theoretically, it would be very easy to feed 'incorrect knowledge' to an AI system by intentionally putting some wrong information here and there on the internet.

What do the people of this group think about this possible danger of misleading an AI system?

Charles Pandian
