Review Policies

Publication Requirements

We receive a large number of submissions, and although we publish only a few papers, each published paper meets our standards for quality and novelty. Our principles and requirements are set out below.

The submissions should ensure novelty and originality. Content must not have been previously published

Methods used should be new or should extend existing models

Data should be genuine and accurate, and we advocate the use of real data rather than simulated data

The submissions should make a significant contribution to the domain

Plagiarism and AI tools use

The first step is to screen every paper for plagiarism and for the use of AI writing tools, using dedicated detection tools for both checks. If significant similarity with previously published content is detected, the paper is rejected. We also scan each paper to determine whether an AI tool was used in its writing.

Desk Review

Submissions received in the submission system undergo a desk review. The editors and editorial board read each paper and determine whether it is worthy of full review, assessing its overall quality, presentation, and contribution. When the editorial board approves a paper, it is forwarded to the reviewers.

Review Model

When a paper is submitted, the chief and associate editors perform a pre-publication check: the desk review. Papers that pass the desk review are then reviewed by domain experts. During the desk review, the editors evaluate the submitted manuscripts, without identifying the authors, to determine their alignment with the journal's focus and their capacity to significantly advance knowledge within the journal's scope. If necessary, the editors consult members of the scientific committee throughout this process.

This assessment takes place every fifteen days. If a manuscript fails to meet the journal's guidelines, the authors are informed of this decision within sixty days of submission. Papers that pass the desk review are sent, under a double-blind review system, to two expert reviewers affiliated with the journal or to external scholars chosen for their expertise in the article's subject matter. Authors should note that the total review period is usually six months; they can check the status of their submissions in the online submission system.
The Journal of Digital Information Management follows a triple-blind review system in which each submission is reviewed by a minimum of three domain experts. The experts are selected for their domain knowledge and research experience, as well as their publication and citation profiles. A paper is accepted ONLY if ALL three reviewers consistently recommend acceptance.

The journal applies a rigorous review scale covering originality, novelty, methodological strength, experiments, data, inferences, analysis, presentation, and language. Authors should note that the journal enforces a robust anti-plagiarism policy: plagiarism is checked at many levels by different teams, including editors, sub-editors, reviewers, and plagiarism-detection experts.

Reviewers are required to recommend one of four decisions: clear acceptance, major revision, minor revision, or rejection. In the last ten years, the journal has not received a single clear-acceptance recommendation from the reviewers. Papers recommended for major revision are not accepted.
We convert the reviews into numerical scores using sentiment analysis; these scores help us decide whether to accept or reject a paper.

The journal does not use student reviewers (such as PhD or post-doctoral researchers) in the review process. All associated reviewers have considerable experience and strong publication profiles.

Review Scale

When the reviews are available, we compute the review metrics described below.
The journal uses a Likert scale ranging from 0 to 6, where zero means total rejection and six indicates clear acceptance. The scores are based on the review recommendations, and the mean score is indicated in the published paper. We use a sentiment-analysis model to convert the peer-review texts into scores: positive words in a review increase its score, and each paper's recommendations are mapped to a review score between 0 and 6.
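
As a minimal sketch of this conversion: the snippet below scores a review text on the 0-6 scale with an off-the-shelf analyzer. The journal does not name its sentiment model, so NLTK's VADER and the linear rescaling from its [-1, 1] compound score are illustrative assumptions.

```python
# Minimal sketch: convert peer-review text to the journal's 0-6 Likert scale.
# NLTK's VADER analyzer and the linear rescaling are illustrative assumptions;
# the journal does not name the sentiment model it actually uses.
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)  # one-time lexicon download

def review_to_score(review_text: str) -> int:
    """Map a review to 0-6 (0 = total rejection, 6 = clear acceptance)."""
    sia = SentimentIntensityAnalyzer()
    # VADER's compound score lies in [-1, 1]; positive words raise it.
    compound = sia.polarity_scores(review_text)["compound"]
    # Linearly rescale [-1, 1] onto the 0-6 Likert range and round.
    return round((compound + 1) / 2 * 6)

reviews = [
    "A novel method, convincing experiments, and clear presentation.",
    "The contribution is marginal and the evaluation is weak.",
]
scores = [review_to_score(r) for r in reviews]
mean_scale = sum(scores) / len(scores)  # the mean scale shown in the published paper
print(scores, mean_scale)
```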

We also measure how consistent the reviewers are with one another by running a text-similarity analysis on their recommendations. If reviewer consistency is low, we subject the paper to further review.
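
A minimal sketch of this consistency check appears below: it compares reviewer recommendations with TF-IDF cosine similarity and flags a paper when mean pairwise agreement is low. Both the similarity method and the 0.3 threshold are illustrative assumptions; the journal does not specify either.

```python
# Minimal sketch: flag a paper for further review when reviewer
# recommendations disagree. TF-IDF cosine similarity and the 0.3
# threshold are illustrative assumptions, not the journal's method.
from itertools import combinations
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def needs_further_review(recommendations: list[str], threshold: float = 0.3) -> bool:
    """Return True when mean pairwise text similarity between reviews is low."""
    tfidf = TfidfVectorizer(stop_words="english").fit_transform(recommendations)
    sims = cosine_similarity(tfidf)  # pairwise similarity matrix
    pairs = list(combinations(range(len(recommendations)), 2))
    mean_sim = sum(sims[i, j] for i, j in pairs) / len(pairs)
    return mean_sim < threshold

print(needs_further_review([
    "Accept: strong novelty and sound methodology.",
    "Accept: the method is novel and the experiments are sound.",
    "Reject: no novelty and flawed experiments.",
]))
```

Note that TF-IDF cosine similarity captures lexical overlap only; a fuller check would likely also compare the reviewers' numeric recommendations.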

Copyright© 2016 Journal of Digital Information Management (JDIM)