Author Guidelines



Editorial and Review Process

DLINE follows a complete peer-review process. When a paper is submitted, it is first checked by the editor for basic quality, and each paper is then subjected to a plagiarism check. The journals follow a double-blind review process and maintain standards to ensure the quality of submissions.

Pre-check

Immediately after submission, the editor carries out an initial check to assess:

  • Suitability of the manuscript to the journal/section/special issue;
  • Qualifications and background of the authors;
  • Whether the paper meets a minimum quality threshold (low-quality papers are rejected at this stage).

Peer-review

Submissions are reviewed by a minimum of two experts in the domain. The reviewers may be either independent reviewers or editorial board members, and the review is double-blind.

The reviewers are expected to provide objective comments on originality/novelty, methodology and design, datasets, experimentation, presentation and language, and contribution to the domain. The reviewers recommend that the editors accept the paper, accept it with minor revisions, accept it with major revisions, or reject it. The editors forward the reviewers' detailed comments to the authors. In the last five years, no paper has been accepted without revision.

For More Information contact:   info@dirf.org

Use of AI, including LLM and AI-assisted technologies

At submission, authors must disclose in the submitted paper whether large language models or other AI-assisted tools were used to write it.

Authors who employ such technology should describe its use in the appropriate section of the submitted work. For example, when AI is used as a writing assistant, this should be described in the Acknowledgements section; if AI is used for data collection, analysis, or figure generation, the authors should describe such use in the Methods section. Responsibility for any material produced with AI-assisted technologies rests with the human authors. Authors are therefore expected to review and edit the output, as AI can produce information that is incorrect, incomplete, or biased. They should never list or cite AI or AI-assisted technology as an author or co-author. Authors must also be able to vouch that their paper is free of plagiarism, including in any citations and figures/images generated with AI, and must ensure full attribution, with proper citations, for all quotations.

The distinction between AI-assisted content and AI-generated content is central to understanding the nature of AI use in academic writing.

AI-assisted content is work primarily written by a person and enhanced by AI tools. For example, the author might have used AI for grammar checking, to clarify sentence construction, or for style recommendations. The author remains in control of the work, and AI is merely a tool to polish the final product.

Most publishers permit such assistance without requiring formal acknowledgement, provided the work is original and the integrity of the research is maintained.

AI-generated content, by contrast, is content produced primarily by the AI itself: the tool generates large portions of text, or even entire sections, when instructed (prompted) by the author.

This raises ethical concerns, especially regarding originality, accuracy and authorship. Generative AI draws its content from various sources, such as web scraping, public datasets, code repositories, and user-generated content – essentially, any content that it can access.

Thus, for AI-generated content, authors are required to make clear and explicit disclosures. In many cases this type of content is subject to restrictions, and in some cases it is rejected outright; authors must disclose any use of AI-generated content by citing it appropriately. Conventions for citing AI use vary, but all agree that the name of the generative tool, the date of access, and the prompt used should be stated. This level of transparency is necessary to uphold the credibility of academic work. Other forms of AI assistance, such as correcting code, generating tables or figures, reducing word count, or verifying analyses, cannot be cited directly in the body of the manuscript; in line with current best-practice recommendations, they should be declared at the end of the manuscript. Authors are responsible for checking the accuracy of all AI content, whether AI-assisted or AI-generated, and for ensuring it is free from bias, plagiarism, and potential copyright infringement.

For further information, please visit: https://publicationethics.org/guidance/cope-position/authorship-and-ai-tools

Copyright© 2016 International Journal of Information Studies (ijis)