Please use this identifier to cite or link to this item: https://elibrary.khec.edu.np:8080/handle/123456789/673
Title: Prasta Nepali
Authors: Aakash Pradhan (750301)
Ashish Lawaju (750307)
Deepesh Kayastha (750311)
Sangam Thapa (750339)
Pratham Dahal (740327)
Advisor: Er. Dinesh Gothe
Keywords: Prasta Nepali, Grammar checking, Transformer models, Artificial intelligence, Contextual relationships, Accuracy, Loss, BLEU score, Candidate translation, Reference translation
Issue Date: Aug-2023
College Name: Khwopa Engineering College
Level: Bachelor's Degree
Degree: BE Computer
Department Name: Department of Computer Engineering
Abstract: Prasta Nepali is a web application that provides grammar checking for the Nepali language: it analyzes text, identifies grammatical errors, and suggests corrections. Traditionally, grammar checking required the manual creation and upkeep of predefined grammar rules, demanding substantial effort. Recent advances in artificial intelligence, notably the emergence of transformer models, offer new ways to automate this task. Transformers are a deep learning architecture applicable to various tasks, including text generation and analysis. They comprise two key components: an encoder that processes the input and a decoder that generates the output. Unlike conventional methods, transformers capture contextual relationships between the words of a sentence, enabling a more nuanced understanding of grammar. The model is trained on a dataset containing both correct and erroneous sentences; once trained, it can generate corrections for new sentences, improving their grammatical accuracy. This approach reduces manual intervention while increasing the efficiency and accuracy of grammar error detection and correction. Three architectures were evaluated as grammar checkers: a stacked LSTM, a bi-LSTM with attention, and a transformer, of which the transformer performed best. The stacked LSTM model obtained a training accuracy, validation accuracy, training loss, and validation loss of 73.12%, 65.00%, 54.02%, and 64.13%, respectively; the bi-LSTM with attention obtained 88.62%, 80.43%, 25.15%, and 43.47%; and the transformer obtained 90.45%, 92.15%, 28.12%, and 51.23%. For 1-gram, 2-gram, 3-gram, and 4-gram matches between the candidate and reference translations, the bilingual evaluation understudy (BLEU) scores were 0.9037, 0.8170, 0.7838, and 0.7694, respectively.
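The encoder-decoder correction step described in the abstract can be sketched in a few lines of PyTorch. This is a minimal illustration, not the authors' implementation: the vocabulary size, special token ids, layer counts, and the greedy_correct helper are all assumptions here, and positional encodings are omitted for brevity.

import torch
import torch.nn as nn

# Illustrative constants; the report does not state the actual values.
VOCAB_SIZE = 8000        # assumed subword vocabulary for Nepali text
PAD, BOS, EOS = 0, 1, 2  # assumed special token ids
D_MODEL = 256

class GrammarCorrector(nn.Module):
    """Encoder-decoder transformer mapping an erroneous sentence to a corrected one."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB_SIZE, D_MODEL, padding_idx=PAD)
        self.transformer = nn.Transformer(d_model=D_MODEL, nhead=4,
                                          num_encoder_layers=2,
                                          num_decoder_layers=2,
                                          batch_first=True)
        self.out = nn.Linear(D_MODEL, VOCAB_SIZE)

    def forward(self, src, tgt):
        # Causal mask: each target position may attend only to earlier positions.
        mask = nn.Transformer.generate_square_subsequent_mask(tgt.size(1))
        h = self.transformer(self.embed(src), self.embed(tgt), tgt_mask=mask)
        return self.out(h)

@torch.no_grad()
def greedy_correct(model, src_ids, max_len=50):
    """Decode a corrected sentence token by token with greedy search."""
    src = torch.tensor([src_ids])
    tgt = torch.tensor([[BOS]])
    for _ in range(max_len):
        logits = model(src, tgt)               # shape (1, len(tgt), VOCAB_SIZE)
        next_id = logits[0, -1].argmax().item()
        tgt = torch.cat([tgt, torch.tensor([[next_id]])], dim=1)
        if next_id == EOS:
            break
    return tgt[0, 1:].tolist()

model = GrammarCorrector().eval()
print(greedy_correct(model, [5, 17, 42]))  # untrained weights, so output ids are arbitrary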
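The reported 1- to 4-gram BLEU scores measure n-gram overlap between the model's candidate corrections and reference sentences. The sketch below shows how such scores can be computed with NLTK; the Nepali tokens are placeholder examples, and the use of cumulative (rather than individual) n-gram weighting is an assumption.

from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

# Placeholder reference (correct) sentence and candidate correction; the final
# candidate token carries a spelling error, so higher-order n-gram scores fall
# off, much like the decreasing figures quoted in the abstract.
reference = [["म", "भात", "खान्छु", "र", "पानी", "पिउँछु"]]
candidate = ["म", "भात", "खान्छु", "र", "पानी", "पिउछु"]

smooth = SmoothingFunction().method1  # avoids zero scores when an n-gram never matches
for n in range(1, 5):
    weights = tuple([1.0 / n] * n)    # cumulative BLEU-n: equal weight on 1..n-gram precisions
    score = sentence_bleu(reference, candidate, weights=weights,
                          smoothing_function=smooth)
    print(f"BLEU-{n}: {score:.4f}")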
URI: https://elibrary.khec.edu.np/handle/123456789/673
Appears in Collections: Computer Report

Files in This Item:
prasta nepali_final_printed.pdf (Restricted Access, 1.58 MB, Adobe PDF)

