Original scientific article

HORNED LIZARD-CATBOOST FRAMEWORK FOR CYBERBULLYING PREVENTION IN SOCIAL NETWORKS

By

N. Sheba Pari

Research Scholar, School of Computer Science Engineering and Information Systems, Vellore Institute of Technology, Vellore, Tamil Nadu, India

K. Senthil Kumar

Professor, School of Computer Sciences and Engineering, Vellore Institute of Technology, Vellore, Tamil Nadu, India

Abstract

Online social networks are increasingly susceptible to cyberbullying because of their large volumes of user-generated content. Current detection methods are primarily post-event and lack built-in prevention strategies, limiting their ability to protect user privacy and platform security. In this paper, a Horned Lizard CatBoost Framework (HLCF) is proposed to predict and prevent active cyber threats in social networks in advance. It couples Horned Lizard Optimisation for adaptive feature selection with a CatBoost-based classifier to precisely differentiate between malicious and non-malicious activity. Existing approaches lack methods that combine optimisation-driven feature selection with a preventive access-control mechanism. The framework was tested on benchmark social media datasets comprising over 47,000 instances, using cross-validation and standard train-test splits. The findings indicate that the optimised threat prediction achieved 99.98% accuracy, 99.98% precision, 99.98% recall, and a 99.98% F1-score, while reducing the error ratio by 0.0001%. The proposed HLCF provides a scalable solution for improving security and privacy. This work demonstrates the efficiency of combining bio-inspired optimisation with machine learning for cyberbullying prevention in social networks.
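The abstract's pipeline, a population-based optimiser searching for a feature subset that maximises a classifier's accuracy, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the fitness function (a nearest-centroid scorer standing in for CatBoost), the move rules, and all parameter values are assumptions chosen to keep the sketch self-contained.

```python
import numpy as np

def centroid_accuracy(X, y, mask):
    """Stand-in fitness: nearest-centroid accuracy on the selected features.
    In the actual HLCF this role would be played by a CatBoost classifier."""
    if not mask.any():
        return 0.0
    Xs = X[:, mask]
    c0 = Xs[y == 0].mean(axis=0)
    c1 = Xs[y == 1].mean(axis=0)
    pred = (np.linalg.norm(Xs - c1, axis=1) <
            np.linalg.norm(Xs - c0, axis=1)).astype(int)
    return float((pred == y).mean())

def horned_lizard_select(X, y, pop=12, iters=30, seed=0):
    """Hypothetical binary feature-selection loop in the spirit of a
    lizard-style metaheuristic: each candidate is a feature mask that
    partly copies the best mask found so far (exploitation) and randomly
    flips a few bits (exploration)."""
    rng = np.random.default_rng(seed)
    n_features = X.shape[1]
    masks = rng.random((pop, n_features)) < 0.5   # random initial population
    best = masks[0].copy()
    best_fit = centroid_accuracy(X, y, best)
    for _ in range(iters):
        for i in range(pop):
            cand = masks[i].copy()
            # drift toward the current best mask on ~half the positions
            follow = rng.random(n_features) < 0.5
            cand[follow] = best[follow]
            # random bit flips keep the search exploring
            cand ^= rng.random(n_features) < 0.1
            fit = centroid_accuracy(X, y, cand)
            if fit >= centroid_accuracy(X, y, masks[i]):
                masks[i] = cand
            if fit > best_fit:
                best, best_fit = cand.copy(), fit
    return best, best_fit
```

In a full pipeline, the returned mask would restrict the feature set fed to the final classifier, and the fitness call would be replaced by cross-validated accuracy of that classifier on the candidate subset.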


Citation

This is an open access article distributed under the Creative Commons Attribution Non-Commercial License (CC BY-NC), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


The statements, opinions, and data contained in the journal are solely those of the individual authors and contributors and not of the publisher and the editor(s). The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.