
Counter dataset

An open-source, pseudonymized dataset designed to support research on radicalization detection, with NER annotations. It is the first publicly available multilingual dataset for radicalization detection, gathered from a variety of sources.

Main website

Description

The dataset contains multilingual content from forums, Telegram, social media, and other sources, in English, French, and Arabic. It covers a range of radical ideologies and is pseudonymized to protect privacy while preserving the data's usefulness. Annotations are provided for calls to action, radicalization level, and named entity recognition.
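As an illustration only, the following minimal Python sketch shows how records carrying the three annotation layers described above might be read, assuming a JSON Lines distribution; the file name and field names (text, language, call_to_action, radicalization_level, entities) are hypothetical and do not reflect the official schema of the release.

import json

# Minimal sketch: iterate over a hypothetical JSON Lines export of the dataset.
# Field names below are illustrative assumptions, not the official schema.
def load_records(path):
    with open(path, encoding="utf-8") as f:
        for line in f:
            record = json.loads(line)
            yield {
                "text": record["text"],                          # pseudonymized content
                "language": record["language"],                  # e.g. "en", "fr", "ar"
                "call_to_action": record["call_to_action"],      # call-for-action label
                "radicalization_level": record["radicalization_level"],
                "entities": record["entities"],                  # NER annotation spans
            }

if __name__ == "__main__":
    for rec in load_records("counter_dataset.jsonl"):            # hypothetical file name
        print(rec["language"], rec["radicalization_level"])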

Citation and publication(s)

If you use this work, please cite:

Arij Riabi, Menel Mahamdi, Virginie Mouilleron, and Djamé Seddah. 2024. Cloaked Classifiers: Pseudonymization Strategies on Sensitive Classification Tasks. In Proceedings of the Fifth Workshop on Privacy in Natural Language Processing, pages 123–136, Bangkok, Thailand. Association for Computational Linguistics.
@inproceedings{riabi-etal-2024-cloaked,
 abstract = {Protecting privacy is essential when sharing data, particularly in the case of an online radicalization dataset that may contain personal information. In this paper, we explore the balance between preserving data usefulness and ensuring robust privacy safeguards, since regulations like the European GDPR shape how personal information must be handled. We share our method for manually pseudonymizing a multilingual radicalization dataset, ensuring performance comparable to the original data. Furthermore, we highlight the importance of establishing comprehensive guidelines for processing sensitive NLP data by sharing our complete pseudonymization process, our guidelines, the challenges we encountered as well as the resulting dataset.},
 address = {Bangkok, Thailand},
 title = {Cloaked Classifiers: Pseudonymization Strategies on Sensitive Classification Tasks},
 author = {Riabi, Arij and Mahamdi, Menel and Mouilleron, Virginie and Seddah, Djam{\'e}},
 year = {2024},
 booktitle = {Proceedings of the Fifth Workshop on Privacy in Natural Language Processing},
 publisher = {Association for Computational Linguistics},
 editor = {Habernal, Ivan and Ghanavati, Sepideh and Ravichander, Abhilasha and Jain, Vijayanta and Thaine, Patricia and Igamberdiev, Timour and Mireshghallah, Niloofar and Feyisetan, Oluwaseyi},
 pages = {123--136},
 url = {https://aclanthology.org/2024.privatenlp-1.13},
 hal_url = {https://inria.hal.science/hal-04624789v2/file/Private_NLP-7},
 hal_pdf = {https://inria.hal.science/hal-04624789v2/file/Private_NLP-7.pdf},
}
Arij Riabi, Virginie Mouilleron, Menel Mahamdi, Wissam Antoun, and Djamé Seddah. 2025. Beyond Dataset Creation: Critical View of Annotation Variation and Bias Probing of a Dataset for Online Radical Content Detection. In Proceedings of the 31st International Conference on Computational Linguistics, pages 8640–8663, Abu Dhabi, UAE. Association for Computational Linguistics.
@inproceedings{riabi-etal-2025-beyond,
 address = {Abu Dhabi, UAE},
 author = {Riabi, Arij and Mouilleron, Virginie and Mahamdi, Menel and Antoun, Wissam and Seddah, Djam{\'e}},
 title = {Beyond Dataset Creation: Critical View of Annotation Variation and Bias Probing of a Dataset for Online Radical Content Detection},
 year = {2025},
 booktitle = {Proceedings of the 31st International Conference on Computational Linguistics},
 publisher = {Association for Computational Linguistics},
 editor = {Rambow, Owen and Wanner, Leo and Apidianaki, Marianna and Al-Khalifa, Hend and Eugenio, Barbara Di and Schockaert, Steven},
 pages = {8640--8663},
 url = {https://aclanthology.org/2025.coling-main.578/},
 hal_url = {https://hal.science/hal-04867863},
 hal_pdf = {https://hal.science/hal-04867863v1/file/hal.pdf},
}

Contact

For more information or to ask a question, please contact Djamé Seddah:

djame.seddah[at]inria.fr