CaptchaDefender: Preventing Attacks on Captchas and Safeguarding Them Using Adversarial Attacks

Abstract
Traditional captcha systems were introduced to determine whether a user is a human or a bot performing a DDoS attack or some other attack that may harm a website or web app. Users are prompted with images or text and asked to identify them, but with the advancement of AI, attackers can build bots that identify these images or text reliably. To make captcha systems more robust, noise is often added to the images or text, yet in many cases bots are still able to bypass them; meanwhile, the added noise makes it harder for human users to see the images or text clearly. In our proposed system, we introduce a method to prevent bots from bypassing captchas by using adversarial methods to modify the images and by improving the implementation of the captcha itself. The adversarial perturbations fool the bot's machine-learning model into clicking the wrong images, preventing it from bypassing the captcha. Since generating adversarial images may produce images with so much noise that humans cannot identify them correctly, we introduce an autoencoder neural network that takes the images as input and, based on reconstruction error, selects the images with minimum noise.

Keywords - Captcha, Adversarial Image, Safeguarding
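The two steps the abstract describes can be illustrated with a minimal sketch. Everything here is an assumption for illustration, not the paper's actual system: the bot is modeled as a toy linear classifier rather than a CNN, the perturbation is an FGSM-style signed-gradient step, and the autoencoder is replaced by a PCA encode/decode pair whose reconstruction error stands in for visible noise. All names (`predict`, `recon_error`, `eps`) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 64

w = rng.normal(size=d)   # toy bot model (stand-in for a CNN classifier)
x = rng.normal(size=d)   # one "captcha image", flattened to a vector

def predict(img):
    """The bot's decision: class 1 if the linear score is positive."""
    return 1 if img @ w > 0 else 0

label = predict(x)

# Step 1: FGSM-style adversarial perturbation. For a linear score the
# input gradient is w itself; stepping against the predicted class
# flips the bot's answer while each pixel changes by at most eps.
eps = 1.0
grad = w if label == 1 else -w
x_adv = x - eps * np.sign(grad)
flipped = predict(x_adv) != label

# Step 2: reconstruction-error filter (PCA stand-in for the paper's
# autoencoder). Fit components on clean images, then among several
# candidate adversarial images keep the one with the lowest
# reconstruction error, i.e. the least visible noise.
clean = rng.normal(size=(200, d))
mu = clean.mean(axis=0)
_, _, Vt = np.linalg.svd(clean - mu, full_matrices=False)
V = Vt[:16].T                                    # top-16 components

def recon_error(img):
    z = (img - mu) @ V                           # encode
    return np.linalg.norm((img - mu) - z @ V.T)  # decode and compare

candidates = [x - e * np.sign(grad) for e in (0.5, 1.0, 2.0)]
best = min(candidates, key=recon_error)          # least-noisy candidate
```

In a real pipeline the linear model would be the attacker's image classifier, the gradient would come from backpropagation, and the filter would be a trained autoencoder; the control flow, however, matches the abstract: perturb to fool the bot, then rank candidates by reconstruction error to keep the image humans can still read.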