At some point in your Internet-driven life, you must have come across sites marked NSFW (not safe for work) and wondered who decides whether a particular website's content is safe to explore. The answer, as it turns out, is Yahoo. Until now, the Web pioneer has been using its in-house, smut-trained, porn-detecting neural network to detect offensive and adult images and mark them as NSFW. Now, the company has announced in a blog post that it is making the system completely open source, so that anyone can use it on their own content.

It is important to note that classifying content as NSFW is highly subjective: what is objectionable to one person in one place may be perfectly acceptable to another person somewhere else. For this reason, Yahoo's model focuses on a single type of NSFW content: pornographic images. It cannot identify other kinds of unsuitable material, such as graphic violence, sketches, text or cartoons.

Yahoo's porn-detecting neural network can comb through a vast variety of imagery and give each image a score, from 0 to 1, based on how NSFW it judges that particular image to be. This could prove helpful in many situations, not just censorship.

As Yahoo explains in its blog post: "Our general purpose Caffe deep neural network model (GitHub code) takes an image as input and outputs a probability (i.e. a score between 0-1) which can be used to detect and filter NSFW images. Developers can use this score to filter images below a certain suitable threshold based on a ROC curve for specific use-cases, or use this signal to rank images in search results."
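For developers who want to see what that looks like in practice, here is a minimal scoring sketch in Python. It assumes a working Caffe installation with the Python bindings; the file names, mean values and the output blob name ('prob') are assumptions based on a typical Caffe model release, not details confirmed in the post.

```python
import numpy as np
import caffe

# File names are assumptions; substitute the deploy prototxt and
# .caffemodel weights shipped with Yahoo's open source release.
net = caffe.Net('deploy.prototxt', 'nsfw.caffemodel', caffe.TEST)

# Standard Caffe preprocessing: CHW layout, BGR channel order,
# per-channel mean subtraction (the mean values here are assumed).
transformer = caffe.io.Transformer({'data': net.blobs['data'].data.shape})
transformer.set_transpose('data', (2, 0, 1))             # HWC -> CHW
transformer.set_mean('data', np.array([104.0, 117.0, 123.0]))
transformer.set_raw_scale('data', 255)                   # [0,1] -> [0,255]
transformer.set_channel_swap('data', (2, 1, 0))          # RGB -> BGR

image = caffe.io.load_image('photo.jpg')                 # RGB floats in [0,1]
net.blobs['data'].data[...] = transformer.preprocess('data', image)

# Forward pass; the two-class softmax output is [SFW, NSFW].
out = net.forward()
nsfw_score = out['prob'][0][1]
print('NSFW score: %.3f' % nsfw_score)
```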

Yahoo's system is probably the first open source model for identifying NSFW images. The search giant decided to advance the endeavour by releasing its deep learning model to the world. The open source release will let developers experiment with a classifier for NSFW detection and get back to Yahoo with ways to improve it.

Available for download on GitHub, the system can also be used to inspect one's emails and messages without any human intervention or privacy intrusion, and to warn the user if an image in their emails or messages is potentially NSFW, as shown in the sketch below.
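Once every image has a score, acting on it is straightforward. The sketch below shows how a client might flag suspect attachments; the 0.8 threshold is purely illustrative, since Yahoo advises picking a threshold from an ROC curve for each specific use-case.

```python
def partition_by_nsfw_score(scored_images, threshold=0.8):
    """Split (path, score) pairs into safe and flagged lists.

    The 0.8 default is illustrative only; tune the threshold from an
    ROC curve for your own use-case, as Yahoo suggests.
    """
    safe, flagged = [], []
    for path, score in scored_images:
        (flagged if score >= threshold else safe).append(path)
    return safe, flagged

# Hypothetical scores, e.g. produced by the scoring sketch above.
safe, flagged = partition_by_nsfw_score([('cat.jpg', 0.02),
                                         ('beach.jpg', 0.91)])
for path in flagged:
    print('Warning: %s may contain NSFW content' % path)
```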

Find more information in Yahoo's blog post here.

[Top Image - Shutterstock]