Now even minors can ask the Privacy Guarantor for the preventive blocking of intimate content

A new rule lowers the age at which people can ask to prevent the non-consensual dissemination of intimate photos or videos on Facebook and Instagram. But some gaps must be filled for the system to work

Minors aged 14 and over can now report to the Privacy Guarantor intimate personal content that they fear may be disclosed without their consent, even before any publication takes place. This is established by a rule included in the Reopening decree approved on 7 October by the Council of Ministers. It is an important step forward in the fight against the non-consensual dissemination of intimate images (misleadingly called revenge porn, because the word revenge suggests the victim committed some wrong in the first place). Important both because the phenomenon increasingly affects minors, and because it enshrines in law what was previously a simple pilot project born of the collaboration between the Privacy Guarantor and the social networks of the Facebook galaxy. But it has some gaps.

A decisive signal

Since 8 March, the Guarantor has allowed any adult who fears becoming a victim of the non-consensual dissemination of intimate images or videos to act preventively. Through a dedicated form, the person can report the content, and the Data Protection Authority blocks its dissemination, even in advance, on Facebook and Instagram. Decree-law 139 now extends this tool to minors aged 14 and over (previously they could use it only through their parents, an option that remains available).

"It is a signal that, slowly, a system for combating this type of crime is being structured by law, one that shows a growing desire to protect victims, who should not feel guilty but rather relieved at being able to act preventively," Marisa Marraffino, a lawyer specialising in cybercrime, explains to Wired. "I would recommend this tool to those who, for many reasons, do not want to file a criminal complaint, for example out of fear or because they are still attached to the former partner; to those who could not face a criminal trial because they suffer from anxiety or depression; and finally to those who do not know the identity of the person with whom they exchanged photos or videos (think of chats on gaming platforms, ed)."

How the Guarantor-Facebook alliance works

The pilot project involving the Guarantor and Zuckerberg's social networks is based on hash codes: numerical sequences that uniquely identify a photo or video. The Guarantor receives the report and enables the user to autonomously upload the content to a dedicated server, where it is "hashed", that is, converted into its code. The path cannot be travelled in reverse: from the hash code it is not possible to reconstruct the original photo or video, nor to trace the social profile of the person concerned.
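In its simplest form, the hashing step described above can be sketched in a few lines of Python. This is only an illustration: the article does not say which hash function the pilot project actually uses, so SHA-256 is an assumption here.

```python
import hashlib


def file_digest(data: bytes) -> str:
    """Return a short, fixed-length code that uniquely identifies the content.

    SHA-256 is an illustrative choice; the real system's hash function
    is not disclosed in the article.
    """
    return hashlib.sha256(data).hexdigest()


# The digest has the same length whatever the size of the file,
# and it cannot be reversed to recover the original image or video.
digest = file_digest(b"example image bytes")
print(digest)  # 64 hexadecimal characters
```

The one-way property the article mentions is exactly what a cryptographic hash provides: given only the 64-character digest, there is no practical way back to the pixels.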

These hashes are then added to a blacklist (accessible only to a small group of Facebook team members) and, when someone tries to post or share the reported content, the upload is blocked. The original content uploaded by the user is automatically destroyed after seven days. Fast, tamper-proof and anonymous, the mechanism might seem perfect. It is not. A first limit is immediately apparent: to protect themselves, the potential victim must have the photo or video on their own device, which is not always the case. And there are two other weaknesses.
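The blacklist check can be sketched as follows. The names and the in-memory set are hypothetical; in the real system the hashes are held server-side with restricted access, and the matching happens inside the platforms' upload pipelines.

```python
import hashlib

# Hypothetical in-memory blacklist of digests of reported content.
blocked_hashes = set()


def report_content(data: bytes) -> None:
    """Store only the content's hash; the original bytes are not kept
    (in the pilot project they are destroyed after seven days)."""
    blocked_hashes.add(hashlib.sha256(data).hexdigest())


def allow_upload(data: bytes) -> bool:
    """Reject any upload whose hash matches a reported item."""
    return hashlib.sha256(data).hexdigest() not in blocked_hashes


report_content(b"reported-photo-bytes")
print(allow_upload(b"reported-photo-bytes"))  # False: an exact copy is blocked
print(allow_upload(b"some-other-photo"))      # True: unrelated content passes
```

Note that only exact byte-for-byte copies match, which is precisely the weakness discussed below.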

The other social networks

"What happens with the other social networks?" Marisa Marraffino asks Wired. "And with the numerous sites dedicated to the non-consensual dissemination of intimate images, where the offending videos often end up? And with messaging apps, including Telegram, which is teeming with channels dedicated to this type of content? None of these platforms collaborates, and this makes the law ineffective in practice."

It is a limit that could be partially overcome in the future, explains Guido Scorza of the Italian Data Protection Authority: "A later step, after this law, could be to oblige all content hosts above a certain size, not just social networks, to adopt technology that allows them to preventively block the publication of content upon notification by the Authority. After all, we were able to activate the pilot project with Facebook and Instagram because they already used hash technology."

With messaging apps, on the other hand, the issue is more delicate: "I don't know how far we can go there, because there are not only technological issues at stake, but also ethical ones," Scorza continues. "What compromise do we want between the repression of this type of offence and privacy? If we open a door to fight sexual offences, which we can all agree on, who is to say that door will not be opened for other purposes?" The reference is to the controversy sparked by Apple's recent announcement that it wants to introduce its NeuralHash software on its devices to search for child sexual abuse material.

Any modification to the file changes the hash code

The second limitation of hash technology is that even a minimal modification of the file (cropping, applying filters, adding subtitles or effects, to name a few), or the re-compression applied when it is sent through a messaging app, automatically changes the code. "Hashing is a technology that has existed for many years, but it is used in a different way, for management applications, not for security, because it is easily circumvented," notes Andrea Barchiesi, electronic engineer, CEO and founder of Reputation Manager. "One solution could be to 'break' the videos into many hashes and then check each individual segment for manipulation: but this is potentially infinite, it would be a very complicated process. How can you predict all the possible changes to a given file?"
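The fragility Barchiesi describes is easy to demonstrate: with a cryptographic hash, changing even a single byte produces a completely unrelated digest (the "avalanche effect"). The byte strings below stand in for a real video file before and after an edit.

```python
import hashlib

original = b"original video bytes"
edited = b"original video bytes."  # a single extra byte stands in for a crop,
                                   # filter or re-compression of the file

h1 = hashlib.sha256(original).hexdigest()
h2 = hashlib.sha256(edited).hexdigest()

print(h1 == h2)  # False: the two digests share essentially nothing,
                 # so the edited copy sails past an exact-hash blacklist
```

This is why exact cryptographic hashes alone cannot keep up with trivially edited copies; "perceptual" hashes such as Microsoft's PhotoDNA were designed to tolerate minor edits, though they too can be evaded.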

Even the artificial intelligence systems in use today are not yet sufficiently trained. "With some technologies you could identify suspicious content and then have it screened by a team of analysts who progressively teach the machine why a given piece of content is harmful," Barchiesi explains. "After thousands of videos, the computer may begin to recognise similarities. But even then it could not establish certain information with certainty, for example whether the person in the video is a minor."

"We need shared rules," stresses Marisa Marraffino. "Just as there are conventions on the rights of the child, we would need international conventions on the rights of network users, in order to overcome territorial limits and impose shared procedural rules and instruments on online platforms." Marraffino, Barchiesi and Scorza all agree that, if everyone is to be protected, all the players in this system must work together. "I believe that nothing today can destroy a person's life without physically touching them like the non-consensual dissemination of intimate images," Scorza concludes. "I would very much like to look at this project in a few years, see it getting better and better, and know that there are people who have been spared a long ordeal of anguish, fear and shame. It will mean that we have all done our part."





