In the era of widespread digitalization and omnipresent surveillance, the issue of personal data protection takes on particular significance. In accordance with the GDPR, facial images and other elements enabling personal identification are subject to special protection. This article analyzes issues related to anonymization of personal data in visual materials, presenting both legal aspects and technological challenges in this field.

Facial Images and License Plates as Personal Data According to the Regulations

The need for anonymization of visual materials stems from several legal acts, primarily the GDPR. In individual countries, this issue may also be regulated by civil codes and copyright laws. It is commonly recognized that a person’s image is a personal right subject to legal protection. The capture of a person’s image in photographs and its subsequent re-use are also regulated by copyright law, which generally requires obtaining the consent of the person concerned.

According to the GDPR, if a person is recognizable in a photograph, such an image is treated as personal data and is subject to the same protection as names, surnames, personal identification numbers, or dates of birth. However, there are exceptions to this rule. The most obvious one concerns images captured during coverage of mass events, especially when a person appears in a crowd as part of the “background” of, for example, a football match. In such cases, legislation in most countries does not require obtaining consent from the individuals concerned. Another obvious exception concerns public figures, such as government officials: media outlets do not need to obtain separate permission for publication when the photo or video material is captured while the person is performing his or her public duties.

When it comes to images depicting license plates with recognizable numbers, regulations and court jurisprudence differ across European countries. In Germany, for example, anonymizing them is common practice, whereas in Poland, the Supreme Administrative Court has ruled that license plate numbers by themselves do not constitute personal data.

Protection of personal data is not the only basis for blurring faces and license plates in video materials and collections of photographs. Anonymization may also be required by business contracts, provide an additional layer of privacy protection, or safeguard company interests against the use of such data by competitors.

Considering the dynamic development of artificial intelligence systems, blurring faces helps limit the risk of unauthorized use of images, for example, for training algorithms without the consent of the data subjects. In the case of AI model training, anonymization can help reduce potential violations of individuals’ fundamental rights.

The Obligation to Anonymize Personal Data in Photographs and Video Materials

Since we have already established that facial images, and often license plates, should be treated as personal data, organizations should audit their internal activities, processes, and procedures to determine how this sort of data is processed. Data controllers (and the Data Protection Officers they appoint) have a number of obligations, including:

– minimizing the use of such data (anything beyond a legitimate interest in processing the data for a given purpose requires consent from the person whose image is used)
– limiting the retention period (data may be kept only as long as a direct and meaningful business need exists)
– ensuring appropriate data security
– ensuring limited access to the data

It is not surprising that many organizations opt for preventive anonymization of photos and videos: after anonymization, we are no longer dealing with personal data, and GDPR requirements therefore cease to apply. Very often, personal data is recorded somewhat “accidentally” and contributes nothing to the main business or organizational purpose for which the photographs or video recordings were made (so-called “toxic data”). Good examples include mapping city streets, infrastructure inspections, geodetic work, and documentation of construction projects.

Consequences of Personal Data Breaches in Visual Materials

The consequences of visual data breaches can be as serious as in the case of leaking personal identification numbers or other personal data. According to the GDPR, the penalty for violating the provisions of this regulation can amount to up to 20,000,000 EUR or, in the case of an enterprise, up to 4% of its total annual worldwide turnover, whichever is higher. Practice shows that such breaches occur systematically, and companies failing to fulfill their obligations of proper personal data protection bear financial consequences. It can be expected that with the increasing amount of visual materials collected (through ubiquitous cameras and sensors), the need for their protection will only gain in significance.

Let us cite the example of the Kaufland store chain in Romania, which received a number of fines related to its failure to respond to Data Subject Access Requests (DSARs) from persons appearing in CCTV recordings. The supervisory authority emphasized that, according to the law, the controller is obliged not only to provide the recordings to the person concerned but also to apply technical and organizational measures to avoid violating the rights of other natural persons. Therefore, the faces of all other persons appearing in the recording should be blurred, as should license plates. National data protection authorities increasingly emphasize in their decisions that a controller cannot refuse to provide recordings on the grounds that they contain images of third parties; instead, it must protect the privacy rights of those persons. In a similar case (failure to fulfill the obligation arising from Art. 15 GDPR, combined with preemptive data deletion to prevent the data subject from exercising the right to demand the recording), the Spanish store chain Mercadona was fined 170,000 EUR. Under this link, you can find a list of selected fines imposed by national data protection authorities in various European countries.

Practical Cases Requiring Anonymization

Storing a Significant Amount of Recordings or Photo Collections

The need for anonymization often arises in construction projects, geodetic work, and the GIS industry. Drone-based inspections are another common example. Before the construction of, say, a gas pipeline or highway begins, extensive photographic documentation of the area is created. Contract provisions sometimes require proof of progress at each stage of construction, or regular inspections and maintenance. Most often, it is impossible to avoid capturing employees, bystanders, or passing vehicles in the frame.

The GDPR does not specify an exact data retention period, indicating only that it should be limited to the necessary minimum. In theory, photos and films should be deleted as soon as the direct business need for them ceases. However, collecting such material is a significant cost, especially in large infrastructure projects where expensive equipment is used, such as the camera-equipped cars of the mobile mapping industry. Companies often seek to preserve these materials for the future, and effective anonymization makes this possible: once photos or recordings are “cleaned” of personal data, they can be used much more freely.

Data Subject Access Requests (DSAR)

This is a very common case related to the implementation of Art. 15 GDPR, which obliges the controller to provide data at the request of the data subject (so-called data provision to the data subject upon request; I write about this in more detail in the article available HERE). In practice, this means that if we are captured by a surveillance camera, we have the right to request access to the recording in which we appear from its operator.

Until now, owners and operators of monitoring systems have often used various excuses, claiming, for example, that they may transfer recordings only at the request of law enforcement bodies (e.g., the police), insurance companies, or on the basis of a court order. Another frequently used argument is that other people also appear in the recording, which supposedly makes it impossible to share due to the protection of their privacy.

Supervisory authorities are increasingly imposing penalties on entities using such practices, emphasizing that in the exercise of the right of access, it is crucial to maintain a balance between the right to information and the privacy of others (suffice it to recall the aforementioned penalty imposed on the Romanian branch of Kaufland). This requires the application of appropriate anonymization techniques to protect the rights of bystanders.

Transferring Data to External Entities

It often happens that the personal data collected is merely an unnecessary addition to visual material that we need to share with other entities (e.g., subcontractors). Persons accidentally captured in the frame do not affect the substantive value of the material. In such cases, anonymization avoids the inconvenience (and cost) of signing data processing agreements (DPAs). Even when such agreements are concluded, the entity entrusting the data must ensure that the recipient properly processes and protects it, maintains a record of processing activities, and deletes it irreversibly once the commissioned work is complete.

Publication of Visual Materials

Images allowing the identification of a specific person constitute personal data and require consent for publication. An exception to this rule is, for example, mass events, where a single image is merely an element of the “scenery” (or images of public figures, as mentioned earlier). Between these extremes, however, lie many borderline cases. Companies involved in mobile mapping and virtual tours, digital creators, and advertising portals must be particularly careful about these issues. In the case of services similar to Google Street View, one could argue that the people and vehicles captured by the lens are “part of the scene,” but at such an enormous scale, serious privacy violations are bound to occur in large numbers. Caution is also advised when posting photographs of your car on the Internet (e.g., on auction portals or social media) with visible license plates – they can be used to commit fraud.

Preparing Data Sets for AI Training

The law is particularly restrictive when it comes to using personal data in two cases: for marketing activities and training artificial intelligence models. The latter application is currently occurring on a large scale, e.g., in the automotive industry, which is striving to produce increasingly autonomous vehicles. This process requires collecting a huge amount of film material on roads and city streets to “feed” the AI algorithms. For automotive companies, the best practice is to implement anonymization processes already at the data acquisition stage to avoid later legal problems. The Artificial Intelligence Act adopted by the European Parliament and Council on June 13, 2024, explicitly confirms the right to privacy and protection of personal data of the individual at every stage of the life of an AI system, including during its creation.

Other Cases Requiring Anonymization

In companies, there is often a need to preserve recordings for training purposes in the field of Occupational Health and Safety (OHS) or so-called ‘loss prevention’. These may be recordings from warehouse halls showing potentially dangerous situations, accidents, or costly mistakes. After anonymization, such material can be freely used in employee training. This particularly concerns documenting incidents and accidents and preparing training materials based on them (here we are dealing with a potentially unlimited retention period and a special need to protect the privacy of those involved). The well-known privacy expert Robert Bateman writes interestingly about this (the entire article can be read HERE).

A similar case concerns video recording as part of process improvement in accordance with the Lean Management methodology. This can be, for example, the analysis of activities performed at a given workstation on the factory assembly line, or the creation of so-called “spaghetti diagrams”. Anonymizing the images of employees allows the collected video material to be used for training purposes and transferred to other departments and divisions within the company.

The number of schools and educational institutions using anonymization of photos and video recordings is also growing rapidly, as is the number of entities cooperating with them (e.g., those involved in transporting children to schools). Organizations dealing with special education, which operate on so-called particularly sensitive data, are especially active in this area.

Available Technological Solutions

There are many solutions for anonymizing visual materials on the market, which can be divided into several categories depending on the algorithms used, mode of operation, and platform.

Manual Editing of Materials

This method is unrivaled in terms of accuracy, but also the most expensive. It requires a qualified, trained employee and licenses for appropriate software (unless open-source solutions are used). The advantage is complete freedom in the choice of masking effect (from a simple black square to various patterns of pixelation and blurring), its shape, and the objects to anonymize: not only faces and license plates, but also company logos, street names, signs containing personal data, or tattoos.

The main disadvantage is the high cost and limited scalability – the method works only where the amount of data to anonymize is small. Manual anonymization of a recording from a camera at a railway station during rush hour, where hundreds of people pass through the frame, would be economically unjustified.

Solutions Built into Cameras (so-called “edge” processing)

These can be found in top-tier (and increasingly in mid-range) models from leading manufacturers. Their undeniable advantage is that the image is anonymized “at the source” and transmitted further in this form, which is an ideal solution from the perspective of personal data protection.

However, there are many disadvantages. First of all, such solutions work only in specific camera models, while the need for anonymization often occurs in facilities with cameras of various types, from different suppliers, and installed at different times.

Secondly, cameras with built-in anonymization usually use classic image processing algorithms with limited effectiveness, typically not exceeding 80-85%. They are far from the accuracy achieved by the latest AI algorithms, which, however, require much more computing power than camera processors have at their disposal.
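
To illustrate, here is a minimal sketch of the kind of “classic” detector a camera processor can run: OpenCV’s Haar cascade, a pre-neural-network method with exactly the kind of accuracy limitations described above. The file name is a placeholder, and this is an illustration of the technique, not any manufacturer’s firmware.

import cv2

# Haar cascade face detector shipped with OpenCV: fast and light enough
# for embedded hardware, but prone to missing rotated or partial faces
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

img = cv2.imread("frame.jpg")  # placeholder input frame
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

# Mark each detection; a real pipeline would mask these regions instead
for (x, y, w, h) in faces:
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 0, 255), 2)
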
Solutions Integrated into a VMS

CCTV infrastructure providers often offer their own systems for controlling and managing images from cameras (Video Management System). Often, one of their modules is the ability to anonymize materials. The accuracy of such anonymization varies, and since it is most often an addition to a larger whole, it is difficult to expect results comparable to products from companies specializing exclusively in anonymization. Nevertheless, more and more VMS incorporate best-in-class anonymization solutions.

Cloud Solutions

They usually utilize the latest generation of AI algorithms, so-called convolutional neural networks, which, although resource-intensive, offer the highest anonymization accuracy. In the cloud, this is not a problem, as computing power is available almost without limitations. That is why most solutions available to the general public are based on online services. However, they have certain disadvantages – the process is slowed down by the need to upload raw material to the cloud and download the finished product, as well as by the possible load on the service provider, who may queue tasks. Moreover, unless it is a private cloud of the organization, it is necessary to sign a data processing agreement (DPA), which is not always acceptable. There is also a greater risk of data leakage. For these reasons, corporations and the public sector often avoid such solutions.
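
The typical cloud workflow is easy to picture in a few lines. The sketch below assumes a purely hypothetical REST service (the endpoint, credential, and response fields are invented placeholders, not any real provider’s API); it shows the upload-queue-download round trip that accounts for the delays mentioned above.

import time
import requests

API = "https://api.example-anonymizer.com/v1"       # hypothetical service
headers = {"Authorization": "Bearer YOUR_API_KEY"}  # placeholder credential

# 1. Upload the raw video – often the slowest step for large files
with open("raw_footage.mp4", "rb") as f:
    job = requests.post(f"{API}/jobs", headers=headers,
                        files={"video": f}).json()

# 2. Poll while the provider's queue processes the task
while True:
    status = requests.get(f"{API}/jobs/{job['id']}", headers=headers).json()
    if status["state"] == "done":
        break
    time.sleep(10)

# 3. Download the anonymized result
with open("anonymized_footage.mp4", "wb") as out:
    out.write(requests.get(status["result_url"], headers=headers).content)
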
Local Server-Based Anonymization

It is based on the same algorithms as cloud solutions but operates locally, on one’s own infrastructure. This is a good choice for organizations needing to anonymize large amounts of material (e.g., high-resolution videos, panoramic photos) while maintaining the highest security standards. The downside is the need to invest in one’s own hardware and its maintenance, along with appropriate IT personnel. The advantage, however, is the elimination of delays associated with sending data to the cloud and the security of the process.

Desktop Applications

Their main advantage is ease of installation and operation on a typical office computer. If such software has an intuitive interface, it does not require hiring additional personnel with specialized knowledge. For this reason, desktop applications are used not only by large corporations but also by schools, industrial plants, non-governmental organizations, local governments, railways, police stations, small geodetic and construction companies, hospitals, and other institutions whose scale of operations does not justify setting up a dedicated server. Accuracy may vary, but some apps, such as Gallio PRO (where the author of this article works), use the same high-end AI algorithms as cloud and server solutions; the compromise in this case is processing time. Some desktop applications also feature a simple video editor, allowing manual corrections in places where the AI algorithms failed to apply a blur or mask. This is especially useful when handling Data Subject Access Requests.

Types of Graphic Effects in Anonymization

Excluding manual editing of materials, where any graphic effect can be applied, automated solutions offer several basic masking methods (a minimal code sketch of the first three follows the list):

1. Covering the object with a uniform color (e.g., “black bars” over the eyes) – a simple method, but crude and very visible. Its disadvantage is that if the software fails to mask an object on even a single video frame, an observer can easily notice it (and reverse playback on the timeline to take a look).

2. Image blurring (blur) – looks good, especially with edges that transition smoothly into the surroundings. This makes it harder to notice when the algorithm fails to detect and mask an object for a few video frames.

3. Pixelation of varying granularity – an intermediate solution, somewhat more expressive than blurring.

4. DNAT – Deep Natural Anonymization. The most technologically advanced systems can generate imaginary faces and dynamically overlay them on the anonymized image. These artificial likenesses can even preserve the original facial expressions. However, this method requires significantly more computing power, and there is a theoretical risk that a generated face will accidentally resemble a real, existing person.
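
As a minimal sketch of the first three effects, assuming OpenCV, a placeholder image, and an already-detected region (x, y, w, h); DNAT is omitted because face synthesis requires a generative model:

import cv2

img = cv2.imread("photo.jpg")   # placeholder input
x, y, w, h = 100, 80, 120, 120  # placeholder detection box
roi = img[y:y + h, x:x + w]

# 1. Uniform color: crude but unambiguous
solid = img.copy()
solid[y:y + h, x:x + w] = (0, 0, 0)

# 2. Gaussian blur: edges blend into the surroundings
blurred = img.copy()
blurred[y:y + h, x:x + w] = cv2.GaussianBlur(roi, (51, 51), 0)

# 3. Pixelation: downscale, then upscale with nearest-neighbor interpolation
pixelated = img.copy()
small = cv2.resize(roi, (8, 8), interpolation=cv2.INTER_LINEAR)
pixelated[y:y + h, x:x + w] = cv2.resize(
    small, (w, h), interpolation=cv2.INTER_NEAREST)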

Source: S. Klomp, From deepfakes to Safe Fakes, Eindhoven University of Technology

Technological Challenges in Automatic Anonymization

The technology of automatic image anonymization, although it has existed for some time, still faces numerous challenges. Above all, contrary to popular belief, it cannot ensure 100% effectiveness (like any AI-based solution).

The main difficulty is that AI algorithms analyze each frame of the image separately, with no “memory” of previous frames. In the case of a typical CCTV recording, where people move in different directions under variable lighting, often entering and exiting the frame, detecting every face on every frame is extremely difficult.

An hour of recording at the standard 24 frames per second is over 86 thousand frames to analyze (24 × 3,600 seconds = 86,400 frames). It is no wonder that the algorithm sometimes fails to recognize every object that was supposed to be anonymized.
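
The frame-by-frame character of the problem can be made concrete with a short sketch. It assumes OpenCV and a hypothetical detect_faces() helper standing in for whatever detector is used; the point is that each frame is analyzed in isolation:

import cv2

cap = cv2.VideoCapture("recording.mp4")  # placeholder input
total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
print(f"{total} frames to analyze")      # 1 h at 24 fps -> 86,400 frames

while True:
    ok, frame = cap.read()
    if not ok:
        break
    # detect_faces() is a hypothetical placeholder; it sees only this
    # single frame, with no memory of where faces were a moment earlier
    for (x, y, w, h) in detect_faces(frame):
        frame[y:y + h, x:x + w] = cv2.GaussianBlur(
            frame[y:y + h, x:x + w], (51, 51), 0)
    # any face missed on this one frame remains visible in the output
cap.release()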

Moreover, image analysis algorithms are not fully deterministic – this means that after performing the anonymization operation several times, we may obtain slightly different results. A certain element of randomness is built into the entire process.

Paradoxically, anonymizing recordings can be more difficult to implement than facial recognition itself. In the case of recognition, it is sufficient for one frame to contain a clearly visible face for the goal to be achieved. With anonymization, each overlooked frame constitutes a potential privacy violation.

An additional challenge is recognizing partially visible faces (e.g., when a person enters or exits the frame). For an AI model trained on whole faces, such a partial image can be difficult to detect.
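
One common defensive measure, sketched below under the assumption of box-shaped detections, is to pad every detected region by a margin before masking, so that a face only partially covered by the detector (for instance, at the frame edge) still ends up fully obscured. The 25% margin is an illustrative choice, not a standard:

def pad_box(x, y, w, h, frame_w, frame_h, margin=0.25):
    # Expand the detection box by `margin` on each side, clamped to the frame
    dx, dy = int(w * margin), int(h * margin)
    x0, y0 = max(0, x - dx), max(0, y - dy)
    x1 = min(frame_w, x + w + dx)
    y1 = min(frame_h, y + h + dy)
    return x0, y0, x1 - x0, y1 - y0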

The bottom line

Anonymization of personal data in visual materials is an important tool for privacy protection and compliance with regulations such as the GDPR. Although this technology is not perfect and faces numerous challenges, its application allows organizations to significantly reduce legal and financial risks associated with personal data processing.

Each organization must make its own decision about what anonymization measures to apply, tailoring them to the scale of operations and type of risk. These can be minimal justified actions, e.g., anonymization of only selected recordings and photo collections, or comprehensive solutions combining various anonymization technologies.

In the face of increasing penalties for violations of personal data protection regulations and growing public awareness regarding privacy, investment in knowledge about appropriate anonymization tools and procedures becomes not only a legal requirement but also a reasonable business strategy.

About the author

Łukasz Bonczol – expert in the field of visual data anonymization. Co-creator of Gallio PRO, one of the leading solutions for automated anonymization of video materials and photos, used by corporations, local governments, and NGOs (more at https://gallio.pro/).

Need advice on anonymizing photos and video recordings? Contact the author at hello[at]gallio.pro.
