In France, the digital advertising market is estimated to be worth 3.5 billion euros. Until recently, this market was dominated by display ads on websites and the purchase of Google AdWords. Now, however, automated advertising (known as “programmatic purchasing”) is available. Internet users can be profiled by analysing their web activity, which allows advertisers to predict the interest a given user will have in an advertisement at any moment. Algorithms make it possible to calculate the value of advertising space in real time.
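To make the mechanism concrete, here is a minimal sketch of the kind of real-time auction that programmatic purchasing relies on. This is not any specific ad exchange's implementation: the advertisers, click probabilities and prices are invented for illustration, and real systems add price floors, fees and strict latency constraints.

```python
# Toy sketch of a real-time bidding (RTB) auction, the mechanism behind
# programmatic purchasing. All names and numbers are illustrative.

def second_price_auction(bids):
    """Run a second-price auction: the highest bidder wins the ad
    impression but pays the second-highest bid."""
    if len(bids) < 2:
        raise ValueError("need at least two bids")
    ranked = sorted(bids, key=lambda b: b["bid"], reverse=True)
    winner, runner_up = ranked[0], ranked[1]
    return winner["advertiser"], runner_up["bid"]

# Each advertiser values the impression from a predicted click
# probability times the value of a click: bid = p_click * value_per_click.
bids = [
    {"advertiser": "A", "bid": 0.05 * 2.00},   # 0.10 EUR
    {"advertiser": "B", "bid": 0.02 * 8.00},   # 0.16 EUR
    {"advertiser": "C", "bid": 0.01 * 5.00},   # 0.05 EUR
]

winner, price = second_price_auction(bids)
print(winner, round(price, 2))  # B wins and pays 0.10 EUR
```

The second-price design is common in online advertising because it encourages advertisers to bid their true estimate of the impression's value.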
Algorithms can be used to display banner ads that match our interests, but they also carry risks. Internet users are unaware of how this lack of transparency affects their behaviour. Moreover, algorithms are sometimes granted exaggerated confidence despite producing discriminatory results. This raises the question of algorithms' neutrality and their ethical implications. To study ethics in this field, it is important to understand how we interact with these new technologies: how the law covers algorithms, and how the digital advertising industry is evolving.
It would be prudent to concentrate on the algorithms themselves rather than on the data they process, by creating systems capable of testing them and controlling their behaviour to avoid harmful outcomes.
Reforms in Europe: Law and algorithms
Data collection and processing have reached unprecedented levels, driving the development of new products. The spread of connected devices and the empowerment of consumers explain the growth in data volume and diversity: technology has made it easier for consumers to take action. Businesses depend more and more on consumers' opinions and data, as well as on their own.
European institutions have begun reforming personal data legislation in light of this. In May 2018, the new European General Data Protection Regulation (GDPR) comes into force in all EU member states. The GDPR imposes greater transparency and accountability on those who handle data. It is based on a principle of legal compliance and provides for severe penalties. It also affirms the right to data portability and requires data controllers to ensure their operations comply with personal data protection standards from the design stage of a product or service onwards.
The GDPR implicitly aims to regulate algorithmic data processing. In the advertising industry, a pattern is visible: sites, products and services that rely on algorithms tend to avoid mentioning them. Rather than acknowledging the role of algorithms, they speak of “customisation”. Yet where there is customisation, there is usually “algorithmisation”.
Legislation is not suited to digital advertising
The laws governing “traditional” advertising rest on the principle that individuals' consent is obtained before their personal data are processed. This conception of data privacy is less relevant in digital advertising. In traditional marketing, the data are generally objective and predictable: age, gender, name, address or marital status. Digital marketing works with a completely different kind of data. Social networks hold not only basic information such as age, gender and address, but also data about a user's everyday life: what they do, what they listen to, and so on.
It is possible to identify a person's profile by analysing their web activity and social network behaviour.
This new situation calls into question both the validity of the distinction between personal data and non-personal information and the relevance of the prior-consent principle. It is almost impossible to use a tracking application without consenting to it: consent is required to use the technology at all, yet the exact way the data controller will use the data remains unknown. The problem is therefore no longer consent itself, but the automatic, predictive deductions made by the companies that collect these data.
Algorithms reinforce this trend by increasing the collection and use of trivial, decontextualised information. These data are likely to be used to profile individuals and to create “knowledge” about their personal and intimate interests based on probability rather than certainty. It is therefore more important to examine the algorithms themselves, not just the data that feed them.
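The profiling described above can be illustrated with a deliberately simple sketch. The browsing log and category names below are invented, and real profiling systems use far richer signals and machine-learned models; the point is only to show how trivial, decontextualised events become a probability-like “knowledge” of a person's interests.

```python
# Minimal sketch of probabilistic profiling: inferring interests from
# trivial browsing events. Categories and durations are invented.

from collections import Counter

# Hypothetical browsing log: (site category, seconds spent)
events = [
    ("running_blog", 120),
    ("sneaker_shop", 300),
    ("news", 60),
    ("sneaker_shop", 180),
    ("recipe_site", 90),
]

def interest_profile(events):
    """Aggregate time spent per category and normalise into a
    probability-like distribution: 'knowledge' based on likelihood,
    not certainty."""
    totals = Counter()
    for category, seconds in events:
        totals[category] += seconds
    grand_total = sum(totals.values())
    return {cat: t / grand_total for cat, t in totals.items()}

profile = interest_profile(events)
# sneaker_shop dominates: (300 + 180) / 750 = 0.64
print(max(profile, key=profile.get))  # sneaker_shop
```

None of the individual events here is sensitive on its own; it is the aggregation that produces an intimate inference, which is precisely the concern raised above.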
Online advertising: legal and ethical issues
Behavioural targeting is fraught with dangers: it can steer consumer choices, alter perceptions of reality, or influence people subliminally. To prevent potential abuses, it is essential that algorithms be made accountable and transparent.
This situation brings into question the relationship between law and ethics, which is unfortunately often misunderstood. Laws are meant to govern behaviour, i.e. what is permitted, forbidden or required in a legal sense. Ethics, on the other hand, refers to a more general distinction between right and wrong, independently of legal compliance. The ethics of algorithmic processing should rest on two principles: transparency, and the creation of tests that check results in order to prevent possible damage.
Transparency of algorithms and accountability
Online platforms' activities are based primarily on selecting and classifying information, as well as on offering goods or services. They create and deploy different algorithms that affect users' consumption behaviour and thinking. Customisation can be misleading because it rests on the machine's conception of our thinking: not who we are, but what we have done and seen. This observation highlights the need for transparency. People affected by an algorithm must first be informed of the algorithmic processing and what it entails, including the types of data used and their purpose, so that they can file a complaint if necessary.
What are the tests for algorithms?
By cross-checking information that can be considered “sensitive”, advertising algorithms can produce price differentiation for a product or service, or even build typologies of high-risk policyholders, which can then lead to insurance premiums calculated on criteria that are sometimes illegal. The results of these algorithms can be discriminatory: not only is the collection and processing of such data (racial, ethnic, political and religious opinions) generally forbidden, but the algorithmic methods themselves may be as well. The first international beauty competition judged by algorithms, for example, selected only white candidates.
To avoid this kind of abuse, tests must be established for the results generated by algorithms. Codes of conduct have begun to appear alongside legislation and the work of data protection authorities (such as France's CNIL). Advertising professionals belonging to the Digital Advertising Alliance have introduced a protocol, represented by an icon displayed next to a targeted ad, that explains how the ad works.
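One concrete form such a test could take is a disparate-impact check on an algorithm's outputs. The sketch below applies the “four-fifths” rule of thumb from US employment-discrimination practice, used here purely as an illustration; the two groups and their ad-exposure data are invented, and real audits would use larger samples and statistical significance tests.

```python
# Hedged sketch of one possible automated test for algorithmic outputs:
# a disparate-impact ratio on targeting decisions. Data is invented.

def selection_rate(decisions):
    """Fraction of a group that received the favourable outcome."""
    return sum(decisions) / len(decisions)

def disparate_impact_ratio(group_a, group_b):
    """Ratio of the lower selection rate to the higher one.
    Values below 0.8 are commonly treated as a red flag
    (the 'four-fifths' rule)."""
    ra, rb = selection_rate(group_a), selection_rate(group_b)
    return min(ra, rb) / max(ra, rb)

# 1 = shown the favourable ad, 0 = not shown
group_a = [1, 1, 1, 0, 1, 1, 0, 1, 1, 1]   # rate 0.8
group_b = [1, 0, 0, 1, 0, 0, 1, 0, 0, 0]   # rate 0.3

ratio = disparate_impact_ratio(group_a, group_b)
print(round(ratio, 3), ratio >= 0.8)  # 0.375 False -> flag for review
```

A test of this kind examines only the algorithm's results, not its inputs, which is exactly the shift in focus the article argues for: from the data collected to the outcomes produced.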
To preserve their reputation and competitive edge, it is in companies' interest to adopt more ethical behaviour: Internet users dislike advertising they find intrusive. Advertising's ultimate goal is to anticipate consumer needs and to help people “consume more effectively”, but this must be done ethically and in compliance with the law. It could then become the vector of a new industrial revolution, one conscious of its ethical foundations.