Yesterday’s report from the Advertising Standards Authority (ASA) on influencers highlights that, over a three-week period, 65% of the more than 15,000 Instagram stories it monitored were not clearly labelled and identifiable as advertising content, as required by the CAP Code.

This ASA report has put influencers promoting products on social media back under the spotlight, as so many have failed to meet the required compliance standards. The headlines focus on the fact that influencers may be named and shamed if they don’t comply, but what does this mean for the brands and retailers that build campaigns around these partnerships?


The background

The CAP Code requires that any paid-for advertising is clearly labelled with #ad or similar, and social media companies have introduced tools to allow brands to advertise more transparently on their platforms, such as the “Paid partnership” tag on Instagram. For a while, after a few high-profile mistakes, celebrities and influencers did seem to be using the ad hashtag, but this has clearly fallen off the radar in recent times, even as this form of advertising has grown massively.

The main issue is the huge disconnect between the ASA’s focus on protecting consumers from hidden advertising and the influencer’s priority of maintaining an “authentic” image that is not tainted by sponsorship. The appeal of these social media pages is that they give followers an insight into the “real” life of the influencer or celebrity, which is aspirational and which many followers will want to emulate. If followers realise that the content is only being promoted because of a financial relationship with the advertised brand, the content will naturally lose some of its appeal. In turn, this can lead to the influencer losing followers, which diminishes their appeal to other brands. It’s a bit of a vicious circle.


What does it mean for brands and retailers?

In the early days, big brands worked very closely with any talent representing their brand to ensure that the content they pushed out set the right tone and was compliant. Brands have moved away from this with influencers, most likely because influencers push to keep control of their channels and their image. In turn, brands have often left responsibility for compliance with the influencer, which is not always the best move. Brands should consider doing their own due diligence on an influencer’s compliance track record as part of their partnership campaign planning.


Brands have a lot to lose by picking the wrong influencer and falling foul of the CAP Code. Consumers often put a lot of trust in the accounts they follow, and if they feel they have been misled or manipulated, they will quickly switch off from the influencer and the brand. It can be very difficult to come back from online setbacks, as numerous brands have shown; but a well-targeted campaign can be hugely successful. Over the coming months, expect to see more collaboration and guidance between brands and influencers to make sure they hit the right mark. But if the media spotlight moves on to something else, you can expect to see these practices slipping back in.


Latest News

Chambers rankings 2025 – ‘They go head to head with the big firms’ - Pannone Corporate

This year’s Chambers 2025 rankings have been published, with Pannone once again featuring strongly – both for individual lawyers and teams. The Cham...

Read more...
Pannone heads to Lisbon for the annual PLG Academy - Pannone Corporate

Pannone has been a member of PLG International Lawyers for over 30 years, providing the firm with access to a professional network of lawyers from across...

Read more...
Grenfell Phase 2: key takeaways for the construction sector - Pannone Corporate

The Grenfell Phase 2 report has now been published. Whereas Phase 1, which was published in October 2019, focussed on events on the night of the fire,...

Read more...

View all posts

The chaos that has ensued from the regrading of A-Levels in the UK highlights the far-reaching impacts of automated decision-making and why its use is restricted under the GDPR.

The law

Decisions which are based solely on automated processing which “produces legal effects concerning [the data subject] or similarly significantly affects [the data subject]” are prohibited under Article 22 of the GDPR.

There are some limited exceptions: (i) if the decision-making is necessary for the performance of a contract; (ii) if the decision-making is authorised by law and there are suitable safeguards for the rights and interests of the individual; or (iii) if the individual has provided their explicit consent.

We can safely assume that A-Level students did not consent to the standardisation process! The other two exceptions do not apply here either.

The algorithm

Following a fairly thorough process, teachers provided a “centre assessed grade” (CAG) for each student in each subject and ranked each student in order from first to last. However, there was concern that the CAGs would be more generous than a usual exam mark. To moderate the grades, Ofqual developed an algorithm which took the CAGs and compared them to the performance of the relevant college and students in previous years to produce standardised grades across each cohort. These standardised grades were then allocated depending on the ranking awarded by teachers.
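Ofqual's published model is considerably more complex, but the core rank-based allocation described above can be illustrated with a minimal sketch. All names, grades and distributions below are hypothetical, and this is a simplification under stated assumptions, not Ofqual's actual algorithm:

```python
# Hypothetical sketch of rank-based standardisation: the teacher's rank
# order is preserved, but the grades handed out come from the centre's
# historical grade distribution rather than from the CAGs themselves.

def standardise(ranked_students, historical_distribution):
    """ranked_students: list of (name, cag) tuples ordered best to worst.
    historical_distribution: {grade: fraction of past cohort}, best grade first."""
    n = len(ranked_students)
    # Expand the historical distribution into one grade per ranked slot.
    grades = []
    for grade, fraction in historical_distribution.items():
        grades.extend([grade] * round(fraction * n))
    # Trim or pad (with the lowest grade) so every student gets a grade.
    grades = grades[:n] + [list(historical_distribution)[-1]] * (n - len(grades))
    # Allocate by teacher rank, ignoring the CAG itself.
    return [(name, grade) for (name, _cag), grade in zip(ranked_students, grades)]

cohort = [("A", "A*"), ("B", "A"), ("C", "A"), ("D", "B")]
history = {"A": 0.25, "B": 0.5, "U": 0.25}  # 25% of past students got a U
print(standardise(cohort, history))
```

Note that in this toy cohort the bottom-ranked student receives a U purely because a quarter of previous cohorts did, regardless of their CAG, which is exactly the effect that drew criticism.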

So, was the regrading solely the result of automated decision-making? Ofqual, in its privacy impact assessment, says not. Rather the algorithm “support[ed] the human intervention involved in the form of [CAGs] and rank orders … determined by teachers and signed off by other individuals within the centre”, with particular emphasis placed on the rank order.

The result

Almost 40% of CAGs in England were downgraded. Students attending smaller colleges and studying more niche subjects were more likely to receive their CAG, as it was accepted that results in such cohorts vary more year on year and the algorithm had less data to work with. Many independent schools fell within this category. Students attending larger colleges and studying more traditional or popular subjects, typically in the state sector, were less likely to receive their CAG. Further, if a percentage of students at a particular college had previously received a U grade, for example, the bottom-ranked students at that college were likely to receive a U grade, irrespective of their CAG.

The reaction

Rather than instilling public confidence, the use of the algorithm led to critics accusing Ofqual, exam boards and the government of systemic bias against students from more deprived backgrounds. The UK government and each of the devolved administrations have now announced they will be reverting to the CAGs where these are higher than the standardised grades.

What next

Whilst the immediate fury is likely to subside with the decision to revert to CAGs, threatened legal action under the GDPR and Equality Act could still proceed if university places have been lost due to initial regrading. Watch this space.

Danielle Amor, Senior associate, Danielle.Amor@pannonecorporate-com.stackstaging.com

For further information please contact our specialist data protection team.
Amy Chandler, Partner, Amy.Chandler@pannonecorporate-com.stackstaging.com
Danielle Amor, Senior solicitor, Danielle.Amor@pannonecorporate-com.stackstaging.com
Patricia Jones, Consultant, patricia.jones@pannonecorporate-com.stackstaging.com


The UK’s data protection regulator (the ICO) has announced that it intends to impose a fine of £500,000 on Facebook for its role in the Cambridge Analytica scandal*. With the recent coverage of the GDPR and the increased fines of up to €20 million or 4% of global annual turnover for the most serious breaches, £500k may seem surprisingly low. However, this is the maximum penalty the ICO is able to issue in respect of breaches occurring before 25 May 2018 and would in fact be the highest fine the ICO has ever issued.
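To see the scale of the change, the upper-tier GDPR cap is the greater of €20 million or 4% of worldwide annual turnover. A minimal sketch of that arithmetic, with a hypothetical turnover figure:

```python
def max_gdpr_fine(annual_turnover_eur: int) -> int:
    """Upper-tier GDPR cap: the greater of EUR 20m or 4% of worldwide turnover."""
    return max(20_000_000, annual_turnover_eur * 4 // 100)

# A hypothetical business turning over EUR 40bn faces a cap of EUR 1.6bn,
# dwarfing the pre-GDPR maximum of GBP 500k.
cap = max_gdpr_fine(40_000_000_000)
print(cap)
```

For smaller businesses the flat €20m floor applies, since 4% of their turnover falls below it.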

The announcement was made as part of the ICO’s progress report into its investigation of the use of data analytics in political campaigns. The investigation was launched in response to allegations that data obtained from Facebook by Cambridge Analytica was misused by both sides in the UK referendum on membership of the EU and to target voters during the 2016 American Presidential election process.

The breaches relate to the Cambridge Analytica app which scraped data from the profiles of the 320,000 US Facebook users who used the app (a personality quiz) and also from those users’ friends’ accounts (an estimated 87 million users worldwide, including 1 million UK users). Facebook was not sufficiently transparent with users concerning the ability of the app to access profile data and, despite being aware of potential breaches of its terms of service, Facebook did not implement adequate security measures to restrict Cambridge Analytica’s collection and use of Facebook data.

This is the strongest indication yet from the ICO that it is prepared to use the full range of its regulatory powers to deal with the most flagrant data breaches. Facebook has certainly got off lightly with a fine of £500k due to the timing of the breach; a GDPR fine could easily have been in the tens of millions. Social media platforms and political parties and campaigns are clearly high on the ICO’s current agenda, but other global businesses ought to take note too.

Most businesses are unlikely to ever face fines in the region of €20m or higher. However, practices which show a disregard for data laws and the privacy rights of individuals will attract the ICO’s attention and little leniency will be shown. Businesses operating in the EEA must ensure that they provide sufficient information to individuals about the use of their personal data, that they properly safeguard that personal data and, in the event of a data breach, take prompt and effective measures in response.

*Facebook now have an opportunity to respond to the ICO Notice before the penalty is finalised.

Excerpts from this article were first published on LexisPSL on 11/07/2018
