In 2022, a new trend became quite real for anyone who understands the interconnected role of the design behind our mainstream social media applications and the algorithms used to run them, one that raises intriguing legal questions in areas such as competition law, digital accessibility and technology policy. Social media influencers, content creators and even technology enthusiasts have noticed that various social media applications, be it Instagram, Twitter or any other, are now behaving as recommendation media applications. The 10-second video trend promoted by TikTok, for example, mainstreamed the algorithmic tendency of recommending content that fits certain favourable parameters, thereby giving YouTube and Instagram a hard time. Even Spotify has been affected by this trend, making recommendation media the newest version of social media.
This article discusses the legal and policy challenges around the transition from social media to recommendation media. The endeavour behind the article is to declutter the algorithmic tendencies behind the rise of recommendation media and assess how legal dilemmas may arise.
The Emergence of Recommendation Media
Let us first understand social media in brief terms. It is a digital medium through which users of a platform “socialise” with each other. We may also say that social media has two features which make it characteristically important: the human element, namely the engagement of users (who are data subjects), and the technology element of the platform itself, namely the UI/UX, the code, the algorithms and even the stakeholders involved in the life cycle and maintenance of the platform. The relationship between the technology involved and the human data subject defines the responsible and explainable features of the social media technology as a whole, while other forms of incidence emerge that relate to the platform and its social, political, economic and other relevant uses. Social media platforms are similar in many ways: in how they are useful, how they affect the civil liberties of their own users, how they put their algorithmic infrastructure to use to moderate user content, and so on. Over time, the use of algorithms on social mediums, especially mainstream ones such as Twitter, Instagram, LinkedIn and Facebook, has driven and created a sphere of both discourse and private censorship. However, the way algorithms function and shape social media discourse has surely changed. TikTok is an important driver of this trend as well, considering that the app, by introducing TikTok Music, would surely affect the significant place Spotify controls in the market.
The rise of recommendation media, however, isn't driven by the “TikTok Effect” alone. Owing to domestic developments in the United States, private censorship and algorithm-driven discourse on social media platforms have markedly affected international discourse and content creation. The self-regulation approaches of the big technology (FAAMG) companies affect the knowledge and information economies of the Global South, where governments in Asia and Africa are questioning the lack of transparency in such self-regulation policies, from leadership hierarchies to community standards.
This led existing players to promote the concept of recommendation media, where parameters rule visibility. In response to this development, alternative forms of digital media emerged, starting in the United States. Substack, Revue and even Clubhouse represented those “alternatives” as we know them.
Michael Mignano explains how recommendation media actually works in an article entitled The End of Social Media:
In recommendation media, content is not distributed to networks of connected people as the primary means of distribution. Instead, the main mechanism for the distribution of content is through opaque, platform-defined algorithms that favor maximum attention and engagement from consumers. The exact type of attention these recommendations seek is always defined by the platform and often tailored specifically to the user who is consuming content. For example, if the platform determines that someone loves movies, that person will likely see a lot of movie related content because that’s what captures that person’s attention best. This means platforms can also decide what consumers won’t see, such as problematic or polarizing content. It’s ultimately up to the platform to decide what type of content gets recommended, not the social graph of the person producing the content. In contrast to social media, recommendation media is not a competition based on popularity; instead, it is a competition based on the absolute best content. Through this lens, it’s no wonder why Kylie Jenner opposes this change; her more than 360 million followers are simply worth less in a version of media dominated by algorithms and not followers.
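The distribution logic Mignano describes, ranking by predicted attention rather than by a follower graph, can be sketched in a few lines. This is a minimal illustration under assumed field names, weights and thresholds; no real platform's scoring formula is public or this simple.

```python
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    topic: str                    # e.g. "movies"
    predicted_watch_time: float   # platform's engagement prediction, 0..1 (assumed signal)
    flagged_polarizing: bool      # platform-side moderation signal (assumed)

def recommend(posts, user_interests, k=3):
    """Rank content by predicted engagement, not by who the user follows.

    Note that a follower count never appears here: visibility is decided
    entirely by platform-defined scoring. The weights are illustrative.
    """
    # The platform also decides what consumers WON'T see:
    candidates = [p for p in posts if not p.flagged_polarizing]

    def score(p):
        # Tailored to the consuming user: boost topics they engage with.
        interest_boost = 1.5 if p.topic in user_interests else 1.0
        return p.predicted_watch_time * interest_boost

    return sorted(candidates, key=score, reverse=True)[:k]
```

A user the platform believes loves movies would thus see movie content ranked higher even from accounts they have never followed, which is the mechanism that makes a large follower count worth less.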
Sam Lessin explains this phenomenon of recommendation as a cycle of content and content marketing, as described in one of his tweets.
Content creators are now stuck in a different kind of loop, because it might lead to what Sam calls Stage 5 of digital entertainment and content (which may or may not apply as much to knowledge economics). Algorithms now take the larger helm in shaping discourse and content-driven reach for users: human content is replicated and overtaken by algorithmically sourced content, followed by personalised generated content competing with all facets of algorithmically sourced content.
It is important to note that this cycle could remain a theoretical guess and might not happen soon. What is important to realise, however, is that this cycle helps us understand the special repercussions recommendation media could have on the way digital media transforms.
Ethical and Economic Implications of Recommendation Mediums
To understand the ethical repercussions behind the purpose and use of recommendation mediums, it is necessary to understand their economics in some way. Instagram is a reasonable example. To compete with TikTok's short-form videos, Instagram came up with Instagram Reels, which has created an interesting competitive streak against TikTok. As of now, Instagram has to make some choices in shaping its own platform, since Meta (or Facebook's platforms in general) more or less has an interface problem, not an algorithm problem. Here is an excerpt from a tweet by Sam Lessin:
I saw someone recently complaining that Facebook was recommending to them…a very crass but probably pretty hilarious video. Their indignant response [was that] “the ranking must be broken.” Here is the thing: the ranking probably isn’t broken. He probably would love that video, but the fact that in order to engage with it he would have to go proactively click makes him feel bad. He doesn’t want to see himself as the type of person that clicks on things like that, even if he would enjoy it. This is the brilliance of Tiktok and Facebook/Instagram’s challenge: TikTok’s interface eliminates the key problem of what people want to view themselves as wanting to follow/see versus what they actually want to see…it isn’t really about some big algorithm upgrade, it is about releasing emotional inner tension for people who show up to be entertained.
There are some ontological changes that recommendation mediums undeniably bring for content creators and users. Those important changes are described as follows:
Recommendation media creates a vertical hierarchy of rankings for any digital post on the platform, while horizontal reach is completely up to the user. Since algorithms drive content, vertical reach through scrolling endless content is now the new normal. Even platforms like YouTube and Twitter are mainstreaming this in their own league, be it YouTube Shorts, Revue or Twitter Communities.
It enables a user to mainstream their content by contributing to multiple flows of content escalation, through any parameter possible. TikTok shows, for example, that the parameter could be a 10-second soundtrack, a pattern Instagram resembles as well. However, there may be subtler aspects too, including the graphics involved, the caption styling, or anything else.
While we know that social mediums promote a sense of monoculture in action, which has some economic imprints, recommendation mediums enforce monocultural trends using algorithms. This raises several IP concerns driven by algorithmic choices and the adaptivity of expression which any digital content ought to have. In some respects it might simplify IP (mostly copyright) issues, but clear closures do not seem likely.
Recommendation mediums, unlike social mediums, do not drive an organic flow of discourse, since they are algorithmically driven. It means that the flow of content expression is driven by the recommendation algorithms, which validate, or perhaps invalidate, the content flow.
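The contrast in the last point, an organic follower-graph flow versus an algorithm-validated flow, can be made concrete. Both functions below are hypothetical simplifications, not any platform's actual distribution logic, and the threshold is an assumed parameter.

```python
def organic_flow(post_author, followers_of):
    """Social media: a post reaches the author's follower graph, whatever its content."""
    return set(followers_of.get(post_author, []))

def recommended_flow(post_score, all_users, threshold=0.6):
    """Recommendation media: the algorithm validates or invalidates the post.

    Below the (illustrative) threshold the post reaches no one, regardless
    of who follows the author; above it, it can reach any user at all.
    """
    return set(all_users) if post_score >= threshold else set()
```

In the organic model, reach is a fixed function of the social graph; in the recommended model, the same post's reach swings between zero and everyone depending on a platform-computed score, which is what makes the flow non-organic.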
We cannot deny that technology companies have internal policies and strategic approaches towards how their algorithms make these choices. However, at some point even they simply cannot control the trends. This statement by Mark Zuckerberg about News Feed on Facebook explains the problem:
We really messed this one up. When we launched News Feed and Mini-Feed we were trying to provide you with a stream of information about your social world. Instead, we did a bad job of explaining what the new features were and an even worse job of giving you control of them. I'd like to try to correct those errors now. When I made Facebook two years ago my goal was to help people understand what was going on in their world a little better. I wanted to create an environment where people could share whatever information they wanted, but also have control over whom they shared that information with. I think a lot of the success we've seen is because of these basic principles. We made the site so that all of our members are a part of smaller networks like schools, companies or regions, so you can only see the profiles of people who are in your networks and your friends. We did this to make sure you could share information with the people you care about. This is the same reason we have built extensive privacy settings — to give you even more control over who you share your information with. Somehow we missed this point with News Feed and Mini-Feed and we didn't build in the proper privacy controls right away. This was a big mistake on our part, and I'm sorry for it. But apologizing isn't enough. I wanted to make sure we did something about it, and quickly.
Now, it is necessary to understand that the appeal of being on a social medium, unlike an online editorial publication, was that most of these mainstream platforms were at least horizontal in vogue and reach. Users, be they individuals, businesses or even governments, could act in a horizontal fashion and fathom the organic reach of their digital content.
Interestingly, Instagram now has three choices to make as it shapes its own platform:
Shift towards ever more immersive mediums (For example - Text to Video to 3D to VR)
The Increasing and Penetrable Use of Artificial Intelligence (from AI rankings and recommendations to mere generation)
Change in interaction models from user-directed to computer-controlled (from Clicks and Scrolls to Autoplays)
This could be an inevitable choice for many digital content platforms, as well as for those social mediums which could originate in the near future. So, yes, there could be ethical problems, which stem from the classical questions of the transparency, explainability and responsibility of algorithms. Earlier, the black box problem and the lack of transparency largely drove how AI estimates data subjects and their choices online. Now, it is becoming clearer that the dynamics of expression and reach are going to change in fundamental ways.
To Conclude, Some Legal Dilemmas
To conclude, there could be some issues with the rise of recommendation mediums, many of which are obvious, while some are fresh problems:
Hostage of Expression and Speech: When algorithms drive content, there could be allegations of curbing freedom of speech and expression, which may be countered by justifying self-regulation policies and explaining some aspects of the algorithm-driven decisions made to moderate and even recommend or invalidate any digital content. These issues have long existed in the Web2 sphere for FAAMG platforms, especially in the US. Proposed solutions include proper oversight, strengthened audits and compliance, regulators sensibly shaping public interests and concerns, and promoting means of ADR to address subtle and edgy legal disputes. However, recommendation mediums do something quite explicit: they place algorithms at the heart of the content and expression flow on their platforms.
Competition/Antitrust and Corporate Governance Issues: Algorithms driving content and physical realities could be an important dilemma, and the sectoral implications of algorithmic activities and operations within mainstream digital platforms have been recognised. Yet, due to a real lack of research linking digital realities with physical realities in legal terms, it has been hard to resolve the issues which already exist in the Web2 sphere. Recommendation mediums may affect markets and increase their fragility, leading to a pile of competition law issues. Has it become more certain how to assess the problems and conclude what legal issues could come up in market economies? It has become easier, because issues of corporate governance, their horizontal impact, and the economics of regulation could clearly come on the radar. However, nation-states must approach competition law differently, because they fail to establish the sectoral implications of algorithmic activities and operations properly. India is an example: to address Amazon India, specific amendments to the Competition Act, 2002 are required to promote ex-ante regulation of digital markets.
For now, even if we take the European Union's AI Act as a pivotal reference, we can at least conclude that sectoral regulation, coupled with decentralised approaches that promote the ethics of responsible artificial intelligence, would surely help in demystifying the challenges that recommendation mediums bring up in future. Governments would surely attempt to demand audits and ensure that recommendation mediums, as intermediaries, comply with legal schemes. However, the lack of clarity cannot be excused by the huge deficit of legal acumen in governing digital markets, be it in the US or in an emerging economy like India.