Twitter, the editor: What the loss of safe harbour means

16 Jun, 2021

By Shashidhar Nanjundaiah

Twitter is no longer a common carrier or an intermediary, as it lost its "safe harbour" legal cover yesterday under Section 79 of the IT Act. In real terms, this means the platform will now be responsible for what the Twitterati say on it. It is no longer a (presumed) hubless network of global chatter. Because it failed to appoint a full-time compliance officer, its country managing director will now be criminally liable for inflammatory or hateful remarks that are tweeted.

The loss of indemnity against what its users say means that the essence of the social media business model has come crashing down. Like any house of cards, this crash was waiting to happen. But the concern for the billions of social media users is much larger. Twitter and other social media giants have argued that their business is like that of a telephone company, a "common carrier".

But as intermediaries, their role is not as simple as that. If one of your posts on Facebook or LinkedIn draws a high number of views and the next one is a dud, that is the work of algorithmic parameters, which means the platform's role is not that of a common carrier at all. There is intervention, and it is largely technological; but then, what isn't automated these days? Unlike an editor, a social media company will not edit the content of what we write. It is far more democratic that way than a newspaper. But there is intervention, like an editor who "gatekeeps" a newspaper, assigning pages, positions, lengths, and levels of prominence to news stories. Assigning prominence is a form of editorial intervention.
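To make that point concrete, here is a toy sketch of how prominence might be assigned. Everything in it, the signals, the weights, the decay, is hypothetical; no platform discloses its actual ranking formula, and this is only an illustration of the principle that visibility is computed, not organic.

```python
# Purely illustrative: a toy feed-ranking score. The signals and weights
# are hypothetical stand-ins, not any real platform's formula.
from dataclasses import dataclass

@dataclass
class Post:
    author_follower_count: int
    early_likes: int        # engagement shortly after posting
    topic_affinity: float   # 0..1, how well the topic matches the viewer
    age_hours: float

def prominence_score(post: Post) -> float:
    """Higher score = shown earlier and more often, like front-page placement."""
    engagement = post.early_likes * 2.0
    reach = post.author_follower_count ** 0.5   # diminishing returns on followers
    relevance = post.topic_affinity * 50.0
    decay = 0.9 ** post.age_hours               # older posts fade from the feed
    return (engagement + reach + relevance) * decay

# Two posts by the same author can land very differently:
a = Post(author_follower_count=10_000, early_likes=40, topic_affinity=0.9, age_hours=1)
b = Post(author_follower_count=10_000, early_likes=2,  topic_affinity=0.3, age_hours=1)
print(prominence_score(a), prominence_score(b))  # the second one is the "dud"
```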

 

Editorialisation means that the business model sold to us, that social media is a truly free marketplace of ideas, should be questioned. Users may assume that, given the sheer volumes, their posts have the same chance of visibility, and impact, as their neighbour's. Have social media platforms been transparent with us about how it all really works, and about whether it is simply an organic process, like Darwinian natural selection?

In real terms, this also means that social media is finally showing that it, too, has a lifecycle. After a decade of growth, the most unregulated and unfettered among modern technological growths, things seem to be cooling off, a trend one report anticipated nearly three years ago. The Reuters Institute for the Study of Journalism's Digital News Report 2018, from Oxford University, found that for the first time since the social media phenomenon began more than 15 years ago, the use of social media for news consumption had declined among users in the 37 countries the report covered (India was not among them).

Liberal policies in a democracy must implicitly assume one factor that really is debatable: fairness. If a government tangos with private players and shields itself in opacity from regular folks, the users, that would not really be fair. On the other hand, it cannot be fair to hold a company responsible for what its consumers do, because those users are not its employees. A bicycle manufacturer, for example, cannot be held responsible for a rider who chooses to ride illegally on the wrong side of the road. But in the social media model, the company and the user are joined in a trapeze-like relationship, gripping each other for a critical moment of use.

The technical problem with the company's responsibility is that any damage can only be mitigated, in the form of a post facto response. We have not reached the technological prowess of some sci-fi films, where an impending incident can be prevented. But those crucial minutes, hours or days are all-important in impact terms. So what can a social media platform do to prevent further erosion of its business model? One precaution would be to filter out bots and fake profiles. Another would be to delay each post, holding it back for manual scrutiny when the content seems shady. What is the big rush to post everything live anyway, especially if we can all agree that intervention is a reality? Surely such solutions are possible to implement, as the sketch below suggests.
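A minimal sketch of what such a hold-and-review pipeline might look like. The bot heuristics, the watchlist, and the thresholds are all hypothetical placeholders for whatever signals a real platform would actually use:

```python
# Hypothetical sketch: filter bot-like profiles and hold suspect posts
# for manual review instead of publishing them live.
from queue import Queue

REVIEW_QUEUE: "Queue[dict]" = Queue()

SUSPECT_TERMS = {"riot", "attack"}  # placeholder watchlist, not a real lexicon

def looks_like_bot(profile: dict) -> bool:
    # Hypothetical signal: a brand-new account posting at machine speed.
    return profile["account_age_days"] < 2 and profile["posts_per_hour"] > 30

def submit_post(profile: dict, text: str) -> str:
    if looks_like_bot(profile):
        return "rejected: suspected bot"
    if any(term in text.lower() for term in SUSPECT_TERMS):
        REVIEW_QUEUE.put({"profile": profile, "text": text})
        return "held for manual review"   # delayed, not posted live
    return "published"

user = {"account_age_days": 400, "posts_per_hour": 1}
print(submit_post(user, "Lovely weather today"))    # published
print(submit_post(user, "Join the riot at noon"))   # held for manual review
```

The trade-off, of course, is latency: every held post spends those same crucial minutes or hours in a queue, which is exactly the delay the column argues platforms should accept.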

 

Whether corporate responsibility for user content turns out to be good or bad depends on who you are. It would be naïve to assume that everything will go according to the spirit of the law. Twitter will now be an easier target for those who find a post inconvenient, and since users' own legal responsibility is indirect at best, one wonders whether the quality of content will sink even further. Twitter may need to bar many, many more people than ever before.

But patriotic societies like walls. Social media platforms have been undercutting the preferred strategy of how our hypernationalistic nation wants to project itself. That discrepancy is far less pronounced in countries that don't really care that much. For us, these platforms have been uncontrolled dhobi ghats for our dirty linen, and that is not how we prefer it. Now, finally, someone will have some control over how we communicate about ourselves.

Shashidhar Nanjundaiah has headed private schools of journalism and media in India. Now working in an independent capacity, Prof Nanjundaiah sees the need for better news literacy, especially among younger audiences. He writes often for MxMIndia. His views here are personal.

 

 
