Disinformation, platform regulation, and freedom of expression: My IVLP Experience
Over the last few years, I have worked on freedom of expression and press freedom in various capacities. Last year I published a book chapter titled ‘Constitutions, freedom of expression, internet shutdowns, social media and defamation laws in Africa’. The chapter covers contemporary means of exercising freedom of expression and explores how internet shutdowns, social media regulation, and defamation laws threaten that freedom, and how those threats could be tackled. One of the countries I often cite as an example of enhanced freedom of expression is the United States (U.S.).
The First Amendment of the U.S. Constitution provides that “Congress shall make no law respecting an establishment of religion, or prohibiting the free exercise thereof; or abridging the freedom of speech, or of the press; or the right of the people peaceably to assemble, and to petition the Government for a redress of grievances”. Essentially, the Amendment provides for freedom of religion, freedom of speech, freedom of the press, freedom of assembly, and the right to petition the government. The First Amendment is the reason the press and digital platforms thrive in the U.S.
Any attempts to muzzle or extinguish the above freedoms are promptly struck down and declared unconstitutional by the U.S. courts. For example, the U.S. Supreme Court in New York Times Co. v Sullivan ruled that public officials and public figures must meet a higher standard of proof in defamation cases. They must prove that the defamatory statements were made with “actual malice”, meaning the statements were made with knowledge of their falsity or with reckless disregard for the truth. The decision has made it an uphill task to successfully sue media organisations for defamation. A good comparator is Kenyan courts declaring criminal defamation unconstitutional, paving the way for civil defamation processes that also require proof of malice.
This month (September 2023), I have been taking part in the ‘Edward R. Murrow Program for Journalists: Media in an Age of Disinformation’. The Programme is part of the U.S. Department of State’s International Visitor Leadership Program (IVLP). During the programme, I have engaged media practitioners, Federal and State officials, academics, and political operatives on how the First Amendment and technology influence disinformation and free speech.
How the U.S. deals with these issues affects Kenya. Most important is the fact that the digital platforms (Meta, Alphabet, and X) that Kenyans use to express themselves are American companies that operate under American laws and espouse American ideals. One illustration: digital platforms moved non-E.U. personal data to the U.S. when the E.U. GDPR came into force, meaning our personal data is by and large regulated under U.S. law. But this also raises the question of how Kenya will, in the long run, seek to regulate the multinational digital platforms. Perhaps the conclusion of the Worldcoin suits and investigations will offer concrete direction.
I draw several insights from my IVLP programme. One, digital and media literacy continues to be a problem for the U.S., Kenya, and the world at large. This might well be a contributor to the proliferation of misinformation and the abuse of freedom of expression and press rights. Digital and media literacy encompasses the ability to access, analyse, evaluate, and create media content, as well as to navigate and critically assess digital technologies and online information. These skills are crucial for individuals to make informed decisions, be responsible consumers and creators of content, and participate effectively in the digital world. In my analysis, these are skills that should be provided at both national and county levels. Yes, county governments, just like the national government, have a role to play in ensuring county residents can access, analyse, evaluate, and create media content on county issues.
Secondly, the U.S. First Amendment is a great provision that provides wide-ranging protections, with political speech enjoying the highest level of protection. In Kenya, by contrast, the rights under Articles 33 and 34 of the Constitution are not absolute. Unlike the U.S., Kenya’s media self-regulation is clearly defined through the Media Council Act and the establishment of the Media Complaints Commission.
Thirdly, the war against misinformation is a challenging one. Even with fact checking and the maintenance of credible information sources, misinformation will always thrive. This is exacerbated by the fact that social media companies thrive on clicks and trends to raise revenue through targeted advertising. Regulation of social media companies in the U.S. that would impact the exercise of First Amendment rights is highly frowned upon. In addition, section 230 of the Communications Decency Act of 1996 protects these companies. The section provides immunity for digital platforms by stating that “no provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider”. This shields platforms from legal action over content posted by their users.
Recently, in Kenya, there has been debate on the role social media platforms play in fanning hate speech and incitement. What should happen to these platforms if speech posted on them amounts to propaganda for war, incitement to violence, hate speech, advocacy of hatred, or injures reputations? A suit filed in the Kenyan courts on the role of social media companies in fuelling conflict in Ethiopia will perhaps provide a pointer as to what the law ought to be.
The only regulation of these platforms that seems to be working in the U.S. relates to their trade practices, as regulated by the Federal Trade Commission (FTC). The FTC has been taking platforms to task over how they collect personal information through deceptive means and use it for commercial purposes. California data protection authorities are also taking big tech to task over how they process personal information.
Fourthly, and related to the third point above, is the proliferation of A.I. technology, which affects free speech, free press, and the right to privacy. Last week, American Senators in Washington met big tech leaders to brainstorm possible options for A.I. regulation. Most interesting is that many big tech leaders now believe in A.I. regulation, while for decades they have pushed back against regulation touching on privacy rights, free speech, freedom of the press, and platform regulation generally. They see the potential harms of A.I. but not the potential harms of their decades-long operations. It is even more worrying that A.I. technology is evolving faster than these leaders can spell A.I.
Kenya, being a net consumer of A.I. technology, is yet to signal any possibility of A.I. regulation. Perhaps this net-consumer status might change, as President Ruto was recently wooing Silicon Valley outfits to set up shop in Kenya. I am curious about the regulatory framework that will evolve as these companies seek to implant themselves in the Silicon Savannah. Perhaps we will have competition and consumer regulators taking action similar to what the FTC is doing. Or will Kenya go the European Union route, which provides for stringent data protection, A.I., and platform regulation?
Fifthly, and still on big tech, I had a conversation with a Democrat campaign manager on microtargeting in political processes. The campaign manager reluctantly admitted that microtargeting still exists in American politics, and that a myriad of databases provide the data that is curated for politically microtargeted messaging. Data brokers thrive, and publicly available data is accessible to all for use in political processes. In my draft PhD thesis, I provide legislative proposals on how to regulate data brokers generally. Interestingly, California recently tabled a draft Bill that would require data brokers to delete personal data at the request of data subjects.
It is important to consider that microtargeting happens mostly within digital platforms that are privately owned. To illustrate why ownership matters: Meta provides researchers and civil society with access to its data and has worked hard to curb Cambridge Analytica-esque operations. X (formerly Twitter), on the other hand, has gone in the opposite direction, sacking its government and public policy departments and providing no access to researchers or civil society. Simply put, being private entities with limited regulation, these companies do as they wish.
Sixthly, it is my considered view that dealing with disinformation, platform regulation, and freedom of expression in Kenya requires a multi-regulatory approach. In my draft PhD thesis, I propose that regulators such as the Office of the Data Protection Commissioner, the Competition Authority of Kenya, and the Communications Authority work in partnership when dealing with multidimensional regulatory questions.
Seventhly, clear open government law and practice at both Federal and State levels is critical. However, some States and the Federal government slow down access-to-information processes, necessitating court action. This is similar to Kenya, where individual citizens are left with no option but to file court petitions for national and county governments to grant access to information. I hazard a guess that access-to-information processes could be made more seamless.
Eighthly, it is critical that media organisations and legal practitioners start crafting and implementing organisation-wide A.I. policies. To illustrate, one media organisation I interacted with has a draft A.I. policy that, among other things, prohibits the use of tools like ChatGPT to write media stories, requires disclosure where such tools are used, and requires layered due diligence to ensure factual reporting. On the legal side, there have been incidents of lawyers using ChatGPT and citing fabricated authorities. Media and legal practitioners need very clear policy guidelines on how to use and interact with A.I., as this goes to their ethical practice, credibility, and professionalism.
Lastly, as I write this, the American electoral season is gaining momentum, and disinformation will most definitely play a role in political messaging. Social media platforms and newly minted A.I. tools will be brought to bear in a big way. On my part, I will be watching closely how A.I. tools direct public debate, especially highly protected political speech. Will the U.S. Congress, the FTC, U.S. courts, and State data protection authorities take any action before the U.S. elections in 2024?
As an academic, there is much more I could write about disinformation, platform regulation, freedom of expression, and my IVLP experience. All in all, I am truly grateful to the U.S. Embassy, Nairobi, and the Department of State for providing me with the opportunity to dig deeper into issues I have been working on academically and professionally.