Three Cheers for Regulation

During the Industrial Revolution, labor organizations, social movements, the media, and government came together to rein in big business, providing lessons on how to regulate firms of today like Facebook, Amazon, and Google, writes SSIR's editor-in-chief in an introduction to the Summer 2019 issue.


By Eric Nee Summer 2019


Before I joined Stanford Social Innovation Review in 2006, I spent almost 20 years in Silicon Valley reporting and writing about the technology industry for a variety of business publications, including Fortune and Forbes magazines. One of the most exciting developments I covered was the emergence of the Internet and the World Wide Web.

Many of the Web’s early supporters believed that it would usher in a utopian world where the powerless would be on an equal footing with the powerful. There was no central authority controlling access to the Web, or regulating who could create a website or what they could publish. A man living in Des Moines, Iowa, would have the same ability to reach everyone on the Web as the editors of The New York Times.

Software standards for the Web were open, license-free, and controlled by an international community—a far cry from the top-down profit-seeking approach to technology then pursued by the likes of IBM, Microsoft, and Apple. The possibilities for the Web were endless: open government, open data, open access, free education, and free information. The new crop of Web-based companies embraced that belief, arguing that the Internet and Internet-based companies shouldn’t be regulated. Libertarian ideology reigned.

But as we all know, the Internet became dominated by these same rebels—Facebook, Amazon, and Google—all of whom pursued profit and market dominance as aggressively as Standard Oil or US Steel ever did. The Internet not only has become dominated by these powerful companies but also is being used by companies, governments, and others to gather information on people and to actively misinform them.

But it doesn’t have to be that way. During the Industrial Revolution, big business was also largely unregulated and took advantage of a laissez-faire environment to pollute, to pay low wages and compel people to work long hours, and to use its monopoly control to squeeze suppliers and gouge customers.

But labor organizations, social movements, the media, and government came together to create regulations that changed the way companies operate. And guess what? Capitalism wasn’t destroyed. In fact, companies thrived, and a balance was struck between business and society. That balance has been undone in recent years, but it does provide a lesson for how society might similarly control Internet companies.

One of the organizations that have been fighting for the digital rights of individuals and society for nearly 30 years is the Electronic Frontier Foundation (EFF). Much of its efforts have focused on limiting government control and preserving individual freedom on the Internet, issues that continue to be important. But other organizations are beginning to take on business as well.

In this issue of Stanford Social Innovation Review, we take a close look at the history of the EFF in our Case Study, “The Invention of Digital Civil Society.” The article’s author—Lucy Bernholz, senior research scholar and director of the Digital Civil Society Lab at SSIR’s parent organization, the Stanford Center on Philanthropy and Civil Society—has been active in this field for many years.


Should the Government Regulate Social Media?

  • Herbert Lin,
  • Marshall Van Alstyne

Government regulation to prevent the spread of misinformation and disinformation is neither desirable nor feasible. It is not desirable because any process developed to address the problem cannot be made immune to political co-optation. Nor is it feasible without significant departures from First Amendment jurisprudence and clear definitions of misinformation and disinformation. 

Read the rest at Divided We Fall


Internet Regulation: The Responsibility of the People

Jan 31, 2020


ESSAY TOPIC: Is there an ethical responsibility to regulate the Internet? If so, why and to what extent? If not, why not?

Last summer, the Federal Trade Commission (FTC) fined Facebook $5 billion for violating the privacy rights of its users. Many argue, however, that users of Facebook have little to expect when it comes to privacy. To an extent, that is true. By putting their personal information online, people allowed their data to be harvested by companies through the use of internet cookies and IP tracking. As users continue to share more and more of their lives online, the expectation of privacy will continue to diminish. The Facebook case is simply an indicator of a wider issue that has arisen in this new internet age.

It is not only companies or friends that see the information we post: data breaches have exposed users' financial information, and governments have used their power to monitor users' online activities. In an age when it seems like everything is shared online, it is more necessary than ever for people and governments alike to determine their responsibilities and take control of the direction of the internet.

Governments, in particular, have a difficult road ahead as they determine how much of their citizens' internet lives they wish to monitor and regulate. Their choices will determine the types of regulations, whether censorship of information or media, tracking what websites or news users share, or even connecting usage to a user's specific location. Some governments around the world have already taken such steps, justifying these actions by saying it is essential for the safety of their citizenry. However, many internet users question whether governments have their best interests at heart. Rather, they believe that governments are trying to expand control over their citizens. In many cases, this is true. Therefore, it is the opinion of this writer that allowing governments to regulate, monitor, and control their citizens' internet activities grants them too much power; instead, the people should take steps to monitor themselves.

The past few years have seen a dramatic increase in the amount of internet "trolling," which refers to users who act in bad faith in their online interactions, relying on the anonymous nature of the internet to protect them. They might lie about who they are, try to upset others, or share false information, often referred to as "fake news." Though this might seem harmless by itself, in large doses it can have an impact on world events. For instance, according to The Telegraph, during the 2016 British referendum on European Union membership, Russian trolls sought to influence voters. On the day of the election, they sent millions of tweets and online messages supporting the leave campaign. Russian trolls were also found to be active during the 2016 American presidential election. Posing as American citizens, they set up spam accounts that posted news stories, many of which were false, in order to sway voter opinion.

Though it is difficult to estimate the effect these accounts had on the outcomes of the elections, the accounts earned many followers and millions of retweets in which others shared the false information. This rise in "fake news" and "trolling" has led many in the government to consider legislation that would track internet users' activities, so they might better combat these abuses.

Some countries have already taken steps to clamp down on such abuses. South Korea, for instance, has for years used an internet authentication process that ties a person's internet profile to a phone number and real name. This past year, Australia enacted legislation in response to the Christchurch massacre in New Zealand. The legislation gives the government the authority to remove content it deems too violent or offensive from media or social websites. Other countries like the United Kingdom have enacted similar laws to mixed reactions.

While the countries listed above might have their citizens' best interests at heart, one need look no further than China to see how such controls might be abused by governments.

For years, China has heavily regulated and censored the internet that Chinese citizens experience, a system often referred to as "The Great Firewall of China." For an internet company to become available in China, it must first tailor its website so it does not conflict with Chinese interests. For instance, Google, one of the largest companies in the world, famously scrubbed internet searches of references to any brutality or wrongdoing by the Chinese government before it was allowed to operate in China. In addition to censoring the type of media available, China has also taken steps to monitor its citizens' actions online. To obtain internet access, users must register their names and phone numbers. In December, the government plans to roll out a new system that requires users to register their faces to get service. While these regulations and censorship have been criticized in the past, the scrutiny has increased in recent months due to the protests and subsequent crackdown in Hong Kong. There, the Chinese government has used its internet regulation to track protestors and block access to helpful internet applications. Apple, for instance, has removed from its app store an application used by protestors to track police movements. China has also blocked the use of privacy protection programs, called VPNs, so users cannot hide their internet activities. Clearly, citizens cannot always rely on their governments to regulate the internet in their best interests.

Ultimate responsibility lies where it always has: with the people. Governments might have to answer to the people, but they move slowly or are heavy-handed in their approach. Citizens must take control of regulating the internet. In many instances, users already have, through fact-checking and economic pressure. The rise of "trolling" and "fake news" has encouraged the growth of internet fact-checking sites that provide information on the truthfulness of news stories, posts made by public figures, and even speeches given in real time. This allows users to sift through and separate the fake news from the real. Users also have economic power when it comes to the internet. Companies like Twitter have recently bowed to pressure from users, monitoring their services more closely for bots and troll accounts. Already, Twitter has suspended millions of accounts found to be spreading false or harmful information. These actions have come about because of user complaints.

The internet, more than anything, brings people together. In its short history, it has already become the greatest source of information and sharing the world has ever known, and it has become this because of the creativity, ingenuity, and contributions of regular people. As the internet was created by the people for the people, should it not also be controlled by the people? Governments may have their role to play, but it needs to be at the behest of their citizens, not the other way around.

Earlier, I mentioned that the FTC fined Facebook $5 billion. How much does that concern its CEO Mark Zuckerberg, who is worth over $70 billion? Probably not much.

However, at the same time, Facebook lost over 15 million subscribers, a substantial loss in an industry where growth is everything. What concerns Facebook more: the power of governments to levy fines, or the power of its users to leave, thereby making the platform obsolete? As always, the power belongs to the people.

Works Cited:

Al-Heeti, Abrar. "Facebook Lost 15 Million US Users in the Past Two Years, Report Says." CNET, CNET, 6 Mar. 2019, www.cnet.com/news/facebook-lost-15-million-us-users-in-the-past-two-years-report-says/ .

"Apple Bans Hong Kong Protest Location App." BBC News, BBC, 3 Oct. 2019, www.bbc.com/news/technology-49919459 .

Ellis, Megan. "The 8 Best Fact-Checking Sites for Finding Unbiased Truth." MakeUseOf, 30 Sept. 2019, www.makeuseof.com/tag/true-5-factchecking-web.

"Google in China: Internet Giant 'Plans Censored Search Engine'." BBC News, BBC, 2 Aug. 2018, www.bbc.com/news/technology-45041671.

Griffiths, James. "Governments Are Rushing to Regulate the Internet. Users Could End up Paying the Price." CNN, Cable News Network, 8 Apr. 2019, edition.cnn.com/2019/04/08/uk/internet-regulation-uk-australia-intl-gbr/index.html.

Thompson, Nicholas, and Issie Lapowsky. "How Russian Trolls Used Meme Warfare to Divide America." Wired, Condé Nast, 17 Dec. 2018, www.wired.com/story/russia-ira-propaganda-senate-report.

Perper, Rosie. "Chinese Citizens Will Soon Need to Scan Their Face before They Can Access Internet Services or Get a New Phone Number." Business Insider, Business Insider, 10 Oct. 2019, www.businessinsider.com/china-to-require-facial-id-for-internet-and-mobile-services-2019-10 .

Popken, Ben. "Russian Trolls Went on Attack during Key Election Moments." NBCNews.com, NBCUniversal News Group, 14 Feb. 2018, www.nbcnews.com/tech/social-media/russian-trolls-went-attack-during-key-election-moments-n827176.

Shepardson, David. "Facebook to Create Privacy Panel, Pay $5 Billion to U.S. to Settle Allegations." Reuters, Thomson Reuters, 24 July 2019, www.reuters.com/article/us-facebook-ftc/facebook-to-create-privacy-panel-pay-5-billion-to-u-s-to-settle-allegations-idUSKCN1UI2GC.

Timberg, Craig, and Elizabeth Dwoskin. "Twitter Is Sweeping out Fake Accounts, Suspending More than 70 Million in 2 Months." Chicagotribune.com, 7 July 2018, www.chicagotribune.com/business/ct-twitter-removes-fake-accounts-bots-20180706-story.html .

Woollacott, Emma. "Russian Trolls Used Islamophobia To Whip Up Support For Brexit." Forbes, Forbes Magazine, 1 Nov. 2018, www.forbes.com/sites/emmawoollacott/2018/11/01/russian-trolls-used-islamophobia-to-whip-up-support-for-brexit/#3f409b7b65f2.

Field, Matthew, and Mike Wright. "Russian Trolls Sent Thousands of pro-Leave Messages on Day of Brexit Referendum, Twitter Data Reveals." The Telegraph, Telegraph Media Group, 17 Oct. 2018, www.telegraph.co.uk/technology/2018/10/17/russian-iranian-twitter-trolls-sent-10-million-tweets-fake-news/.

Yoon, Julia. "South Korea and Internet Censorship." The Henry M. Jackson School of International Studies, 11 July 2019, jsis.washington.edu/news/south-korea-internet-censorship/.


Should the Government Regulate the Internet?


Introduction

Net neutrality is the concept that internet service providers (ISPs) ought to treat all internet traffic equally and not intercede between users and their internet destinations. Net neutrality policies were officially implemented by the Federal Communications Commission (FCC) in 2015. Through this action, the FCC classified the internet as a regulated utility under the Communications Act of 1934. Advocates of net neutrality argued that, unless net neutrality was implemented, ISPs would throttle (diminish) the speed at which users could access certain websites, usually those that consume a large amount of bandwidth. Advocates also claimed that ISPs would eventually charge fees to websites in return for unencumbered user access to those sites. Skeptics of net neutrality argue that the government is poorly suited to regulate such a vast and changing communications tool. Further, providing internet access is a costly business for ISPs, and businesses that provide and innovate valuable services should be rewarded for their work. Net neutrality, in their view, harms economic prosperity and the free flow of information.

  • Video:  “Internet Citizens: Defend Net Neutrality,” G.P. Grey
  • Video:  “Net Neutrality: How the FCC Could Kill Call of Duty,”  Learn Liberty
  • “Pros and Cons: Net Neutrality and the Internet as a Utility,”  Johnson City Press
  • As part of their homework, they should compose a short summary, in their own words, of the main points of the pro and con perspectives included in the article. Their summaries may address the following contentions, but may not be limited to them.
  • In the pro-net neutrality piece, the author argues that ISPs may begin to funnel internet users into fast and slow-speed lanes, depending on how they use the internet and which websites they access. This would degrade the quality of the internet user’s experience. Furthermore, there is a fear that ISPs will charge websites for regular-speed user access. This is a problem because very few websites would be able to pay such fees, giving advantages to a few wealthy companies while harming many others. Next, the pro-neutrality author writes that net neutrality has been the de facto management policy of ISPs since the beginning. The implementation of the principles of net neutrality has led to vast innovation and growth. The FCC is merely codifying these positive principles and enhancing the integrity of the economy.
  • The anti-net neutrality author argues that the government should not become an internet traffic cop, picking and choosing how ISPs run their businesses. They argue that ISPs will lose financial incentives and have no reason to expand coverage to underserved areas. If ISPs cannot afford to innovate, then many people will be out of jobs. High-bandwidth websites (such as Netflix) are costly burdens for ISPs, which must find ways to make up for those shortfalls. Moreover, regulating the internet under the Communications Act of 1934, a law conceived about sixty years before the internet was widely available, is a foolish idea that burdens modern technology with antiquated law.
  • After their summaries, students should record which side of the debate they favor, and why.
  • Answer: It was created by Congress through the Communications Act of 1934 and signed into law by President Franklin Roosevelt. Oversight is performed by Congress. The President appoints its five commissioners.
  • Answer: It was created to regulate telephone and radio communications. As new methods of communication were pioneered by entrepreneurs, scientists, and engineers, the FCC expanded its regulatory responsibilities to include television, cable, and satellite communications.  Originally, its main mission was to provide equal and affordable access for all people to communications services, and ensure the viability of the nation’s communications networks.
  • Do they believe the FCC performs an important regulatory function, or should Congress consider alternative methods for overseeing communications? Should any part of the government oversee communications at all?
  • Should the internet be regulated in any capacity? If so, why?  If not, why not?
  • Where do they stand on net neutrality? Ask them to consult the viewpoint they recorded in their homework the evening before.  Now that they have shared a discussion about net neutrality and heard more viewpoints, have their own viewpoints changed?  If so, why?  If their stance remains the same, what are the compelling arguments that brought them to their stance?

Are We Entering a New Era of Social Media Regulation?

  • Dipayan Ghosh


The attack on the Capitol could mark a point of no return.

The violence at the U.S. Capitol — and the ensuing actions taken by social media platforms — suggest that we may be at a turning point as far as how business leaders and government bodies approach social media regulation. But what exactly will this look like, and how will platforms balance supporting free speech with getting a handle on the rampant misinformation, conspiracy theories, and promotion of fringe, extremist content that contributed so significantly to last week’s riots? The author argues that the key is to understand that there are fundamental structural differences between traditional media and social media, and to adapt approaches to regulation accordingly. The author goes on to suggest several areas of both self-regulation and legislative reform that we’re likely to see in the coming months in response to both recent events and ongoing concerns with how social media companies operate.

After years of controversy over President Trump’s use of social media to share misleading content and inflame his millions of followers, social media giants Facebook and Twitter finally took a clear stand last week, banning Trump from their platforms — Facebook indefinitely, and Twitter permanently. Could this indicate a turning point in how social media companies handle potentially harmful content shared on their platforms? And could it herald a new era of social media reforms, through both government policies and self-regulation?

  • Dipayan Ghosh is co-director of the Digital Platforms & Democracy Project at the Mossavar-Rahmani Center for Business and Government at the Harvard Kennedy School. He was a technology and economic policy advisor in the Obama White House, and formerly served as an advisor on privacy and public policy issues at Facebook. He is the author of Terms of Disservice (2020). Follow him on Twitter @ghoshd7.


The Internet: To Regulate Or Not To Regulate?


A considerable amount of attention in the UK has been placed on regulating the internet.

While some have argued that it is simply not possible or counterproductive, fears over security risks tend to bolster calls for regulation. This is particularly the case when extremist and terrorist content continues to be hosted online, unchallenged.

Work by my colleague Mubaraz Ahmed, for example, has found that each month, more than 484,000 Google searches take place globally using keywords that return results dominated by extremist material. More concerning is the fact that ‘high-risk’ keywords like ‘caliphate’ and ‘Dabiq’ – the name of Islamic State’s English-language magazine – go largely unchallenged.

Coupled with this ease of access are newly created powers within the United Kingdom’s Counter-Terrorism and Border Security Bill. Now, it is a crime to view terrorist-related material online three or more times, with a penalty of up to 15 years in jail.

In my opinion, this penalty on the end user is a little unfair. If the responsibility for consumption lies with the consumer, easier channels for reporting extremist content and those who promote it must be created.

In fact, one could question why internet giants are not made to take more responsibility for allowing extremist or terrorist content to remain on their platforms – which leads us back to the debate on regulation.

The first issue is cooperation. Extremist content, instructional terrorist material, as well as funding campaigns to raise money for terrorist groups, can be found on all parts of the internet – with varying degrees of accessibility. Therefore, regulation of the internet will only be possible with the cooperation of multiple government agencies, private sector companies, and end users, particularly when it comes to regulation to remove harmful or hateful material, and content that threatens national security. With the often hostile environment that tech companies are forced to face in Parliamentary enquiries on their self-regulation, this seems challenging.

Second, we can only determine the legal liability of online platforms for the content they host after deciding how best to deal with unacceptable online content: that of an extremist and/or illegal nature. The removal of extremist and terrorist content from the internet – particularly in the case of artificial intelligence programs that may do ‘bulk’ removals - creates a risk that evidence needed for prosecution of individuals disseminating content or providing material support to terrorist organisations may be lost. Technology companies should work with law enforcement to ensure that this material is not simply removed, but archived effectively to understand patterns of behavior.

Third, on the part of technology companies, greater transparency is needed when it comes to government definitions of terrorism and extremism for legislative purposes, particularly on definitions of terrorism online. Given there is no comprehensive international legal definition of terrorism and the internet is a global space, it is perhaps unsurprising that technology companies have struggled to remove content seen as facilitating radicalization on their globally operating platforms.

The existing powers and regulations available in the United Kingdom to audit and regulate the internet are unclear. Further complicating the matter is the fact that companies such as Google and Facebook operate as quasi-monopolies and enjoy dominant market positions.

The most desirable option when it comes to moderating content is to apply greater pressure on these companies to adopt a self-regulatory model in which the removal of extremist content hosted on their platforms is made transparent and accountable through the publication of a quarterly report.

Such reports should reference statistics on content flagged by users, outcome of investigated content, decision-making systems employed by these companies on content removal, case studies, and areas for improvement.

Transparency will further incentivize technology companies to cooperate in this field, and has the potential to foster further innovation in the successful removal of extremist and hate content.

Crucially, the public should be able to report and flag extremist content found on the internet to those companies hosting such content – and these concerns should be taken more seriously, with a conversation between company and user.

For example, there is still no ‘flagging’ system for users to report instructional terrorist manuals or disturbing extremist content on Google search results, with software often auto-predicting extremist literature or directing vulnerable people who may consume this content to more extremist literature (in multiple languages). Internet users must be able to flag content as specifically terrorist-related on all social media sites, rather than as just ‘hate content’.

An example of a solution could be the creation and dissemination of trusted third-party programs for platforms like Google, and other search engines, to make such extremist material less visible.

And finally, it is the responsibility of technology companies to ensure that algorithms do not lead their users to sites containing terrorist propaganda, based on their search history.

Regulating these companies for failures on the above will help us move beyond a model of all talk and little action.

Nikita Malik


Should the Internet Be Regulated?


By Concepción De León

  • Dec. 1, 2017

The Federal Communications Commission’s plan to roll back net neutrality has sparked intense debate; supporters of net neutrality worry that deregulation would limit access to information in a way that disproportionately affects vulnerable populations, while opponents argue that the market naturally regulates itself without government interference. Here are three books that examine both arguments and their historical precedents.

THE VICTORIAN INTERNET The Remarkable Story of the Telegraph and the Nineteenth Century’s Online Pioneers By Tom Standage 227 pp. Walker & Co. (1998)

In this history of the telegraph, which was developed in the United States and Britain during the 1840s, Standage demonstrates the parallels between the innovative technology of that era and today’s internet. The telegraph allowed people to communicate globally, changing the way business was conducted and even making transnational romance a possibility. Many hoped the accelerated communication would inspire greater international harmony. Standage cites a toast by the British ambassador in 1858 to “the telegraph wire, the nerve of international life, transmitting knowledge of events, removing causes of misunderstanding and promoting peace and harmony throughout the world.” The reality was less idyllic; people found ways to use the new form of communication to nefarious ends (like delaying messages or hacking private communication) and divisions were still perpetuated. But the telegraph’s cultural impact is undeniable, and Standage discusses its enduring influence in this book.

WHO CONTROLS THE INTERNET? Illusions of a Borderless World By Jack Goldsmith and Tim Wu 238 pp. Oxford University Press. (2006)

For an overview of the fight to keep the internet open, turn to this book, written by Wu, the Columbia law professor who coined the term “network neutrality,” and Harvard professor Jack Goldsmith. As the subtitle suggests, Goldsmith and Wu reckon with the idea that the internet would transcend borders and territorial rule. They cite case studies like Google’s struggle to do business in France and Yahoo’s compliance with Chinese censorship to demonstrate how governments continue to exert their influence to control the web. In his second book, “The Master Switch,” Wu discusses how consolidation in the communications industry can lead to stringent control of information by corporations and threaten the internet’s democratic design.

THE FALLACY OF NET NEUTRALITY By Thomas W. Hazlett 56 pp. Encounter Books. (2011)

This brief primer presents the opposing view; Hazlett argues that government regulation stalls and suppresses innovation and that competing networks should be allowed to hash out the rules of managing web traffic among themselves. As he writes in his book, “This bountiful marketplace has emerged unplanned, unregulated, from the visions of technologists, the risks of venture capitalists, and the innovations of entrepreneurs.” Hazlett believes that trend can and should continue on its own.



How much control should a government have over citizens' social media content?

An appeals court has ruled against the Biden administration for contact with social media companies. NPR's Michel Martin talks about the ruling with Mark MacCarthy of the Brookings Institution.

Copyright © 2023 NPR. All rights reserved. Visit our website terms of use and permissions pages at www.npr.org for further information.


Should the Internet Be Regulated? Essay


Some skeptics advocate, for ostensibly legitimate purposes, controlling the internet, which has become the world's most powerful yet least regulated tool. Others hold that the internet does not require regulation because its users have never consented to be governed, a position I agree with. Unlike conventional areas of jurisdiction, the internet cannot have a constitution or declaration instituted among its users the way a country can for its people.

Moreover, no one has a legitimate claim to the internet, and consequently no one can claim authority over it. Indeed, questions arise about how any entity could control people's conduct online. The French government's attempts to control the internet failed for similar reasons, chiefly its lack of jurisdiction. As an integral part of the global communication network, the internet has attracted regulators who point to its lawless nature; arguably, it should not be controlled, given the lack of legal grounding for such regulations.

The internet's central role in communication makes the gatekeepers of ubiquitous mobile and electronic communication systems seek to regulate it for the public good. Others cite cyber and electronic security and the need to extend the computer law that predated the internet, through which regulators governed electronic data interchange, cybernetics, and robotics. However, it is critical to note that the public's attitude toward regulation then differed from today's. Undeniably, it would be challenging to balance regulating the internet with maintaining freedom of speech.

It is also nearly impossible to weigh the opportunities of internet use against the dangers posed to users; the latter are too insignificant relative to the former to warrant regulation. Today's internet users are knowledgeable and understand the risks they are exposed to online; hence, protecting them cannot justify regulation. Nevertheless, despite the lack of legal grounding for regulating the internet, there is a need to manage cyberspace, but within the limits of freedom of speech and the other freedoms owed to internet users.

Cyber-attacks continue to threaten lives and national security, as previous incidents show, reiterating the need to protect cyberspace. An event such as the attempt to poison a Florida city's water supply by hacking into its water systems highlights the "bad actors out there" (Tidy, 2021). To protect the public from such actors, entities such as the Department of Defense have been tasked with defending cyberspace through the DOD Information Network (DODIN).

The entity works with military intelligence and private sector stakeholders to secure both physical domains and cyberspace. Indeed, the unit's achievements warrant its full authorization to secure the internet. Although it would not control individual actions on the internet or attempt to regulate its use, it would be in charge of securing systems, monitoring offensive cyberspace operations, and defending cyberspace operations.

In addition to the DODIN, the U.S. government created the Cybersecurity and Infrastructure Security Agency (CISA), a federal agency meant to protect critical infrastructure from physical and cyber threats. The agency is well suited to protecting the internet because of its specific mandate. From the roles allocated to these agencies, it is evident that the internet cannot be regulated per se; instead, the risks arising from its use can be managed. These arguments suggest that the internet will remain a free domain, a tool for expression and communication that cannot be regulated, only secured.

The lack of a legitimate claim complicates any attempt to introduce regulations controlling the internet. Despite being the pinnacle of global communications, the internet remains a free domain through which people exercise their freedom of speech; thus, any attempt to regulate it must respect that freedom. The lack of proper regulatory structures does not, however, stop CISA and the DODIN from securing cyberspace and critical infrastructure against cyber-attacks. The nature of the internet will continue to attract regulators who cite the prevalence of lawlessness and the risks of an unregulated internet.

Tidy, J. (2021). Hacker tries to poison water supply of Florida city. BBC.

  • "What Does the Internet Teach Your Teen About Sex?" Summary
  • JMeter and Locust Load Tests for Websites
  • Cyberspace: Statistics, Policy, and Crimes
  • Military Cyberspace as a New Technology
  • Cybercrime: Criminal Threats From Cyberspace
  • Internet Technology and Impact on Human Behavior
  • Is the Internet Affecting People Negatively?
  • Media and Internet: Accurate vs. Inaccurate
  • The Million Dollar Page: Why Is It Popular?
  • Networking: Implementation of Authentication Process
  • Chicago (A-D)
  • Chicago (N-B)




Whom to Protect And How: The Public, the Government, and the Internet Revolution

By Drew E. Altman, John M. Benson, Marcus D. Rosenbaum, Minah Kim, Mollyann Brodie, Rebecca Flournoy, and Robert J. Blendon

December 1, 2001

The authors are part of a team conducting ongoing polling on Americans' attitudes about domestic policy issues.

The United States is now in the second stage of a major technological transformation. What began in the 1980s as the Computer Revolution has extended its reach and become the Computer and Internet Revolution. The second stage of the revolution is not only transforming American life, but also leading to calls for federal government protection from perceived threats presented by specific Internet content. Because of First Amendment concerns and the difficulty of regulating this international technology, the government will find it hard to provide the kind of oversight the public wants.

During the first stage of the Computer and Internet Revolution, computer use grew rapidly. Between 1985 and 1999, the share of Americans who used a computer at work or at home more than doubled, from 30 percent to 70 percent. The increase in home computer ownership was even more striking, quadrupling from 15 percent in 1985 to 60 percent by century’s end (table 1).

Table 1. Share of the Public with Access to Computers at Home and at Work, 1985-99 (percent)

                        1985    1990    1995    1997    1999 (a)    1999 (b)
Computer at work          25      32      39      38        42          44
Computer at home          15      22      36      42        54          60
Computer at neither       70      58      46      43        35          30

Source: NSF, 1985-1999 (a); NPR-Kaiser-Kennedy School, 1999 (b)

The Internet stage of the revolution started in the mid-1990s. Only five years ago, fewer than one in five Americans (18 percent) had ever used the Internet. As the new century begins, nearly two-thirds (64 percent) have used the Internet some time in their lives. In 1995 only 14 percent of Americans said they went online to access the Internet or to send or receive e-mail. By 1997 that share had more than doubled, to 36 percent, and today more than half (54 percent) go online. Virtually all Americans younger than 60 say they have used a computer (92 percent), and most have used the Internet (75 percent) or sent an e-mail message (67 percent).

The rapid spread of the new technology is not without precedent. Television ownership in the United States exploded from 6 percent in 1949 to 52 percent in 1953 to 83 percent by 1956. Still, the increase in computer use and, in the second wave, Internet use is remarkable.

Although much is made of the Internet’s almost limitless capabilities, at this point people are most likely to use it to get information. Americans use the Internet at home to learn about entertainment, sports, and hobbies (38 percent), current events (37 percent), travel (33 percent), and health (28 percent). Fewer use the Internet to shop (24 percent), pay bills (9 percent), and make investments (9 percent).

A Beneficent Revolution

America’s Internet Revolution is taking place among people already disposed to believe strongly in the benefits of new technology. When asked to rate on a scale of 0 to 100 their interest in 11 issues, Americans ranked new medical discoveries highest (an average of 82), followed in fourth and fifth places by new scientific discoveries (67) and new inventions and technologies (65).

Large majorities of Americans believe that science and technology make lives healthier, easier, and more comfortable (90 percent) and that science and technology will provide more opportunities for the next generation (84 percent). Three-fourths of Americans (74 percent) believe that the benefits of scientific research have outweighed the disadvantages.

The experiences of the past two decades have left most Americans feeling quite positive about the general impact of computers on national life and receptive to the possibilities of the Internet. Asked to choose, from a list of eight options, the two most significant technological developments of the 20th century, Americans put the computer (named by 62 percent) at the top of the list by a large margin over the automobile (34 percent), television (21 percent), and the airplane (16 percent). The landslide vote for the computer may be due in part to its novelty, but Americans clearly regard the computer as a major technological discovery.

Most Americans see the computer’s impact on society as mainly positive. Just over half believe that the computer has given people more control over their lives (17 percent believe it has diminished their control). More than eight out of ten see computers as making life better for Americans (9 percent think computers are making life worse). Sixty-eight percent believe the Internet is making life better (14 percent believe it is making life worse). Americans are more evenly divided in their views on the impact of television: 46 percent believe that TV is making life better, 34 percent think it is making life worse.

Most Americans also view the computer industry positively. More than three out of four (78 percent) think computer software companies serve consumers well, while only 7 percent think their service is poor. Only banks (73 percent) and hospitals (72 percent) have comparably positive ratings, but both have higher negatives (24 percent each). Nearly two-thirds (65 percent) of Americans believe that the Internet industry is doing a good job serving its consumers; again, only 7 percent think it is doing a bad job.

Despite some early fears, most Americans do not think the use of computers in the workplace displaces workers or depresses wages. A plurality (43 percent) think the growing use of computers will create more jobs; 32 percent think it will mean fewer jobs; about a quarter think it will not make much difference. Americans are evenly divided, at 39 percent each, on whether the use of computers will raise wages or not have much effect; but only 19 percent believe it will lower wages.

In two areas—the amount of free time and time spent with family and friends—Americans do not believe computers have improved life. Only one-fourth (24 percent) of the public believes that computers have given people more free time. Nearly half think computers have actually reduced free time. And more than half (58 percent) say computers have led people to spend less time with families and friends.

What Role for Government?

The first wave of the Computer and Internet Revolution led many Americans to see a role for government in narrowing a “digital divide” in American society, a problem that continues to concern the public today. Nearly half (45 percent) believe that access to computers widens the gap between the haves and the have-nots, while only 11 percent believe that it narrows the gap; 39 percent think it has not made much difference. A majority of Americans (57 percent) believe the government should help low-income people get access to computers and the Internet, and 78 percent say the government should help low-income children.

The Internet Revolution is leading to a broader range of public concerns, accompanied by calls for more government involvement in specific areas (table 2). Eighty-five percent of Americans cite as a major problem the possibility of dangerous strangers making contact with children; 84 percent, the availability of pornography to children; and 73 percent, the availability of information about how to build bombs and other weapons.

Table 2. What the Public Believes the Government Should Do about Key Issues Involving the Internet (percent)

ISSUE                                                          GOVERNMENT   GOVERNMENT    ISSUE IS    DON'T
                                                               SHOULD DO    SHOULD NOT    NOT A       KNOW
                                                               SOMETHING    BE INVOLVED   PROBLEM
Dangerous strangers making contact with kids                       79           15            3          1
The availability of pornography to kids                            75           20            4          1
The availability of information about how to build
  bombs and other weapons                                          75           15            8          1
False advertising                                                  62           20           12          4
Pornography and adult entertainment                                61           26           10          3
The ability to purchase guns                                       61           14           18          5
Loss of privacy                                                    54           29           14          2
Hate speech (information that attacks people based on
  their race, religion, or ethnicity)                              53           27           15          5
Violent games                                                      51           31           15          3

Source: NPR-Kaiser-Kennedy School, 1999

In addition, more than half (56 percent) of Americans regard the loss of privacy as a major problem with computers or the Internet. Although few (4 percent) have ever had an unauthorized person gain access to their financial records or personal information over the Internet, privacy concerns are increasing demands for regulation. More than half (59 percent) of Americans worry that an unauthorized person might gain such access, including 21 percent who say they are very worried. More than three-fourths (81 percent) of people who ever go online say they are concerned about threats to their personal privacy when using the Internet, including 42 percent who say they are very concerned.

What do these trends indicate about a possible new role for government in regulating the Internet? On the one hand, the coming years will witness an upsurge in use of the Internet for a wide variety of purposes, and the public is unlikely to want across-the-board government regulation of the Internet. On the other, most Americans are likely to support legislation to address their specific concerns about the content of the Internet.

Many people are wary of having the government regulate what can be put on the Internet, but they are more willing to accept regulation when it comes to specific threatening content. At least at this point, only about a third of Americans see the need for more government regulation of the Internet industry or the general content of the Internet. But when specific content seen as threatening, such as pornography and bomb-making information, is mentioned, 60 percent favor government restrictions, even if they would impinge on freedom of speech. More than half (57 percent) say that “the federal government needs to regulate what is on the Internet more than television and newspapers because the Internet can be used to gain easier access to dangerous information.”

Three-quarters of Americans say the government should “do something” about the possibility of dangerous strangers making contact with children and about the availability both of pornography to children and of information on how to build explosives (see table 2). A majority also says the government should do something about false advertising (62 percent), the availability of guns (61 percent), pornography (61 percent), the loss of privacy (54 percent), and hate speech (53 percent).

More Americans are worried about specific threats like pornography and bomb-making information on the Internet than about First Amendment issues involved in regulating these threats. When asked which worried them more, 53 percent said they were more concerned that government would not get involved enough in regulating pornography and bomb-making information on the Internet. Only 26 percent were more concerned that government would get too involved in censorship of the Internet.

Public concerns about specific threats on the Internet are not likely to dissipate as more people go online. While Internet users are less likely than nonusers to believe that the content of the Internet needs more regulation than TV or newspaper content, about half of Internet users (as against 65 percent of nonusers) favor this additional regulation in general. In addition, a majority of Internet users believe the government should do something about most of the same specific threats mentioned by nonusers.

The next decade will see an explosion of growth and change in the world of the Internet. Like the advent of television half a century ago, the Internet Revolution will lead to fundamental and in most cases positive changes in the way Americans live. The number of Americans who use the Internet for nearly every activity is likely to double or triple.

Between a Rock and a Hard Place

In the midst of this extraordinary ferment, public pressure will build in favor of more government involvement in regulating specific parts of the Internet's content. Regulatory efforts will raise a number of First Amendment issues, if not with the public, at least within the judicial system. Given that information on the Internet flows almost seamlessly across national borders, the U.S. government, or any other, will find it extremely difficult to limit access to information the public thinks is dangerous. Policymakers are likely to be caught between growing public pressure to protect against perceived threats to national and personal well-being and the limits of their ability to regulate specific Internet content.



Why The Government Should Regulate Social Media

Categories: Cyber Security, Social Media

Words: 732 | Pages: 2 | Published: Sep 12, 2023

Table of contents

  • Misinformation and Disinformation
  • Privacy and Data Security
  • Harmful Content and Online Abuse
  • Algorithmic Transparency and Fairness
  • Protecting Vulnerable Users



How to Regulate (and Not Regulate) Social Media

Creating incentives for social media companies to be responsible and trustworthy institutions

Occasional Papers: an essay series tackling pressing issues at the intersection of speech, privacy, and technology

Introduction

To understand how to regulate social media, you have to understand why you want to regulate it. I will say something about specific regulatory proposals in the last part of this essay. But I want to spend most of my time discussing the why as much as the how.

Here is the central idea: Social media companies are key institutions in the 21st century digital public sphere. A public sphere doesn’t work properly without trusted and trustworthy institutions guided by professional and public-regarding norms. The goal of regulating social media is to create incentives for social media companies to be responsible and trustworthy institutions that will help foster a healthy and vibrant digital public sphere.

What is the public sphere? For purposes of this essay, we can say that the public sphere is the space in which people express opinions and exchange views that judge what is going on in society. Put another way, the public sphere is a set of social practices and institutions in which ideas and opinions circulate. The public sphere is obviously crucial to democracy. But most people’s opinions aren’t about government policy. They are about sports, culture, fashion, gossip, commerce, and so on.

A public sphere is more than just people sitting around talking. It is shaped and governed, and made functional or dysfunctional, rich or poor, by institutions . Most of the institutions that constitute the public sphere are private. They sit between the public and the government. There are lots of examples in the pre-digital world: print and broadcast media, book clubs, spaces for assembly and conversation, sports stadiums, theaters, schools, universities, churches, libraries, archives, museums, and so on.

A digital public sphere is a public sphere that is dominated by digital media and digital technologies. Digital media become the key institutions that either maintain or undermine the health of the public sphere.

Three Kinds of Digital Services

Before discussing how we should regulate social media, I want to distinguish social media from two other parts of the infrastructure of digital communication (see Jack M. Balkin, Free Speech Is a Triangle, 118 Colum. L. Rev. 2011, 2037–40 (2018)). These are:

  • Basic internet services, such as the Domain Name System (DNS), broadband companies, and caching services.
  • Payment systems, such as MasterCard, Visa, and PayPal.

For basic internet services the regulatory answer is pretty simple: nondiscrimination. Let the bits flow freely and efficiently. Don't try to engage in content regulation at this level. Government should enforce nondiscrimination as a matter of policy. Although the question is contested (for example, in the policy debates over network neutrality rules), I believe that enforcing nondiscrimination rules at this level of the internet presents no significant First Amendment problems.

We should treat payment systems, as well as caching and defense systems, like public accommodations, with one caveat: they can refuse to do business with customers who use their services for illegal activities.

Governments and civil society groups often want to use basic internet services and payment systems to go after propagandists, conspiracy mongers, and racist speakers. I think this is a mistake. These businesses are not well designed for content moderation and their decisions will be arbitrary and ad hoc.

I believe that content regulation should occur higher up the stack, to borrow a familiar computer science metaphor.

Instead, these businesses should concern themselves only with the legality or illegality of transactions. Government should require nondiscrimination—otherwise the public and politicians will place irresistible pressure on basic internet services and payment systems to engage in content moderation, which is not their job.

Government requirements of nondiscrimination/public accommodation have this advantage: when civil society groups and politicians demand that these businesses engage in content moderation, or argue that businesses are complicit in the politics of the customers they serve, the businesses can respond that they have no choice because the law requires them not to discriminate against customers who are not engaged in illegal activity.

Instead, content moderation should occur in social media and search engines. For these services, content regulation is inevitable; since it is inevitable, that is where it should occur.

The Public Function of Social Media

Now let’s ask: What is social media’s public function? What tasks should social media perform in the digital public sphere?

This is a normative and interpretive question. So too is the related question of what it means for the public sphere to be well functioning, “healthy,” or “vibrant.” We must decide what makes the digital public sphere function well or badly. Because social media are so new, we have very little history to work with. So we have to make analogies to the longer history of media and democracy. But in doing so, we also have to reckon with the fact that earlier versions of the public sphere may not have functioned well.

I mentioned previously that the public sphere created by social media in the 21st century is a successor to the public sphere created by print and broadcast media in the 20th century. Twentieth-century media helped produce a particular kind of public sphere, different from today’s, because broadcast and print media played a different role than social media do today. These companies—or their contractual partners—produced most of the content that they published or broadcast. Twentieth-century print and broadcast media were not participatory media; the vast majority of people were audiences for the media rather than creators who had access to and used the media to communicate with others.

The 21st century model, by contrast, involves crowdsourcing and facilitating end user content. Social media host content made by large numbers of people, who are both creators and audiences for the content they produce.

If that’s so, what are social media's central functions in the public sphere? What is social media’s appropriate role? I argue that social media have three central functions:

First, social media facilitate public participation in art, politics, and culture.

Second, social media organize public conversation so people can easily find and communicate with each other.

Third, social media curate public opinion, not only through individualized results and feeds, but also through enforcing community standards and terms of service. Social media curate not only by taking down or rearranging content, but also by regulating the speed of propagation and the reach of content.

This last point bears elaboration. During the 20th century, newspapers and television also curated public discourse through the exercise of editorial judgment. They decided what content to commission in the first place and how to edit and convey the content they eventually produced. That meant that the content that circulated in these media was restricted and sanitized for mass audiences. One did not see pornography in The New York Times or advocacy of racial genocide on NBC because these companies had standards and professional norms about what they would publish or broadcast. These standards and norms, in turn, were backed up by legal requirements—for example, against defamation, obscenity, and indecency. Even so, 20th century media companies often limited speech far more than the law required.

Twentieth-century mass media set boundaries on permissible content, and created a certain kind of public conversation based on the expected interests and values of their audiences. Different players in different media and in different parts of society imposed different norms. Book publishers applied their own set of norms, motion picture companies had their own set of norms, the pornography industry (which encompassed both print and video) had its own norms, and so on. Generally speaking, daily newspapers and broadcast media applied norms of a hypothesized polite society judged appropriate for an imagined audience of average adults and their families. One could get access to more daring content elsewhere, for example in books and magazines, subject always to background legal constraints.

Social media also curate public discourse today. But instead of publishing their own content, they are publishing everyone else’s content. Like 20th century mass media, they apply a set of rules and standards about what kinds of content (and conversations) are permissible and impermissible on their sites. They impose a set of civility, safety, and behavioral norms for their imagined audience—different from 20th century newspapers, but nevertheless still quite constrained. Different social media enforce different norms. Like 20th century media, social media may limit speech far more than the law requires them to. Facebook, for example, limits nudity even when it is constitutionally protected. 2. See Adult Nudity and Sexual Activity, Facebook Community Standards, https://www.facebook.com/communitystandards/adult_nudity_sexual_activity [https://perma.cc/EFQ8-RGPW] (last visited Mar. 10, 2020).

Generally speaking, the free speech principle allows the state to impose only a very limited set of civility, safety, and behavioral norms on public discourse, leaving intermediate institutions free to impose stricter norms in accord with their values. This works well if there are many intermediate institutions. The assumption is that in a diverse society with different cultures and subcultures, different communities will create and enforce their own norms, which may be stricter than the state’s. I believe that a diversity of different institutions with different norms is a desirable goal for the public sphere in the 21st century too. But I also believe that there is a problem—no matter which century we are talking about—when only one set of norms is enforced or allowed. If private actors are going to impose norms that are stricter than what governments can impose, it is important that there be many different private actors imposing these norms, reflecting different cultures and subcultures, and not just two or three big companies. I will return to this point later on.

Now let me connect the three functions I mentioned—facilitating public participation, organizing public conversation, and curating public opinion—to the goals of a healthy, well-functioning public sphere. Why are these functions the key indicia of a well-functioning public sphere?

These functions are important because the public sphere is the institutional home of freedom of speech and it helps realize the values of freedom of expression. Free speech values help us understand whether the public sphere is functioning well or badly. If the institutional arrangements work well to facilitate these values, then we say that the public sphere is functioning well, and that it is healthy. But if institutional arrangements hinder these values, we should conclude that the public sphere is not functioning well.

Well, what are these values? There are at least three of them:

First, freedom of speech serves the values of political democracy. It enables democratic participation in the formation of public opinion. It helps to ensure (although it does not guarantee) that state power is responsive to the evolution of public opinion. And it helps to ensure (although it does not guarantee) that the public can become informed about issues of public concern. Thus the democratic political values are participation, responsiveness, and an informed public.

Second, freedom of speech helps to produce a democratic culture. A democratic culture is a culture in which individuals and groups can freely participate in culture and in the forms of cultural power that shape and affect them. 3. Jack M. Balkin, Cultural Democracy and the First Amendment, 110 Nw. U. L. Rev. 1053 (2016). Because cultural power is even more pervasive than state power, individuals need to have a way of participating in the construction and development of the cultures that constitute their identities and affect their lives. Freedom of speech allows widespread participation in the forms of meaning making that construct us as individuals. It gives people a chance to talk back to and shape the forms of cultural power that constitute them.

Third, freedom of speech helps promote (although once again, it does not guarantee) the growth and spread of knowledge. I use this formula instead of the familiar “marketplace of ideas” because the latter metaphor is misleading. The best way to develop and spread knowledge may not be through competition for acceptance in public opinion. Instead, in modern societies, the development and spread of knowledge depends on a host of disciplines, institutions, and public-regarding professions.

Social media perform their public functions well when they promote these three central values: political democracy, cultural democracy, and the growth and spread of knowledge. More generally, a healthy, well-functioning digital public sphere helps individuals and groups realize these three central values of free expression. A poorly functioning public sphere, by contrast, undermines political and cultural democracy, and hinders the growth and spread of knowledge.

Trusted and Trustworthy Intermediate Institutions

Here’s the next big idea: If you want to realize these values, you need more than a simple free speech guarantee like the American First Amendment. You need more than a legal norm that the state doesn't censor. You need more than the formal ability to speak free of government sanction. You need intermediate institutions that can create and foster a public sphere. Without those intermediate institutions, speech practices decay, and the public sphere fails.

A healthy system of free expression requires much more than non-censorship.

First, it requires knowledge institutions and knowledge professionals who produce and disseminate knowledge and opinion. Examples from the 20th century include newspapers and other media organizations, schools, universities, libraries, museums, and archives. Some of these may be run and/or subsidized by the state. But many of them will be privately owned and operated.

Second, you need lots of different institutions, and they can’t all be owned or controlled by a small number of people. They have to provide what Justice Hugo Black once called “diverse and antagonistic sources” of information. 4. Associated Press v. United States, 326 U.S. 1, 20 (1945). This is a famous formula in First Amendment law. But this formula is not just about having lots of different voices that disagree with each other. Rather it’s about having lots of different institutions for knowledge production and dissemination.

Third, these institutions have to have professional norms that guide how they produce, organize, and distribute knowledge and opinion. 5. See Robert C. Post, Democracy, Expertise, and Academic Freedom: A First Amendment Jurisprudence for the Modern State (2012) (arguing that professional and disciplinary norms for knowledge production are necessary to achieve the “democratic competence” necessary for democratic self-government).

Fourth, these intermediate institutions and professional groups can successfully do their job only when they are generally trustworthy and trusted. When intermediate knowledge producing institutions and professions are not trusted, the public sphere will begin to fall apart. Why will it begin to fall apart? Because no matter what your theory of free speech might be, realizing the values of free speech depends on the creation, curation, and dissemination of knowledge by intermediate institutions and professions that the public generally trusts. Without these trusted institutions and professions, the practices of free expression become a rhetorical war of all against all. Such a war undermines the values of political democracy, cultural democracy, and the growth and spread of knowledge that free expression is supposed to serve. Protection of the formal right to speak is necessary to a well-functioning public sphere. It is just not sufficient.

In a nutshell, that is the problem we are facing in the 21st century. We have moved into a new kind of public sphere—a digital public sphere—without the connective tissue of the kinds of institutions necessary to safeguard the underlying values of free speech. We lack trusted digital institutions guided by public-regarding professional norms. Even worse, the digital companies that currently exist have contributed to the decline of other trusted institutions and professions for the creation and dissemination of knowledge.

The irony is profound. Never has it been easier to speak, to broadcast to millions. Never has access to the means of communication been so inexpensive and so widely distributed. But without the connective tissue of trusted and trustworthy intermediate institutions guided by professional and public-regarding norms, the values that freedom of speech is designed to serve are increasingly at risk. Antagonistic sources of information do not serve the values of free expression when people don’t trust anyone and professional norms dissolve. InfoWars is an antagonistic source of information. Boy, is it antagonistic! But its goal is to destroy trust. Its goal is to get you to trust nobody. It reduces politics to tribalism and cultural participation to warfare. It reverses and undermines the spread and growth of knowledge.

Diverse Affordances, Value Systems, and Innovations

To achieve a healthy and vibrant public sphere, we also need many different kinds of social media with many different affordances, and many different ways to participate and make culture. Thus, it is important to have Facebook and YouTube and TikTok and Twitter, and many other kinds of social media applications as well. Moreover, these applications can't be owned or controlled by the same companies.

Diversity of affordances and control is important for three reasons. First, you don’t want one set of private norms governing public discourse. Ideally, different social media will set their own community standards and values, even if they overlap to some degree. Second, you want many players because you want continuous innovation. Third, you want many different kinds of social media because different affordances make culture richer and more democratic.

So in addition to "diverse and antagonistic sources of information" we should want "diverse affordances, value systems, and innovations." But, as I said before, “diverse and antagonistic” is not enough. Social media also need to become trusted mediating institutions guided by professional norms. They have to become trusted and trustworthy organizers and curators of public discourse. They aren’t now.

One might object: won’t network effects doom the goal of a world with many different kinds of social media? Won’t people gravitate to one social media application because everyone else they know is already using it?

The answer is no. Many people currently use many different social media applications, not a single one. They belong to several communities and their usage changes over time. There are several reasons for this.

First, social media have different affordances and people use social media for many different purposes. One can be a member of Facebook and still use YouTube or TikTok. If we encourage diversity of affordances, we will also encourage diversity of use.

Second, people may use different social media more or less frequently and move to new social media as they get older or as their tastes and needs change. Younger people may move to different social media than their parents and grandparents. We have already seen generational migration from MySpace to Facebook and from Facebook to Snapchat and TikTok.

Third, people may link content from one social media site to others; in a tweet, for example, they may link to a YouTube video or a Spotify playlist.

Social media have incentives to allow people to belong to multiple sites because they want people to switch to their application. Moreover, because they want to be useful (and perhaps even indispensable) to end users, they also have incentives to allow links to other parts of the internet, including other social media. Regulation can encourage this kind of openness, too. If we promote innovation among social media companies, with many different kinds of affordances, network effects will not prevent a larger number of players than we currently have.

The Limits of Economic Incentives

So far I’ve offered a set of ideals to aim at. I’ve told you what a healthy digital public sphere would look like. And I’ve told you what kinds of institutions we might need.

But it’s pretty obvious, when we turn to the real world, that social media are not living up to their appropriate roles in the digital public sphere.

Why? Well, social media are driven by market incentives. In fact, sometimes they are so big that they make their own markets. So economic incentives or profit motives are probably more accurate terms than market incentives. The largest social media are less subject to market discipline than other firms; and lack of competition is one important reason why social media don’t live up to their social function in the digital public sphere. Yet it is only one part of the problem.

Economic incentives may be necessary for a healthy public sphere, but they will not be sufficient. Here is why: Free expression and the production of knowledge goods produce both positive and negative externalities. That is, they produce benefits and harms that can’t be completely captured by ordinary market transactions. The result is that markets—even perfectly functioning competitive markets—will overproduce the harms of free expression and under-produce the goods of free expression. And this is true whether media goods are financed through advertising, subscription, or pay services.

Whatever your theory of free expression is, market competition won't produce the kind of culture and knowledge necessary for democratic self-government, democratic culture, or the growth and spread of knowledge. Markets will under-produce the kinds of speech and knowledge goods that support political and cultural democracy; they will under-produce the kinds of institutions that will reliably discover and spread knowledge. Conversely, market incentives will overproduce conspiracy theories and speech that undermines democratic institutions. When social media are dominated by a small number of powerful economic actors, their incentives are not much better.

Economic incentives are not the same thing as professional norms and they may come into conflict with and undermine professional norms.

And today, economic incentives for social media companies promote distrust, not trust. They undermine professional norms for the production of knowledge rather than support them.

Then add the fact that all of this takes place on the internet. The internet is just a big machine for destroying professional norms.

It's not surprising that social media have failed at the task I just set out for them. For one thing, they are still very new. Facebook is only a decade and a half old. Google is only 20 years old. They emerged as profit-making technology companies, and only later came to understand themselves as media companies. They were brought to this realization kicking and screaming all the way, through continuous and sustained public pressure.

And yet this is the direction they must travel. Social media companies have to become key institutions for fostering a healthy public sphere. They can't just serve economic incentives. They have to adopt public-regarding professional norms related to the important public function that they serve in the digital public sphere.

By analogy, think about journalism. It also serves a crucial role in the public sphere because it informs the public and sets agendas for public discussion. If the professional norms of journalism are weakened or destroyed and the practice of journalism becomes solely market driven, journalism will make the public sphere worse, not better. It will choose stories and treatments that increase polarization, tribalism, and social distrust, and it will generate or help spread propaganda and conspiracy theories.

In fact, social media have multiple roles to play in the digital public sphere.

First, social media companies are important players in many different kinds of regulation. Public-private cooperation is necessary for dealing with, among other things, terrorist recruitment, foreign interference in elections, campaign finance violations, and child pornography.

Second, huge digital communities create special problems of personal safety, threats, and abuse. Some countries present special problems of state propaganda and genocidal speech campaigns.

Third, the need for content moderation creates problems of scale. Content moderation that is simultaneously quick, accurate, and at scale is hard to achieve. Accuracy requires increasing the number of moderators (either through hiring or contracting out to other firms) at numbers far greater than most social media companies would like; it also requires treating content moderators much better than they are currently treated by their employers. 6. Sarah T. Roberts, Behind the Screen: Content Moderation in the Shadows of Social Media (2019); Casey Newton, The Trauma Floor: The Secret Lives of Facebook Moderators in America, The Verge (Feb. 25, 2019), https://www.theverge.com/2019/2/25/18229714/cognizant-facebook-content-moderator-interviews-trauma-working-conditions-arizona In fact, social media companies often rely on complaints by end users, civil society organizations, and government actors to spot violations of their terms of service. Because moderation is costly to do well, social media companies have economic incentives to drag their heels.

Misaligned Incentives

Are there incentives for social media to become trustworthy institutions that protect and foster the digital public sphere? Sadly, not as they are currently constituted.

Social media companies have been slow to solve the problems they create. Social media companies have viewed themselves primarily as technology companies that make money through digital surveillance that enables advertising. Their goal is to get bigger and bigger, and to expand their user base so they can serve more ads and make more money.

The 20th century public sphere was also partly funded through advertising. But its problems were a bit different, because you didn't have modern methods of data collection and behavioral advertising. Also 20th century media had greater professional and economic incentives to be trustworthy, even if they were hardly perfect and tended to be too passive and apologetic.

Advertising (and therefore data collection and manipulation) is central to the problems that social media create for the digital public sphere. There are three reasons for this.

First, the attention economy generates perverse effects. It encourages companies to highlight the kind of content that keeps viewers’ attention. This content is less likely to be informative or educational, and more likely to be false, demagogic, conspiratorial, and incendiary, and to appeal to emotions such as fear, envy, anger, hatred, and distrust.

Second, Facebook and Google serve both as advertising brokers and as the major market for ads. They are a digital advertising duopoly.

Third, Facebook and Google have dried up revenues for newsgathering organizations, which receive an ever-smaller share of ad revenue or must settle for crumbs from Facebook’s and Google’s table. The internet has created news deserts for local news and increased incentives for consolidation of media organizations into a handful of large companies. Put another way, one side effect of market incentives has been undermining other public sphere institutions—in particular, journalism—and the advertising-based business models that have traditionally sustained journalism.

Economic incentives have driven Facebook and Google to grow ever larger and to buy up as many potential competitors as possible. But a well-functioning digital public sphere should have many social media companies, not just a few, because:

  • you don't want a monoculture of content moderation;
  • having lots of different players in different parts of the world partly eases problems of scale in moderation;
  • many players make it harder for foreign governments to hijack elections;
  • many players may be better for innovation; and
  • many players are harder for governments to co-opt.

To all of these we should add a sixth reason tied to the dangers of surveillance capitalism. 7. Shoshana Zuboff, The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power (2019). Facebook’s and Google’s control over digital advertising is made possible by their ability to collect and aggregate enormous amounts of end user data, more than any other company. The more data Facebook and Google are able to collect, the better their predictive algorithms, the more powerful their ability to nudge and influence end users, and the better their ability to corner the market on digital advertising. That is why it is profitable for Facebook and Google to buy up so many different kinds of companies and applications, each of which collects data in different ways. More data means more power.

If there are many different social media companies, none will have the same dominance and control over the collection and analysis of end user data. None will have the same power to manipulate and influence end users, and none will be able to corner the market for digital advertising. Having more players diffuses and decentralizes power over the collection and control of data, over digital advertising markets, and over end users, who are the objects of surveillance, influence, and manipulation.

Public Provisioning and Public Utilities

Many people have suggested public provisioning—state-run social media—as a solution to the problems of social media. Others have suggested turning social media companies into public utilities.

Let’s start with public provisioning. Certainly one way to provide public goods that the market fails to provide adequately is to have government provide them. That’s what we do with state universities and what many countries do with public broadcasting.

But unlike state universities and public broadcasters like the BBC, you really don’t want governments to provide social media services:

First, if social media companies are treated as state actors and have to abide by existing free speech doctrines—at least in the United States—they will simply not be able to moderate effectively. Facebook’s and Twitter’s community standards, for example, have many content-based regulations that would be unconstitutional if imposed by government actors. Even if one eliminated some of these rules, the minimum requirements for effective online moderation would violate the First Amendment.

Second, content moderation does not give speakers final judicial determinations (with full Bill of Rights protections) of whether their speech is protected or unprotected. Therefore content moderation has many of the same problems as administrative prior restraints. The standard remedies for violating community standards and/or terms of service include removing an end user’s content and banning the end user from the community. Some of these remedies would probably violate American free speech doctrine, including the rule against prior restraints. If A defames B in a public park, for example, a court could not forbid A from ever speaking in the park again. 8. See Balkin, Free Speech Is a Triangle, supra note 1, at 2025–27.

Third, and relatedly, many people are concerned about the propagation of false and misleading political advertisements and political propaganda on social media. They want social media companies to take down this speech or prevent it from being used in targeted political messages and ads. But if that is your concern, the last thing you would want to do is make social media state actors, because state actors are severely constrained in how they can sanction political speech, even false political speech. And, once again, even when state actors may sanction political speech, they must first afford the speaker the full panoply of Bill of Rights protections and a final individualized judicial determination before they can act. These requirements are simply inconsistent with the speed and scale of social media content moderation.

Fourth, if you think that surveillance capitalism is bad, there are even more serious problems of government surveillance and data manipulation when governments run your social media company.

Fifth, and relatedly, governments running social media services would create enormous risks of facilitating government propaganda and the use of end user data to engage in targeted influence campaigns.

Sixth, and finally, governments may not be particularly good at innovation. And they will not be very good at facilitating a diverse set of affordances, values, and innovations.

Another approach is to turn social media and search engines into privately owned public utilities. 9. K. Sabeel Rahman, Regulating Informational Infrastructure: Internet Platforms as the New Public Utilities, 2 Geo. L. Tech. Rev. 234 (2018).

It is not clear that social media fit the traditional model of public utilities very well. The classic examples of public utilities are companies that provide water, telephone services, and electrical power. The standard reasons for making a company a public utility are to control price, to secure universal access, and to assure the quality of continuous service. But with social media, the service is free, access is universal, and continuous service is almost always provided—in part because companies want as much of end users’ attention as they can get. If the real goal of treating social media as public utilities is to prevent discrimination in content moderation, then one faces the same problems as state-run social media.

Probably the best justification for a public utility model is to fundamentally change the business model of social media companies. Once converted into public utilities, social media companies would give up advertising altogether and simply provide access and content moderation services in return for a fixed monthly subscription fee. (They might still be allowed to run ads, but the ads could not be targeted.) This arrangement would have to be combined with strict limits on collection, collation, and sale of end user data. That is because the mere fact that subscription services don’t serve you ads doesn’t mean that they respect your privacy or are not attempting to manipulate you; they might continue to collect end user data and sell it to other companies or use it for other purposes.

It may well be a good idea to have some subscription-based social media services in a larger mix of social media companies that rely on advertising. These social media companies would be a sort of "public option" that people who want extra privacy protections could use as an alternative to free services. But the public utility model is not a general solution to the problems of the digital public sphere. Converting all large social media companies into public utilities does not solve the problems I mentioned above, because it does not provide diverse affordances, value systems, and innovations. Quite the contrary: converting social media companies into public utilities appears to concede that there will only be—and perhaps should only be—a relative handful of social media companies. The more important focus of regulation, therefore, should be on antitrust, privacy, and consumer protection regulation, as I explain below.

To Change Incentives, Change Business Models

I expect that most social media companies will continue to be privately owned and operated, and they will still rely on advertising models. If so, how is it possible to push privately owned social media companies to fulfill their proper social function?

We are slowly inching toward this approach. Social media companies already assert in their public relations materials that they have obligations to the public. They state that they understand that their businesses depend on public trust. They acknowledge that it is their goal to protect end user autonomy, enhance democracy, and facilitate free speech. They make similar claims in their terms of service and community standards. Whether social media companies actually live up to these claims is more complicated. That is because social media companies are not really willing to give up control of their “crown jewels”: business models based on data collection, behavioral advertising, and other aspects of surveillance capitalism.

Public pressure and media coverage of social media companies can push them, at the margins, to behave as more responsible curators of public discourse. (To be sure, public pressure can also push social media companies to be irresponsible and arbitrary.) This sort of pressure is important because social media companies don’t want to lose their base of end users. But regulation is also necessary.

Facebook's Oversight Board for Content Decisions is yet another strategy for generating public trust: it attempts to establish a kind of legitimacy for the company's content moderation decisions. Facebook hopes to use the model of a supreme court—complete with cases, judges, and decisions—to establish that Facebook is a trustworthy, public-regarding institution.

I have no objection to the Board in theory. We should encourage every reform that gives social media companies incentives to act in a public-regarding fashion. As currently imagined, however, the Oversight Board won't be able to do very much. It will consider only a tiny fraction of the content moderated on Facebook in a given year. More importantly, it will have no jurisdiction over Facebook’s crown jewels: the company's system for brokering advertisements, its behavioral manipulation of end users, and its practices of data surveillance, collection, and use. For this reason, there is a very real danger that the Oversight Board will prove to be little more than a digital Potemkin Village—a prominent display of public-spiritedness that does nothing to address the larger, deeper problems with social media.

The logic of social media business models will tend to overcome any public statements of ideals, good will, and promises of good behavior. This has happened over and over again. Facebook’s history as a company has been a cycle of engaging in bad behavior, getting caught, apologizing profusely and promising to mend its ways, followed by the company engaging in slightly different bad behavior, offering new apologies and promises of reform, and so on. 10 Facebook will keep misbehaving and it will keep apologizing, not because it is incompetent or clumsy, but because of a fundamental misalignment of incentives between its goals and the public’s needs, and because it has an inherent conflict of interest with its end users and, indeed, with democracy itself.

Social media companies will behave badly as long as their business models cause them to. Profit-making firms like Facebook will normally seek to externalize as many of the costs of their activities as possible, so that those costs are borne by society. Their business models don’t care about your democracy.

How do you make social media companies responsible participants in the digital public sphere? First, you must give them incentives to adopt professional and public-regarding norms. Second, you must make them internalize some of the costs they impose on the world around them.

There are no complete, perfect solutions. But we can make progress in incremental steps.

Before I discuss reform strategies, however, there is an important threshold question: Can the U.S. do this on its own? After all, anything we do in the U.S. will be affected by what other countries and the EU do. Today, the EU, China, and the U.S. collectively shape much of internet policy. They are the three Empires of the internet, and other countries mostly operate in their wake. Each Empire has different values and incentives, and each operates on the internet in a different way. I could write an entire essay just on these problems.

Models for Regulation

In the remainder of this essay, however, I will assume that the U.S. government—and the 50 state governments—can do something on their own. If so, what kinds of regulation should the U.S. consider?

First, don't rush to impose direct regulation on social media moderation practices. Requiring "neutrality" in content moderation is a non-starter. As I explained earlier, neutrality should apply lower down in the stack—to basic internet services—and to payment systems. One of the ironies of the current policy debate is that the very politicians who call for neutrality in content moderation have been most opposed to requiring neutrality where it is most needed—in basic internet services such as broadband.

Social media platforms must engage in content moderation. They may do it badly or well, but they will have to do it nevertheless. 11 Accordingly, governments should respect social media’s role as curators and editors of public discourse. Respecting that role means that social media should have editorial rights, which are a subset of free speech rights.

The goal of regulation is not to achieve an illusory neutrality in social media content moderation. Rather the goal is to shape the organization and incentives of the industry to better achieve public ends.

First, the goal should be to increase the number of players, so there can be many different companies, communities, affordances, and editorial policies.

Second, the goal should be to give social media companies incentives to professionalize and take responsibility for the health of the public sphere.

We can regulate social media using three policy levers.

  • Antitrust and competition law
  • Privacy and consumer protection law
  • Balancing intermediary liability with intermediary immunity

Properly structured, none of these policy levers violate free speech values or the First Amendment.

Whatever we do, it is important to keep regulatory burdens manageable. If you make the regulatory burdens too great, you can create barriers to entry for new social media firms, which defeats the regulatory purpose of achieving a wide range of social media companies with different rules, affordances, and innovations.

Let me talk about antitrust, privacy, and intermediary liability in turn. The discussion that follows will be broad-brush and pitched at a high level of abstraction. I emphasize at the outset that you need all three of these policy levers to succeed. You can’t rely on just one. For example, if you don't use antitrust and competition law, you will have to regulate more heavily in other ways.

Moreover, there are some kinds of problems that privacy law can’t fix and for which antitrust law is required; conversely, there are problems that antitrust law can’t fix that require privacy and consumer protection law. For example, even if you create many different Facebooks and Googles, each will still be practicing their own forms of surveillance capitalism. You will still need privacy and consumer protection regulations to keep these smaller companies from manipulating and/or abusing the trust of end users.

Antitrust and Competition Law

In competition policy, the goal is not simply separating existing social media services owned by a single company, for example, separating Facebook from Instagram and WhatsApp or YouTube from Google. Rather, there are three interlocking goals.

First, competition policy should aim at producing many smaller companies, with different applications, communities, and norms. You might think of this as a sort of social media federalism.

Second, competition policy should seek to prevent new startups from being bought up early. This helps innovation. It prevents large companies from buying up potential competitors and killing off innovations that are not consistent with their current business models.

Third, competition policy should seek to separate different functions that are currently housed in the same company. This goal of separation of functions is different from a focus on questions of company size and market share.

For example, Facebook and Google are not just social media companies, they are also advertising agencies. They are both Don Draper and NBC. They match companies who want to advertise with audiences they create, and then they serve ads to end users on their social media feeds and applications.

Hence, competition policy might seek to separate control over advertising brokering from the tasks of serving ads, delivering content, and moderating content. Each of these functions is currently housed in a single company, but some of these tasks could be performed by different companies, each in a separate market.

Conversely, we might want to relax antitrust rules to allow media organizations to collectively bargain with social media companies for advertising rates and advertising placements.

I use the term competition law in addition to antitrust law for a reason. In the United States, at least, antitrust law generally refers to the judicial elaboration of existing antitrust statutes. But in dealing with the problems that social media create for the public sphere, we should not limit ourselves simply to elaborating the current judge-made doctrines of antitrust law, which focus on consumer welfare. Even if we expand the focus of antitrust law to the exercise of economic power more generally, competition law has other purposes besides fostering economic competition, economic efficiency, and innovation. In telecommunications law, for example, media concentration rules have always been concerned with the goal of protecting democracy, and with the goal of producing an informed public with access to many different sources of culture and information. Existing judge-made doctrines of antitrust law might not be the best way to achieve these ends, because they are not centrally concerned with these ends. We might need new statutes and regulatory schemes that focus on the special problems that digital companies pose for democracy.

Privacy and Consumer Protection

I have written a great deal about how we might rethink privacy in the digital age and I won’t repeat all of my arguments here. 12 My central argument is that we should use a fiduciary model to regulate digital companies, including both social media companies and basic internet services that collect end user data. A fiduciary model treats digital companies that collect and use data as information fiduciaries toward the people whose data they collect and use.

Information fiduciaries have three basic duties towards the people whose data they collect: a duty of care, a duty of confidentiality, and a duty of loyalty. The fiduciary model is not designed to directly alter content moderation practices, although it may have indirect effects on them. Rather, the goal of a fiduciary model is to change how digital companies, including social media companies, think about their end users and their obligations to their end users. Currently, end users are treated as a product or a commodity sold to advertisers. The point of the fiduciary model is to make companies stop viewing their end users as objects of manipulation—as a pair of eyeballs attached to a wallet, captured, pushed, and prodded for purposes of profit.

This has important consequences for how companies engage in surveillance capitalism. If we impose fiduciary obligations, even modest ones, business models will have to change, and companies will have to take into account the effects of their practices on the people who use their services.

The fiduciary model is designed to be flexible. It can be imposed by statute, through administrative regulation, or through judicial doctrines. Fiduciary obligations are one important element of digital privacy and consumer protection but they are not sufficient in and of themselves. Moreover, fiduciary obligations must work hand in hand with competition law, because each can achieve things that the other cannot.

Intermediary Liability

One of the central debates in internet law is whether and how much intermediary liability states should impose, and conversely, whether states should grant some form of intermediary immunity. In general, I believe that intermediary immunity is a good idea, and some (but not complete) intermediary immunity is actually required by the free speech principle.

Because the current broad scope of intermediary immunity is not required by the First Amendment or the free speech principle more generally, governments should use the offer of intermediary immunity as a lever to get social media companies to engage in public-regarding behavior. In particular, one should use intermediary immunity as a lever to get social media companies to accept fiduciary obligations toward their end users.

Governments might also condition intermediary immunity on accepting obligations of due process and transparency. Social media companies currently have insufficient incentives to invest in moderation services and to ensure that their moderators are treated properly. In some cases, governments might be able to regulate the provision of moderation services through employment and labor law (although there are a few free speech problems with media-specific regulations that I can’t get into here). But governments should also create incentives for platforms to invest in increasing the number of moderators they employ as well as providing more due process for end users. They should also require companies to hire independent inspectors or ombudsmen to audit the company’s moderation practices on a regular basis. 13 In short, I don’t want to scrap intermediary immunity. I want to use it to create incentives for good behavior.

Although the general rule should be intermediary immunity, governments may partially withdraw intermediary immunity and establish distributor liability in certain situations. Distributor liability means that companies are immune from liability until they receive notice that content is unlawful. Then they have to take down the content within a particular period of time or else they are potentially vulnerable to liability (although they may have defenses under substantive law).

First, governments might employ distributor liability for certain kinds of privacy violations; the most obvious example is non-consensual pornography, sometimes called “revenge porn.”

Second, governments might establish distributor liability for paid advertisements. The basic problem of intermediary liability—and the reason why intermediary immunity is a good thing—is the problem of collateral censorship. Because companies can’t supervise everything that is being posted on their sites, once they face the prospect of intermediary liability they will take down too much content, because it is not their speech and they have insufficient incentives to protect it. This logic does not apply in the same way, however, for paid advertisements. Companies actively solicit paid advertisements—indeed, this is how social media companies make most of their money. As a result, even with distributor liability, companies still have incentives to continue to run ads. These incentives lessen (although they do not completely eliminate) the problems of collateral censorship. Note that the rule of distributor liability is still more generous than the rule of publisher liability that currently applies to print media advertisements.

This approach does not require us to distinguish between commercial advertisements and political advertisements. Nor does it require us to distinguish between issue ads and ads that mention a particular candidate. The on/off switch is simply whether the company accepts advertising. This rule leaves matters up to the company to decide how best to handle advertising, which is, after all, the core of its business. Twitter has recently announced that it will no longer accept political advertisements. 14 Facebook’s policies are more complicated and currently in flux. Facebook does take down paid political ads that lie about polling times and places. But it will not take down other false political ads, even when Facebook knows that they are false. 15

Facebook’s case is instructive for how to think about the problem. Facebook argues that it does not want to be the arbiter of public discourse. In fact, it already is the arbiter of public discourse worldwide; moreover, as I’ve argued above, its proper function as a social media company is to serve as a curator of public discourse. Facebook well understands this: it takes down lies about election dates and polling places; and it bans abusive and dehumanizing speech that would otherwise be protected under the First Amendment. It is true that policing political advertisements poses genuine problems of scale: Facebook would have to take down ads not only for federal elections in the U.S., but for every state and local government election, and for every election around the world. However, Facebook already invests in moderating a far larger class of non-advertising speech around the world. So it would have to show why moderating the far smaller class of advertisements—which are marked and inserted into end users’ feeds as advertisements—is significantly more difficult.

The real reasons why Facebook has decided not to take down false political ads are somewhat different, and they better explain Facebook’s incentives to host political ads. That is important because, as noted above, distributor liability is less troublesome from a free speech perspective when companies have independent incentives to protect certain speech and prevent it from being removed.

First, Facebook probably resists taking down false political advertisements because it makes money from these ads, perhaps more money than it lets on. It is, after all, an advertising company, and unless the law imposes costs for running advertisements, each advertisement adds to its bottom line. But political advertising is only a small fraction of its business, and so ad revenue is probably not the central motivating factor behind Facebook’s policies. A second and more important reason is that Facebook does not want to anger the politicians who place political ads, and who might be motivated to regulate or break up the company. Regulation or breakup might truly threaten Facebook’s revenues.

Third, Facebook is in the influence business. Serving political ads keeps Facebook connected to important politicians and political actors around the world and thereby increases the company’s power and political influence. That is one reason—although certainly not the only reason—why Facebook treats important political figures differently than ordinary individuals, and keeps up postings that would otherwise violate its community standards or terms of service if made by ordinary individuals. 16 Facebook believes that people want to know what these important figures think; but more importantly, it wants to be the conduit for people to hear what these important people have to say. It also wants to stay on the good side of powerful people who might someday threaten its business. Because Facebook has incentives to solicit, attract, and keep up political advertisements, including knowingly false political advertisements, imposing distributor liability for all advertisements will give Facebook better incentives than it currently has.

The thesis of this essay is that you shouldn’t regulate social media unless you understand why you want to regulate it.

We should regulate social media because we care about the digital public sphere. Social media have already constructed a digital public sphere in which they are the most important players. Our goal should be to make that digital public sphere vibrant and healthy, so that it furthers the goals of the free speech principle—political democracy, cultural democracy, and the growth and spread of knowledge. To achieve those ends, we need trustworthy intermediate institutions with the right kinds of norms. The goal of regulation should be to give social media companies incentives to take on their appropriate responsibilities in the digital public sphere.

Note: This essay was originally delivered as the keynote address of the Association for Computing Machinery Symposium on Computer Science and Law, New York City, October 28, 2019.

© 2020, Jack M. Balkin. 

Cite as:  Jack M. Balkin, How to Regulate (and Not Regulate) Social Media , 20-07 Knight First Amend. Inst. (Mar. 25, 2020), https://knightcolumbia.org/content/how-to-regulate-and-not-regulate-social-media [ https://perma.cc/TE5Q-E7XV ].

1 Jack M. Balkin, Free Speech Is a Triangle , 118 Colum. L. Rev. 2011, 2037–40 (2018).

2 See Adult Nudity and Sexual Activity, Facebook Community Standards, https://www.facebook.com/communitystandards/adult_nudity_sexual_activity [ https://perma.cc/EFQ8-RGPW ] (last visited Mar. 10, 2020).

3 Jack M. Balkin, Cultural Democracy and the First Amendment, 110 Nw. U. L. Rev. 1053 (2016).

4 Associated Press v. United States, 326 U.S. 1, 20 (1945).

5 See Robert C. Post, Democracy, Expertise, and Academic Freedom: A First Amendment Jurisprudence for the Modern State (2012) (arguing that professional and disciplinary norms for knowledge production are necessary to achieve the “democratic competence” necessary for democratic self-government).

6 Sarah T. Roberts, Behind the Screen: Content Moderation in the Shadows of Social Media (2019); Casey Newton, The Trauma Floor: The Secret Lives of Facebook Moderators in America, The Verge (Feb. 25, 2019), https://www.theverge.com/2019/2/25/18229714/cognizant-facebook-content-moderator-interviews-trauma-working-conditions-arizona

7 Shoshana Zuboff, The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power (2019).

8 See Balkin, Free Speech Is a Triangle , supra note 1, at 2025–27.

9 K. Sabeel Rahman, Regulating Informational Infrastructure: Internet Platforms as the New Public Utilities , 2 Geo. L. Tech. Rev. 234 (2018).

10 See Zuboff, supra note 7, at 138–55 (describing the “Dispossession Cycle”).

11 Tarleton Gillespie, Custodians of the Internet: Platforms, Content Moderation, and the Hidden Decisions That Shape Social Media (2018).

12 See Jack M. Balkin, Information Fiduciaries and the First Amendment , 49 U.C. Davis L. Rev. 1183, 1209 (2016); Jack M. Balkin, The Three Laws of Robotics in the Age of Big Data , 78 Ohio St. L.J. 1217, 1228 (2017); Balkin, Free Speech Is a Triangle , supra note 1; Jack M. Balkin, The First Amendment in the Second Gilded Age , 66 Buffalo L. Rev. 979 (2018); Jack M. Balkin, Fixing Social Media’s Grand Bargain (Hoover Working Group on National Security, Technology, and Law, Aegis Series Paper No. 1814, October 16, 2018), https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3266942 [ https://perma.cc/SC32-HLTA ].

13 See Tarleton Gillespie, Platforms Are Not Intermediaries , 2 Geo. L. Tech. Rev. 198, 214–16 (2018).

14 Political Content , Twitter, https://business.twitter.com/en/help/ads-policies/prohibited-content-policies/political-content.html [ https://perma.cc/HRN2-Z4QA ] (last visited Mar. 10, 2020).

15 Rob Leathern , Expanded Transparency and More Controls for Political Ads , Facebook Newsroom (Jan. 9, 2020), https://about.fb.com/news/2020/01/political-ads/ [ https://perma.cc/S8YE-BFUL ]; Q&A on Transparency for Electoral and Issue Ads , Facebook Newsroom (May 24, 2018), https://about.fb.com/news/2018/05/q-and-a-on-ads-transparency/ [ https://perma.cc/5LWC-CUBP ].

16 See Thomas Kadri & Kate Klonick, Facebook v. Sullivan: Public Figures and Newsworthiness in Online Speech , 93 S. Cal. L. Rev. 37 (2020).

Jack M. Balkin is Knight Professor of Constitutional Law and the First Amendment at Yale Law School.

IMAGES

  1. Government Should Regulate Internet Content Essay

    government should regulate internet usage essay

  2. ⇉Government regulation of the Internet Essay Example

    government should regulate internet usage essay

  3. Should the Government Regulate the Internet More Strictly

    government should regulate internet usage essay

  4. Government Control on Internet Usage Essay Example

    government should regulate internet usage essay

  5. Government Intervention of the Internet Essay Example

    government should regulate internet usage essay

  6. Internet Regulation: Should Federal Government be Allowed to Regulate

    government should regulate internet usage essay

COMMENTS

  1. Social Media Should be Regulated

    Germany now regulates social media content via the Network Enforcement Act, aka NetzDG, by mandating that social media providers comply with government guidelines on blocking hate speech, defamation, and other illegal content. Fines go up to $56 million per violation. 4. The government should continue to provide research funding for private ...

  2. It's Time for Government to Regulate the Internet

    Three Cheers for Regulation. During the Industrial Revolution, labor organizations, social movements, the media, and government came together to rein in big business, providing lessons on how to regulate firms of today like Facebook, Amazon, and Google, writes SSIR' s editor-in-chief in an introduction to the Summer 2019 issue.

  3. Should the Government Regulate Social Media?

    Government regulation to prevent the spread of misinformation and disinformation is neither desirable nor feasible. It is not desirable because any process developed to address the problem cannot be made immune to political co-optation. Nor is it feasible without significant departures from First Amendment jurisprudence and clear definitions of ...

  4. Internet Regulation: The Responsibility of the People

    There, the Chinese government has used its internet regulation to track protestors and block access to helpful internet applications. Apple, for instance, has removed an application used by protestors to track police movements from their internet store. ... Big Data, Surveillance, and the Tradeoffs of Internet Regulation This essay written by ...

  5. Should the Government Regulate the Internet?

    Skeptics of net neutrality argue that the government is poorly suited to regulate such a vast and changing communications tool. Further, providing internet access is a costly business for ISP's, and businesses who provide and innovate valuable services should be reworded for their work. Net neutrality, in their view, harms economic prosperity ...

  6. Are We Entering a New Era of Social Media Regulation?

    The violence at the U.S. Capitol — and the ensuing actions taken by social media platforms — suggest that we may be at a turning point as far as how business leaders and government bodies ...

  7. The Internet: To Regulate Or Not To Regulate?

    Therefore, regulation of the internet will only be possible with the cooperation of multiple government agencies, private sector companies, and end users, particularly when it comes to regulation ...

  8. Should the Internet Be Regulated?

    Dec. 1, 2017. The Federal Communications Commission's plan to roll back net neutrality has sparked intense debate; those in favor worry that deregulation would limit access to information in a ...

  9. Social media: How might it be regulated?

    forcing social networks to disclose in the news feed why content has been recommended to a user. limiting the use of micro-targeting advertising messages. making it illegal to exclude people from ...

  10. Social Media Regulation in the Public Interest: Some Lessons from

    For two decades after the courts struck down the Communications Decency Act in 1997, direct government regulation of the internet was a political third rail. ... The first section of this essay explores the growing interest in cross-applying the public interest standard from broadcasting to the internet. The second section recounts the history ...

  11. How much control should a government have over citizens' social

    Federal officials, according to the court, did run afoul of the First Amendment by coercing and significantly encouraging the social media platforms to censor disfavored speech. And they did this ...

  12. NEW REPORT: Global Battle over Internet Regulation Has Major

    State intervention must protect human rights online and preserve an open internet. The emancipatory power of the internet depends on its egalitarian nature. To counter digital authoritarianism, democracies should ensure that regulations enable users to express themselves freely, share information across borders, and hold the powerful to account.

  13. PDF How to Regulate (and Not Regulate) Social Media

    I believe that content regulation should occur higher up the stack, to borrow a familiar computer science metaphor. Instead, these businesses should concern themselves only with the legality or illegality of transactions. Government should require nondis-crimination—otherwise the public and politicians will place irresistible

  14. Should The Governments Regulate The Internet?

    Survey conducted by Internet Usage Statistics shows that over twenty-five percent of world population are internet users [11]. Because of the internet popularizing trend, the importance of Internet censorship has also risen. As a result, I believe that the Internet should be regulated by the Governments.

  15. Should Social Media Be Regulated: [Essay Example], 614 words

    Should Social Media Be Regulated. Social media has transformed the way we connect, communicate, and consume information. As this digital landscape evolves, concerns about the impact of social media on society have prompted debates about the need for regulation. In this essay, we explore the arguments surrounding whether social media should be ...

  16. Should the Internet Be Regulated?

    For legitimate purposes, some skeptics advocate for control of the internet, as it has become the world's most powerful tool yet the least regulated. To others, the internet does not require regulation because there is no consent among users to be governed, a position I agree with.

  17. Should the Internet Be Regulated? Essay

    However, regulating the Internet may pose challenges, such as hindering freedom of speech, the right to information, and the right to privacy. China, for example, has been criticized for its regulation of the net: the government has full control of the Internet, having managed to block and filter websites (Zheng, 2013).

  18. Whom to Protect And How: The Public, the Government, and the Internet

    Television ownership in the United States exploded from 6 percent in 1949 to 52 percent in 1953 to 83 percent by 1956. Still, the increase in computer use and, in the second wave, Internet use is ...

  19. Should Government Regulate The Internet Essay

    The federal government should not control information on the internet because doing so breaks the First Amendment and citizens' right to privacy, prevents the promotion of technology, and would foster civil unrest. One of the most important rights granted to American citizens was the right to free speech and freedom of the press.

  20. Why the Government Should Not Regulate Content Moderation of Social

    The exchange underlying social media thus implicates both commerce and fundamental rights. Some part of the protection for social media from government action derives from the protections accorded ...

  21. Why The Government Should Regulate Social Media

    Misinformation and Disinformation. One of the most compelling reasons for government regulation of social media is the rampant spread of misinformation and disinformation. The unchecked dissemination of false or misleading information on these platforms poses a significant threat to public discourse, democracy, and public health.

  22. How to Regulate (and Not Regulate) Social Media

    The goal of regulation should be to give social media companies incentives to take on their appropriate responsibilities in the digital public sphere. Note: This essay was originally delivered as the keynote address of the Association for Computing Machinery Symposium on Computer Science and Law, New York City, October 28, 2019. Printable PDF

  23. Essay About Government Should Regulate Internet Usage

    Government regulation of internet usage is crucial for several reasons. Firstly, it helps protect individuals from online threats such as cybercrime, fraud, and identity theft. Regulations can establish standards for data protection and enforce penalties for illegal activities, ensuring a safer online environment for users.