The technical complexity of the Internet makes choices around its regulation difficult. Users might not understand how private their interactions really are. And perhaps worse, politicians who don’t really understand the technology, or even the implications of their own policy, might attempt to exert political control over the Internet. Regardless of whether or not you agree that the Internet should be regulated in some way, I think most people would agree that anyone discussing these possibilities should at least understand the technology in play and how everyone would be affected by attempts at regulation.
The United States government is primarily credited with starting the Internet. But networking was happening all over the world, and the real problem was how to make everyone’s networks talk to each other. When CERN joined the Internet, it began to become a global distribution of networks rather than something purely American in nature. This is a critical concept going forward: the Internet’s distributed availability is a defining characteristic. For any governing body to gain control, this distributed nature would have to be overcome.
Safety, intellectual property, and net neutrality are often put forth as reasons for regulation. But ideas that all seem very reasonable on the surface have many drawbacks, and enforcement itself is a difficult task.
Fear of child pornography, or of children accessing content that is not age appropriate, has led to legislation like the Children’s Internet Protection Act. This particular law works reasonably well because it is enforced at the end-user level, namely at schools and libraries that accept federal discounts on communications services. School administrators can easily enforce a set of rules at the end-user network level.
Other safety concerns include fraud and malware attacks by malicious entities looking to steal money via the Internet. This area has largely avoided the need for regulation thanks both to standard “analog” enforcement of local fraud laws and to competition at the browser level, where vendors offer protection from attacks in an effort to gain market share. Browser vendors often offer bounties to hackers who find security holes as a way to publicize their ongoing security work.
Child predators have generally been dealt with by local law enforcement cooperatives. The anonymous nature of standard Internet communication makes it harder for predators to even identify targets – with many more cases of predators engaging undercover law enforcement officials than actual minors. And some research has shown these fears are not proportionate to the actual danger. But beyond this, the very nature of the public Internet makes predatory activity incredibly traceable. A predator offering candy out of a van in a dark alley stands a much better chance of evading law enforcement than one who leaves all sorts of digital evidence connecting himself to the potential victim.
Very much in the news of late is the debate on copyright, digital rights, and intellectual property. Producers of movies and music are angry that their creative output can be very easily copied and distributed without their participation. Until the Internet age, distribution was controlled and monetized by the producers. Because the market inventory was under their full control, producers could competitively leverage the various analog distribution channels. But the Internet is a ubiquitous distribution channel and digital copying software allows anyone to contribute to the market inventory.
This is going to be the hardest area to address, both in terms of regulation and in terms of recovering a pre-Internet model. Unlike the safety scenarios, tracking production of market inventory is difficult for law enforcement, so the early focus for content producers has been on distribution. The first major attempts at regulation included the Digital Millennium Copyright Act in the United States and the Copyright Directive in the European Union. The spirit of these regulations revolves around granting limited power to copyright holders in the form of takedown notices and the ability to go after distributors of technology intended to circumvent copyright protection.
The vagueness of the anti-circumvention provisions is where the majority of the problem resides. Several proposed American laws intended to make these definitions concrete, including SOPA and PIPA, along with the multinational trade agreement ACTA, have caused an uproar in the Internet community over the side effects of such legislation. The problem is that enforcement in every case has unintended consequences. File distribution can be not only completely harmless but extremely valuable outside any scope of copyright infringement. And file location through search engines or directories (the ways in which one would ostensibly find copyrighted content) also happens to be the premier feature of Internet navigation. This leads to obvious abuses of power and more unintended consequences.
Other forms of regulation include seizing the domain name of an offending site. Here lies another fundamental problem with regulation: businesses can be subject to many jurisdictions, but registration for all major generic top-level domains, including .com, .org, .net, and .biz (excluding country-code domains), happens within the United States. Does this mean the United States can shut down any website it feels is in violation of United States law, regardless of whether or not the business and website reside in the US? It seems so. But why would someone living in Japan care that their domain name is subject to takedown by the United States? Because domain name lookups are currently rooted in the US, a legacy of the Internet’s origins. But ISPs outside the US could resolve domains against a server outside the US if they wanted to. That is the nature of the Internet – it’s distributed. In its current form it cannot be regulated at the domain name level except through a country’s own ISPs.
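To make the point concrete: a DNS lookup is just a small packet sent to whatever resolver the client chooses, so nothing technically binds resolution to any one country’s servers. The sketch below builds a minimal DNS query packet by hand (the hostname and the resolver addresses in the comment are illustrative, and the packet layout follows the standard RFC 1035 format):

```python
import struct

def build_dns_query(hostname: str, query_id: int = 0x1234) -> bytes:
    """Build a minimal DNS query packet (RFC 1035) for an A record."""
    # Header: ID, flags (0x0100 = recursion desired), 1 question,
    # 0 answer / authority / additional records
    header = struct.pack(">HHHHHH", query_id, 0x0100, 1, 0, 0, 0)
    # Question name: each label is length-prefixed, terminated by a zero byte,
    # e.g. "example.com" -> b"\x07example\x03com\x00"
    qname = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in hostname.split(".")
    ) + b"\x00"
    # QTYPE=1 (A record), QCLASS=1 (IN, Internet)
    question = qname + struct.pack(">HH", 1, 1)
    return header + question

query = build_dns_query("example.com")
# The exact same packet could be sent over UDP port 53 to ANY resolver --
# one run by a US registry, a European ISP, or a server in Japan:
#
#   import socket
#   sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
#   sock.sendto(query, ("8.8.8.8", 53))  # or any other resolver address
```

Whichever resolver answers, the protocol is identical – which is why domain-level regulation only reaches users whose ISPs choose (or are compelled) to use the regulated resolvers.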
With Net Neutrality, the concern is that an ISP might deliver some content at a different quality than other content, setting up monopoly models in which ISPs control the flow of information to their users. Since ISPs are the last mile of the Internet connecting to users, and since they generally hold strong market positions, this could be leveraged by the ISPs themselves, by commercial partners of the ISPs, or even by governments looking to regulate speech or censor content. And in fact there have been cases of ISPs manipulating the flow of requested content.
The arguments against Net Neutrality regulation are akin to arguments against public entities like governments telling private businesses how to operate. Even so, in the current state of affairs in the United States there are good reasons to feel insecure about net neutrality, because end users have very little choice between ISPs; the communications industry as a whole is quite consolidated. In this opinion piece, Timothy Karr argues for going after this larger problem, empowering consumers with the choice to reject ISPs that frustrate user experiences.
I prefer leaving things alone and letting them grow organically. Internet safety doesn’t seem to be as large an issue as once thought. Intellectual property holders seem to be dealing with an inevitable paradigm shift they cannot control. And fears about net neutrality may mask other, more serious market issues. But maybe the biggest problem is simply education: most attempts to regulate seem to be made without truly understanding the situation in the first place. For my part, I’d much rather see groups like The Internet Blueprint and EFF lead an *educated* effort toward Internet standards.