* US law relative to data rights traces back to the American Constitution in the 18th century. With the development of modern communications technology from early in the 20th century, beginning with the telephone, data rights became increasingly important and troublesome -- with the modern computer revolution throwing the controversy into high gear.
* The roots of concerns over data rights go back millennia. In ancient times, for example, governments obviously believed they had a right to data security; the laws were inclined to take an unforgiving view of spies. Governments also were under no real restrictions on what they could say -- though leaders who spoke unwisely might end up regretting it -- and were enthusiastic about "information warfare", using propaganda against foreign enemies and enemies of the state.
It is hard to say much about the data rights of citizens in those times. Of course, there were laws against trespass and theft, but it appears that in most ancient cultures, the restraints on government that raised obstacles to invasion of the privacy of citizens, including privacy of data, were limited. There are hints of general privacy law from early on -- for example, the principle that "a man's house is his castle", stated with qualifications in Roman law and later enshrined in English common law -- along with hints of "intellectual property" protection, the right of reserved use of one's own creations.
Attempting to trace the roots of these concepts would be tricky and not particularly useful here. The useful starting point for the historical record of US law relative to general privacy, data rights, and data security -- derived from earlier English legal precedents, and not completely new -- was the 4th Amendment of the Bill of Rights in the US Constitution, which reads as follows:
BEGIN_QUOTE:
The right of the people to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures, shall not be violated, and no Warrants shall issue, but upon probable cause, supported by Oath or affirmation, and particularly describing the place to be searched, and the persons or things to be seized.
END_QUOTE
Notice that the 4th Amendment specifies the right of people to be "secure in their papers", which explicitly identifies a right to data security. The 4th Amendment doesn't grant an unlimited right -- there are no unlimited rights in the Constitution; papers can be obtained if law enforcement has a valid warrant. The essence of data security in the old days was that the authorities couldn't search a person or invade a house and seize papers without a warrant, nor could they similarly open the mails without a warrant. Warrants had to be issued against individuals; blanket warrants were not allowed. It should be noted that theft of private information wasn't really covered by privacy laws -- that was, in effect, just theft.
The Bill of Rights also includes the 1st Amendment, which protects rights of free expression: "Congress shall make no law ... abridging the freedom of speech ... " This again is a limited right, in that it doesn't grant a right to defamation -- that is, slander or libel. If Alice -- of the comedy duo Alice & Bob, useful for example purposes -- can prove Bob knowingly spread falsehoods about her and she was provably damaged as a result, Bob will be legally obligated to pay for the damages. The 1st Amendment of course does not grant the right to threaten or intimidate public figures, or anyone else for that matter. The 1st Amendment also does not grant any right to commit frauds.
Of course, democratic society is not compatible with unrestricted privacy, there also being a need for transparency. Public officials cannot conduct their business in secrecy, except to the extent that national security demands it, with that obligation for transparency filtering down, to a degree, to private individuals. If Bob is engaged in "trusted transactions" with, say, a bank, Bob is breaking the law if he fraudulently misrepresents significant facts, or attempts to conceal his identity by using forged identity documents; forgery is a crime. The Constitutional right to be "secure in their papers" does imply schemes for validating documents -- traditionally, by using seals, stamps, and signatures -- and the issuing of useful identification papers. There wasn't much constitutional fuss over validation early on, but it would become contentious much later.
In addition, the 5th Amendment of the US Constitution gives citizens on trial the right to refuse to incriminate themselves: "No person ... shall be compelled in any criminal case to be a witness against himself ... " This is a double-edged right, in that if Bob "pleads the 5th" he cannot speak in his own defense. If the case doesn't hang on his testimony, he is simply refusing to defend himself.
As far as intellectual property went, that was covered in the "Patents & Copyrights Clause" of the main text of the Constitution, which empowered Congress to "promote the progress of science and useful arts, by securing for limited times to authors and inventors the exclusive right to their respective writings and discoveries."
* The US Constitution was a landmark in the evolution of democracy, in general distilling much of the best of Enlightenment thinking. It was, however, a practical document, with major compromises -- most notably, carve-outs to accommodate if not endorse slavery in American states that kept slaves -- and was, of necessity, a brief and general document. It invoked broad principles, but what would they amount to in practice? The document had to be read as a whole, and its elements balanced against each other.
There were many practical questions that it did not answer, in some cases because the questions weren't being asked at the time. For example, did a citizen have a right to encrypt data? Could citizens be compelled by the authorities to decrypt it? The Constitution said nothing in particular about what is now called "crypto" technology. It also said nothing specific about foreign spying, since the right of the state to keep secrets was unquestioned, with the right to deal with spies broadly covered under the war powers clauses of the Constitution.
The emergency powers of the government sometimes overrode the Bill of Rights, most notably during the US Civil War, World War I, and World War II, with freedom of expression suppressed in the name of national security. The legal baseline was established by the "Espionage Act of 1917", which followed up earlier legislation to establish harsh penalties for spying. A follow-up "Sedition Act of 1918" prohibited publishing subversive literature, or otherwise attempting to subvert the government. A number of cases followed, most notably the 1919 case of SCHENCK V. UNITED STATES, in which the Supreme Court upheld a conviction for distributing flyers protesting America's World War I draft law. Justice Oliver Wendell Holmes argued: "The most stringent protection of free speech would not protect a man in falsely shouting fire in a theatre, and causing a panic."
The Sedition Act was repealed in 1920, though the Espionage Act survives today, and has been amended several times. Holmes would later backtrack on his comment: was distributing leaflets protesting the draft like shouting FIRE in a theater? Few would say so now. Nonetheless, his basic principle still stands and remains relevant: nobody has the right to shout FIRE in a crowded theater -- or declare there is no fire when there is; or tell people they are safer remaining seated; or direct them to a locked door; or encourage people to set fires in theaters. In short, free speech does not imply a right to spread malicious disinformation.
* The collision between information technology and data rights began with the telephone. There had been legal questions following the invention of modern lock technology during the 19th century, with the courts concluding -- though the decision is very hard to trace back -- that a suspect could be legally compelled to hand over the key to a strongbox, but could not, by the 5th Amendment, be legally compelled to reveal the combination of a safe, because that was private information. That wasn't so much of a concession, however, because the authorities could legally bust open a safe if they wanted to.
The invention of the telegraph in the 19th century also hadn't really affected the laws much; the telegraph was, legally speaking, not much different from traditional mails, just faster. It was the telephone, which meant instantaneous communications, without a record of what was said, among households and businesses all over, that began the complication of the issue, with the authorities using "wiretaps" to listen in on those under suspicion.
Wiretaps were used almost from the beginning of the telephone era, with the invention of audio recording systems making them much more practical. In the 1920s, during Prohibition in the USA, a number of bootleggers were arrested on the basis of wiretaps, who then challenged the constitutionality of their convictions. In the 1928 case of OLMSTEAD V. UNITED STATES -- Roy Olmstead being a bootlegger -- the Supreme Court judged that wiretaps were constitutional, and more significantly that a warrant wasn't needed. That was overturned in the 1967 case of KATZ V. UNITED STATES -- Charles Katz being a citizen of Los Angeles who was into illegal gambling -- with the Supreme Court deciding that a warrant was needed for a wiretap after all.
KATZ V. UNITED STATES was limited, in that it only covered normal criminal investigations of American citizens. Warrantless wiretaps for national-security purposes still took place -- but they would factor into the most famous case of a leak of secret information in US history. In June 1971, THE NEW YORK TIMES started to publish THE PENTAGON PAPERS, which were extracts from a Pentagon study on the origins of the Vietnam War that outlined deceptions by the government. Other major newspapers also began to publish the materials. They had been leaked by Daniel Ellsberg, a military analyst who had worked on the original study at the RAND Corporation.
President Richard Nixon hadn't wanted to make a fuss about the PENTAGON PAPERS, since they only discussed actions of the previous Kennedy and Johnson Administrations. However, Nixon was persuaded that the leak set a dangerous precedent, and so attempted to block continued publication. That was a mistake, since the Supreme Court, standing with the 1st Amendment, judged in favor of publication. Ellsberg freely surrendered to the authorities in 1971, saying he was "prepared to answer to all the consequences of this decision." He was charged under the Espionage Act and went to court in 1973. The case was dismissed, after the emergence of evidence that the White House had illegally wiretapped Ellsberg's phone, and even broken into the office of his psychiatrist.
That wasn't the Nixon Administration's only demonstration of excessive enthusiasm for spying on American citizens. Such abuses led to reconsideration of the loopholes in the wiretap law, leading in 1978 to the landmark "Foreign Intelligence Surveillance Act (FISA)". FISA specified that non-criminal surveillance could only be performed for collecting foreign intelligence or foreign counter-intelligence, and established a secret "Foreign Intelligence Surveillance Court (FISC)" to issue warrants for wiretaps. Its purview would later be expanded to physical searches and the planting of tracking devices. The FISC, more generally called the "FISA Court", has 11 judges, all appointed by the Chief Justice of the Supreme Court for 7-year terms. Its deliberations are, of necessity, secret.
* As footnotes to the emergence of the clash between technology and data rights:
* Personal computing got started in the late 1970s, mostly on a hobbyist basis, to boom in the 1980s with the introduction of the Apple II PC and the IBM PC. PCs became widespread, with companies springing up to provide software and other tools for them. Computer communications were primitive in the era, using low-speed "dialup modem" connections on the public phone network, and generally reliant on "online service providers", most prominently CompuServe. CompuServe and its rivals provided email, bulletin boards, news, weather, and other services. There was also a global BBS named "USENET" that ran on large mainframe computers, connected over the emerging global internet, which the computer-literate could access via dialup, using a PC as a terminal.
It was at this time that serious fighting over information technology began. One of the first regulatory acts relative to computing was the "Electronic Communications Privacy Act (ECPA) of 1986", which simply plugged a loophole, saying that digital data was covered by the same privacy laws as telephone voice conversations. The ECPA did allow the government to read emails stored for more than 180 days without a full warrant, on the basis that they were abandoned property.
Popular computing was still largely a playful exercise in the 1980s. It didn't really start to become economically and socially significant until the 1990s, with the opening of the internet to the general public and the introduction of the World Wide Web. While it meant a massive improvement in global access to information, of course it introduced problems as well:
Data security became increasingly important. Digital data can be encrypted with a number of schemes, all of which use a secret "key" to fold, transform, and shuffle data. Traditionally, digital ciphers were "symmetric", in that the same key would be used to both encrypt and decrypt a message. However, a symmetric-key encryption scheme was not particularly useful for the internet, at least not in itself, since users doing business on the internet would have to distribute their keys far and wide, and they would inevitably be compromised.
The solution was the development of "public-key encryption", in which a user would have both a "public key" and a "private key". The public key could be published without restraint; it could be used to encrypt messages, but it could not decrypt them again. The only way they could be decrypted was with the user's private key, which was kept secret. Public-key encryption was very computing-intensive and awkward for bulk data, so it was only used to encrypt keys for more powerful and convenient symmetric ciphers -- a scheme known as "hybrid encryption". The technology was promoted by its inventors through a crypto company named "RSA Data Security Inc". There was also a push to develop crypto technology that could validate documents, using "digital signatures", or validate websites, using digital certificates, or "certs".
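The division of labor in hybrid encryption can be sketched in a few lines of Python. The components below are deliberately toy stand-ins -- textbook RSA with tiny primes and a repeating-key XOR "cipher", both hopelessly insecure -- chosen purely to show the flow: a fast symmetric key scrambles the message, and the slow public-key step protects only that key.

```python
# Toy hybrid-encryption sketch (illustrative only, NOT secure).
import secrets

# Textbook RSA with tiny primes: n = p*q, and e*d = 1 mod lcm(p-1, q-1).
p, q = 61, 53
n = p * q        # 3233: the public modulus
e = 17           # public exponent (publishable)
d = 2753         # private exponent (kept secret; 17 * 2753 = 1 mod 780)

def xor_cipher(data: bytes, key: bytes) -> bytes:
    """Symmetric step: the SAME key both encrypts and decrypts."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

# Alice generates a random symmetric session key...
session_key = secrets.token_bytes(8)
# ...encrypts the message with it (fast, symmetric)...
ciphertext = xor_cipher(b"meet at noon", session_key)
# ...and wraps the session key, byte by byte, with Bob's PUBLIC key.
wrapped = [pow(b, e, n) for b in session_key]

# Bob unwraps the session key with his PRIVATE key, then decrypts.
recovered_key = bytes(pow(c, d, n) for c in wrapped)
plaintext = xor_cipher(ciphertext, recovered_key)
print(plaintext)  # b'meet at noon'
```

Note that anyone can wrap a key for Bob using the public `(e, n)` pair, but only the holder of `d` can unwrap it -- which is exactly why the public key can be published without restraint.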
There was nothing secret about any of this crypto technology, meaning there was little control over its spread. In the late 1980s, a crypto geek named Phil Zimmermann came up with a crypto software package named "Pretty Good Privacy (PGP)". PGP imitated the public-key encryption technology of RSA, with hybrid encryption provided using a number of available symmetric ciphers.
Zimmermann was ready to go with PGP in 1991, but he was confronted with two legal obstacles: one being that RSA Inc owned and enforced the patent on public-key encryption -- the other being that the US government was becoming aware of the potential for trouble in wide distribution of strong encryption, and so took a dim view of Zimmermann's activities. By 1996, he had come to an understanding with RSA Inc, and the government had acquired some muddled understanding of the public need for strong encryption.
In 1994 the Clinton White House announced the intent to set up a cryptosystem in which the government would perform "key escrow", retaining the private keys of all the users. The cryptosystem was known as the "Escrowed Encryption Standard (EES)". It involved two encryption standards -- one named "Clipper" for telephone communications, and the other named "Capstone" for computer communications. Alice could, in principle, buy a phone with a Clipper chip in it, and when she obtained the phone, the government would receive the private key for it. The key would be split in half, with each half provided to a different government organization. A law enforcement agency would have to go to both organizations to obtain the full key.
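The two-agency split can be illustrated with a simple XOR secret split -- a sketch of the general idea, assuming the simplest possible form rather than the actual classified escrow procedure. Either share alone is indistinguishable from random noise; only both together reconstruct the key.

```python
# Sketch of splitting an escrowed key between two agencies (toy form).
import secrets

# The phone's secret "unit key" that the escrow scheme must protect.
unit_key = secrets.token_bytes(10)

# Split it into two shares: share_a is pure random noise, and share_b is
# the key XORed with that noise. Each share alone reveals nothing.
share_a = secrets.token_bytes(len(unit_key))               # held by agency A
share_b = bytes(k ^ a for k, a in zip(unit_key, share_a))  # held by agency B

# Only by obtaining BOTH shares can law enforcement rebuild the key.
recovered = bytes(a ^ b for a, b in zip(share_a, share_b))
print(recovered == unit_key)  # True
```

The design choice is that compromising one agency gains an attacker nothing, which is why the scheme required a warrant to be served on both organizations.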
The US government mandated the use of Clipper and Capstone for its own use, and pressured companies doing business with the government to do the same. There was some basis for a private key escrow system; it would provide insurance for an organization in case a private key was lost, or an angry employee left a company and didn't bother to pass the key on to anyone. However, critics suggested that the government's desire to escrow cipher keys had about the same level of logic and desirability as would a government effort to escrow keys to the doors of all the homes of citizens.
Not surprisingly, the government was the only entity in favor of the scheme. The industry response was overwhelmingly negative -- one protest being that the Clipper Chip would make US products unattractive to foreign buyers, who would not like the US government having a "back door" into their communications. By 1996, the Clipper Chip had been effectively abandoned.
That set the pattern for later attempts to clamp down on encryption, always starting out with a grand initiative that then bogged down in reality. Governments began to embrace the reality. In the spring of 1999, the German government announced that it would help support the development of open-source cryptotech and recommend it to German citizens. The French government formally scrapped attempts at key escrow; the Canadian government formally announced a "hands-off" policy towards encryption; and in the US, the Clinton Administration issued a policy statement on crypto exports that declared: "Americans will remain free to use any encryption system domestically."
In the summer of 2000, the Clinton Administration followed up this move by relaxing the restrictive export controls on crypto software that had been the law, much to the relief of US crypto-technology companies. A precedent had been set for the right to strong encryption -- but no definitive decision had been made in court. The US government would revisit the issue, replaying it repeatedly. As the saying goes: Lunacy is doing the same thing over and over again, expecting different results.
* Another significant development during the 1990s relative to internet law was Section 230 of the "Communications Decency Act (CDA)" of 1996. As its name implies, the CDA was primarily intended to censor porn on the internet, keeping it away from the innocent eyes of minors -- but in the 1997 decision in RENO V. AMERICAN CIVIL LIBERTIES UNION, the US Supreme Court judged that the CDA went too far in suppressing speech -- which was more or less typical of 1st Amendment cases, free speech generally winning in court.
In fact, although there was much fuss about internet porn early on, it didn't turn out to be such a problem. Pornographers remained largely in their own domains and generally stayed out of mainstream websites; search engines incorporated "SafeSearch" filters that, if a user wanted or parents stipulated, blocked access to porn sites. Porn sites always notified incoming users that they carried "mature content" and asked if the users wanted to proceed -- with many of the sites also demanding payment. The system was not perfect, but it was stable, and nobody had better ideas.
It was Section 230 that proved of more lasting significance. It said that operators of online media carrying user-contributed material, such as bulletin boards or USENET, could not be held legally liable for content issued by users on their systems. If an individual released vicious and slanderous texts on online media, the operators could not be sued for it. Section 230 did not, however, extend to intellectual property; the obligation of operators to remove material in violation of copyright was established separately, by the Digital Millennium Copyright Act of 1998.
Section 230 was largely rooted in cases involving radio broadcasters and book publishers dating back to the 1930s. Publishers were responsible for the content of the books they published because they exercised editorial control over them, and so they could be sued for libel. However, a bookstore owner could not be sued for selling libelous books, since it was not reasonable to assume the owner had read every book.
The courts had earlier decided that online media outlets carrying user-contributed material were like bookstores, and so the operators could not be held liable for user content. There was a big catch: if the operators moderated the content, they were then liable. The result was predictable: operators stopped moderating content. That led to Section 230 -- which allowed the operators to engage in moderation, even delete content completely off their pages, without being penalized by the legal system. In other words, operators were given very broad discretion in what they would and would not let people post to their websites.
It wasn't such a big deal then, but the far-sighted could see that it might be eventually. At the time, trolls spreading conspiracy hoaxes and other disinformation online were seen as not much more than a nuisance. However, trolling in a broader sense had become more widespread by the end of the century, with popular websites such as Alex Jones' INFOWARS arising to spread wild conspiracy hoaxes -- and cable TV carrying Fox News, preaching the Troglodyte-Right gospel, obtaining an attentive audience. It was the beginning of the digital era of information wars, but it wasn't taken very seriously at the outset.
* Incidentally, although intellectual property law is a significant issue, it is somewhat off the track in this document -- except possibly to the extent that the growth of the internet led to widespread piracy of music, videos, and software, leading to enforcement actions, and the emergence of technologies to impede illegal copying. Such issues persist, but like online porn, they're seen as manageable.
In any case, the US Patent & Trademark Office can issue patents lasting for 20 years to protect inventions, while the US Copyright Office registers copyrights on creative works, which last for the life of the author plus 70 years. There are also company trademarks, like "Coca-Cola", that can be renewed indefinitely. Patent and copyright protection law can be extremely complicated -- for one big example, biomedical research companies can get patent protection for modified genes, with conditions. Patent litigation can become protracted and expensive, with some firms, known as "patent trolls", acquiring large numbers of patents, to use them to press endless lawsuits.
The difficulty with copyrights is that they were greatly extended in 1976, with some adjustments later. As it turned out, the prime mover behind the push for copyright extension was Disney Corporation, which wanted to protect Mickey Mouse and friends. It is generally expected that when Disney's copyrights are about ready to run out, the company will lobby to extend copyrights further. Intellectual property law is a big subject in itself, and can't be pursued in detail here.
* By the turn of the century, the global internet had become solidly established, the world becoming effectively wired together. Amazon.com, founded in 1994 as an online bookseller, became a pioneer in online commerce, gradually selling every product under the Sun. Similarly Google, an "internet search engine" used to find online resources, was rising towards predominance -- notably sidelining Yahoo, which had trailblazed search engines in the mid-1990s. Google would also become predominant in online advertising.
Modern "social media" firms, the descendants of USENET, emerged later in the decade, with Facebook being founded in 2004 and Twitter in 2006. Home videos were similarly enabled by the founding of YouTube in 2005; users had faster internet speeds that made video downloads more practical. That would lead in turn to the rise of Netflix and other online movie / video channels -- following in the path of music download sites, which had arisen in the 1990s. However, movie / video and music channels are not particularly relevant to this document.
None of these new systems meant a fundamental change in the data security landscape. Amazon did have problems with manipulation or "shilling" of its user product reviews, but that was addressed as an ongoing thing. Similarly, there were attempts from the beginning of internet search engines to influence Google's ranking of specific websites, but that was also an ongoing thing -- and to a certain extent, "search engine optimization (SEO)" was seen as a legitimate activity. Google invested effort into handling search manipulation, and also "click fraud" associated with Google ads to game the advertising system. Somewhat more ominously, the growth of big online firms led to a parallel growth in "data mining" -- the mass collection of data from users surfing the internet, with the data used for targeted advertising.
The destruction of the World Trade Center towers in New York City by Islamic terrorists on 11 September 2001 led to a quiet, but ultimately unsettling, interest by governments in using online surveillance to spot terrorists. In the US, the National Security Agency (NSA), the US Federal government's crypto organization, developed a comprehensive and powerful system for mass surveillance of digital traffic. The centerpiece of the effort, codenamed PRISM, was a system in which the NSA monitored the traffic of major US internet companies.
PRISM performed bulk collection of the "metadata" that made up the traffic to pick out patterns from the flood. If suspicious individuals were picked out, they were then subjected to "targeted surveillance", subject to a warrant by the FISA Court. Congressional committees also performed oversight. Other nations performed online surveillance as well, with the US often collaborating with such efforts by allies.
Government surveillance was greatly enhanced though the use of data mining systems that could sort through and assemble data to find patterns. Any one action by an American intending to perform a terror attack would not be particularly suspicious; but finding out that person was acquiring a number of things, each from different sources, that collectively looked like a kit of parts for a bomb would be suspicious.
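The kit-of-parts idea can be made concrete with a stylized sketch: records from separate sources are merged per person, and only the combination -- never any single purchase -- matches a watchlist pattern. All the names, items, and records here are hypothetical, invented for illustration.

```python
# Stylized data-mining sketch: merging records across sources to find a
# pattern no single source would reveal. All data here is made up.
WATCHLIST_KIT = {"timer", "wire", "oxidizer", "detonator cap"}

purchase_logs = {                       # one log per data source
    "hardware_store":    [("p1", "wire"), ("p2", "timer")],
    "chemical_supplier": [("p1", "oxidizer"), ("p3", "fertilizer")],
    "electronics_shop":  [("p1", "timer"), ("p1", "detonator cap")],
}

# Merge the sources: person -> set of everything bought anywhere.
combined: dict[str, set[str]] = {}
for log in purchase_logs.values():
    for person, item in log:
        combined.setdefault(person, set()).add(item)

# Flag anyone whose combined purchases cover the whole kit.
flagged = [p for p, items in combined.items() if WATCHLIST_KIT <= items]
print(flagged)  # ['p1']
```

Each individual record ("p1" bought wire; "p1" bought a timer) is innocuous on its own; it is only the cross-source union that triggers the flag, which is both the power and the privacy hazard of the technique.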
In late 2002, press reports described a data mining system being developed by the Pentagon named "Total Information Awareness". There were public concerns over the matter, and though the military renamed the effort "Terrorism Information Awareness" to emphasize that the goal wasn't to spy on law-abiding citizens, the US Congress was not impressed and cut funding a year later. Citizens' rights had been protected -- or had they? The effort went on almost unchanged, broken into smaller projects that were kept under a wrap of secrecy.
Of course, data mining wasn't inherently malign. Amazon.com could track customer purchases and use each customer's pattern of purchases to suggest new product buys. That wasn't very controversial, since it turned out that customers generally like to get ads for products they want to buy, and Amazon was usually careful not to let out customer data to other organizations. Similarly, supermarket chains and other retailers set up "loyalty card" programs that monitored customer purchases, with a retailer offering customized sales deals and possibly some "freebies", such programs becoming very popular. The big problem was in irresponsible use of customer data, with some internet firms selling it out to other companies or organizations without customer knowledge or approval. Big companies like Amazon became more scrupulous in letting users know about data collection and asking users for permission to collect it.
Incidentally, relative to the spam problem, in 2003 the US Congress passed the "CAN-SPAM Act", designed to limit the impact of spamming. It required that emails not be misleading, and that customers have the ability to unsubscribe. It did not deny companies the right to send emails without permission. Activists fighting the spam problem thought CAN-SPAM was too weak, but it did give internet service providers (ISPs) a tool to suppress illicit spammers. Development of spam filters from that time brought the spam problem down to a much more reasonable level.
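The statistical filtering that tamed spam can be sketched with a tiny naive-Bayes classifier. The training messages below are made up for illustration; real filters train on enormous corpora, but the principle is the same: score a message by how its words have historically skewed toward spam or legitimate mail.

```python
# Toy naive-Bayes spam filter (hypothetical tiny training set).
import math
from collections import Counter

spam_train = ["buy cheap pills now", "cheap pills cheap deals", "win money now"]
ham_train  = ["meeting agenda for monday", "lunch at noon", "project status report"]

def word_counts(docs):
    counts = Counter()
    for doc in docs:
        counts.update(doc.split())
    return counts

spam_words, ham_words = word_counts(spam_train), word_counts(ham_train)
vocab = set(spam_words) | set(ham_words)

def log_score(message, counts, class_docs):
    # log P(class) + sum of log P(word | class), with add-one smoothing
    # so an unseen word doesn't zero out the whole score.
    total_docs = len(spam_train) + len(ham_train)
    total_words = sum(counts.values())
    score = math.log(class_docs / total_docs)
    for word in message.split():
        score += math.log((counts[word] + 1) / (total_words + len(vocab)))
    return score

def is_spam(message):
    return (log_score(message, spam_words, len(spam_train)) >
            log_score(message, ham_words, len(ham_train)))

print(is_spam("cheap pills now"))         # True
print(is_spam("monday project meeting"))  # False
```

Filters of this family proved effective precisely because they adapt: every message a user marks as spam updates the word counts, so spammers' evasions are gradually learned.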
Not so incidentally, during the decade foreign governments began to increasingly use social media for distributing propaganda. Russia became a leader in the "weaponization" of social media, setting up "troll farms" to flood social media with pro-Kremlin postings, with the Russian government also supporting nominally independent gangs who cracked into foreign servers and conducted "denial of service (DoS)" attacks -- flooding websites with a barrage of bogus traffic to bring them to their knees. Of course governments conducted spying in cyberspace, but that was more or less universal, and it was much more discreet. Russian trolls generally worked tacitly with subversives in the target countries: the Russians would listen in on local trolls online, then amplify their message.
Roughly in parallel, social media began to take on a different color. Part of the issue was that social-media firms added gimmicks to boost the visibility of specific accounts, with Facebook introducing "likes" and Twitter introducing "retweets". They weren't bad ideas in themselves, but they opened the door to the manipulation of social media to boost accounts.
It was also becoming increasingly evident that some of the players in online media were very bad actors -- most prominently Alex Jones and INFOWARS. On 14 December 2012, 20-year-old Adam Lanza of Newtown, Connecticut, shot and killed his mother, then went to Sandy Hook Elementary School and killed 26 people, including 20 children; when police arrived, he shot and killed himself.
There had been many mass shootings in the USA before Sandy Hook and there would be many more following, but it nonetheless ended up being a landmark. Jones and other conspiracy trolls online called the event a "giant hoax" in which no one had been killed, a "false-flag operation" conducted by the government to pass gun-control laws, and claimed the parents of the murdered children were paid actors who had been working for the CIA. The parents were then incessantly harassed and threatened by extremists. Eventually the parents decided they'd had enough, with lawyers taking on Jones for defamation.
* By that time, 2012, the popular computing market had shifted with the rise of the smartphone and tablet -- which were, if not immune to malware, at least not so vulnerable to it, since users got their "apps" from appstores, where the apps were screened to an extent for bad actors. Fast internet had become more common, as had "cloud computing", in which businesses gave up their mainframe computers and instead rented out computing power from "server farms" operated by players such as Amazon.com.
The issue of PRISM remained in the background until 2013, when Edward Snowden -- an NSA contractor -- fled the US, to then perform a mass unredacted public dump of NSA documents on PRISM and the NSA's other mass-surveillance programs. The result was an international uproar. Privacy advocates regarded Snowden, who obtained asylum in Russia, as a hero, while the government regarded him as a traitor. Snowden was no Daniel Ellsberg, demonstrating little interest in coming back home to face the music.
It doesn't appear Snowden had access to any big secrets, his revelations being painted in loud colors but with a very broad brush. Nothing that was revealed was much of a surprise to those knowledgeable of US intelligence activities; public reaction was overheated. Although PRISM observed connections between users, there was no access to the communications themselves without a FISA warrant.
Claims that the US had tapped the phone of German Chancellor Angela Merkel were investigated by the German authorities, and were found to have no basis in the evidence. The German authorities did register concerns over the extent of US surveillance, even though US intelligence collaborated closely with German intelligence on sharing of data on terrorists. Protests by the French were even more dubious; the French not only collaborated with the US on intelligence, but had their own public surveillance systems. Due to ongoing terrorist attacks on French soil, the French government gave their intelligence services more leash than the US government gave the NSA.
In 2014, US President Barack Obama announced changes to the US surveillance system, but critics judged them largely cosmetic. Obama said he had no reason to believe that the NSA had gone beyond the bounds, and heads were not going to roll. Surveillance would continue, if in a more circumspect fashion, and oversight would be enhanced. The modest response was not surprising: if a major terrorist attack took place on US soil, the government would be flogged for not having taken every reasonable, or even unreasonable, step to prevent it.
Indeed, the general public quickly forgot about Snowden. Fears of terrorism were very high, beyond the actual level of threat, and there was public acceptance that mass surveillance was an appropriate tool to deal with terrorism.
That did not, however, mean a return to business as usual. Snowden's revelations led a number of tech companies to integrate encryption more tightly and seamlessly into their products, and enable it as a default setting. The Apple company released their iOS 8 operating system in 2014, which established security as a major selling point; internet giant Google soon followed with a security-focused release of their Android operating system. The authorities grew restless as vendors began to push security, worried that it would allow terrorists and criminals to "go dark", to successfully evade surveillance, with criticisms of Apple in consequence.
Following the Snowden revelations, there was also a long-running effort led by the Wikimedia Foundation to challenge NSA surveillance, conducted by a secret system labeled as "Upstream", in court. The exercise went up and down in the courts until 2023, when the Supreme Court declined to hear it. The problem for Wikimedia was that they didn't really know exactly what Upstream was doing. The NSA said they were obeying the rules, and in the absence of any evidence they weren't, Wikimedia didn't have a case.
* Even those who were no fans of Snowden did acknowledge that he had successfully highlighted the deep tension between the government's rights of surveillance and the rights of the citizens to privacy. Civil libertarians are inclined to assert that the right to privacy is absolute, but that's not true. The state does have a right -- not unrestricted -- to conduct surveillance to protect public safety, and a right -- again, not unrestricted -- to keep secrets. The state has an obligation to prevent the Black Hats from committing acts of terror, and the state will necessarily keep secrets in that effort; otherwise, the Black Hats would be able to stay a step ahead of the authorities. Although civil libertarians question both these premises, they are legally unassailable.
A line has to be drawn on privacy, but there is no unrestricted right to it; again, the US Constitution specifies no unrestricted rights. Court decisions have established that defendants may "plead the Fifth" and refuse to divulge computer passwords to a court. Of course, as mentioned earlier, defendants then can't speak in their own defense, and might prejudice the jury against them. In addition, it was established that "biometric" identification -- fingerprints or face recognition -- is not covered under 5th Amendment protection, since neither fingerprints nor faces are confidential.
The 1974 Federal Privacy Act restricted the US government to storing only "relevant and necessary" data on citizens, but this is obviously vague. US states have passed their own privacy laws -- only to open up loopholes, such as permitting the mandatory drug testing of college athletes. Again, the right to privacy is not, and never has been, anywhere near as strong as people are inclined to assume it to be. It was just that people did not worry about it as much in the past, since it wasn't such a practical issue. Now that we are entering the era of "surveillance societies", the issue has become much more important.
If we have a limited right to privacy, we have effectively no right to anonymity; that would be like granting a right to fraud. In most countries, it is not possible to drive a car without registration plates, a license, or insurance. Most countries require babies to be registered at birth, and issue numbers to track payments in and out of social-security systems. People do not expect to live in an anonymous house; draw an anonymous income; or open an anonymous bank account, and conduct anonymous transactions with a bank. Every purchase or other trusted transaction that we perform online is necessarily accessible to the authorities, if possibly with a warrant, since the law is the ultimate enforcer of trusted transactions.
* The tensions between Apple and the US government came to a head in late 2015. In San Bernardino, California, on 2 December 2015, Syed Rizwan Farook -- who worked for the San Bernardino County department of public health -- and his wife Tashfeen Malik shot and killed 14 people and wounded 22 others at a department holiday luncheon. The killers were gunned down themselves later that day in a shootout with police.
The FBI searched the residence of the late terrorists, and found among other things three smartphones. Two had been broken, but the third was intact. It technically wasn't Farook's phone; it belonged to San Bernardino County and Farook used it as his work phone. It didn't seem likely there was much of interest on the third smartphone -- otherwise it would have been broken as well -- but the FBI wanted to inspect it anyway. The problem was that the phone demanded a four-digit passcode and used login throttling, delaying for a longer time after each failed passcode input. After ten failed attempts, the phone would wipe its contents.
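The throttling scheme can be sketched as follows. This is a minimal illustration of the mechanism, not Apple's actual implementation; the class names and the escalating delay schedule are hypothetical, since the real iOS schedule is not public in detail:

```python
WIPE_LIMIT = 10
# Hypothetical escalating lockout delays (seconds) after each failure.
DELAYS = [0, 0, 0, 0, 60, 300, 900, 3600, 3600]

class ThrottledLock:
    """Toy model of a passcode lock with escalating delays and auto-wipe."""

    def __init__(self, passcode: str):
        self._passcode = passcode
        self._failures = 0
        self.wiped = False

    def try_unlock(self, guess: str) -> bool:
        if self.wiped:
            return False              # contents are gone, nothing to unlock
        if guess == self._passcode:
            self._failures = 0        # success resets the failure counter
            return True
        self._failures += 1
        if self._failures >= WIPE_LIMIT:
            self.wiped = True         # tenth failure erases the device
        return False

    def current_delay(self) -> int:
        """Seconds the device enforces before the next guess is accepted."""
        if self._failures == 0:
            return 0
        return DELAYS[min(self._failures, len(DELAYS)) - 1]
```

The point of the escalating delays is that a four-digit passcode has only 10,000 possibilities, trivial to brute-force at machine speed; the throttling stretches an exhaustive search into months, and the wipe limit makes it suicidal. This is exactly the protection the FBI wanted Apple to strip out.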
The FBI had been talking to Apple about tricks to get into the phone. When they all failed, the FBI asked Apple to create a special version of the phone operating system without login throttling and the ten-guess limit, then install it on Farook's phone. Apple officials gave the matter serious thought, and then replied: NO.
A storm of public condemnation descended on Apple -- but Apple also had friends, with a torrent of "amicus curiae (friend of the court)" briefs in defense of the firm being sent to the courts by technology firms such as AT&T, eBay, Kickstarter, Twitter, and Cisco. Even companies that could be seen as rivals in one sense or another to Apple -- Amazon, Facebook, Google, and Microsoft -- stood up to defend the company.
Indeed, while Apple's position might have seemed outrageous at first hearing, once the issues were laid out, a sensible person would have second thoughts. In the first place, the internet is an insecure place, full of Black Hats; Apple officials felt the company had an obligation to its customers to make sure the Apple products being used were secure, to the extent that even Apple couldn't break into them. If Apple could break in, so could the Black Hats.
On the other side of that coin, anybody could find strong encryption tools, such as the Signal encrypted messenger, on the global internet, and there was no realistic way of preventing them from doing so. The only result of preventing Apple and other technology firms from providing strong encryption would be to drive users to obtain encryption technology that might not be trustworthy. The slogan emerged that it made no sense to improve internet security by undermining it.
The FBI argued in reply that all the government wanted was for Apple to break into one phone, with the permission of the owners of that phone, meaning San Bernardino County. Apple was unimpressed, saying that once one iPhone was broken open, any other one could be as well, and doing so would set a legal precedent -- which the FBI conceded. Apple would then get a stream of court orders, demanding the company crack more iPhones. District attorneys were already standing in line to do so.
Having opened the gate, could it ever be closed again? Apple Chief Executive Officer Tim Cook said: "This case was domestic terrorism, but a different court might view that robbery is one. A different one might view that a tax issue is one. A different one might view that a divorce issue would be okay. And so we saw this huge thing opening and thought: You know, if this is where we're going, somebody should pass a law that makes it very clear what the boundaries are. This thing shouldn't be done court by court by court by court."
Also, what would happen if China demanded that Apple crack an iPhone belonging to a political dissident? To be sure, American legal precedent doesn't mean much in China -- but Apple only does business in China subject to Chinese law, and once Apple demonstrated they would crack open a phone, that's all the precedent the Chinese government would need.
* The FBI, in short, was asking Apple to get "a little bit pregnant". As Cook also pointed out, the authorities didn't really have much to complain about. They could, legally and without much difficulty, get their hands on torrents of data that simply weren't there a generation ago: social media, phone connections, security cameras, the growing "internet of things". The law could obtain the call records for the San Bernardino terrorists, detailing whom they called. Cook said: "Going dark? This is a crock! No one's going dark ... We should take a step back and look at the total that's available, because there's a mountain of information about us."
As Cook put it: "It wasn't very long ago when you wouldn't even think about there being health information on the smartphone. There's financial information. There's your conversations, there's business secrets. There's probably more information about you on here than exists in your home."
Cook echoed the findings of a report published earlier in 2016 by Harvard's Berkman Center for Internet & Society, signed by an impressive roster of legal and security professionals. The report pointed out how business data mining, the growth of cloud computing, and the expanding internet of things, all undermine privacy and security: "These are prime mechanisms for surveillance, alternative vectors for information-gathering that could more than fill many of the gaps left behind by sources that have gone dark -- so much so that they raise troubling questions about how exposed to eavesdropping the general public is poised to become."
Cook accepted that as the reality, and then wondered: If we can't draw the line of intrusion at strong encryption, is there any line at all? He questioned the right of the government to have access to data that runs through Apple's pipelines, when the company itself didn't feel it had a right to access that data: "I think [Apple customers] should have a reasonable expectation that your communication is private."
The confrontation was abruptly called off when the FBI announced that a third party -- believed to be the mysterious NSO Group, an Israel-based cyber-intelligence firm -- had cracked the iPhone in question. The issue hadn't really been resolved, however, no legal precedent having been set on the propriety of strong encryption for the public. The authorities were poised to move against Apple again when events required it, while Apple worked to make sure the iPhone was even more impregnable. One of the results of the squabble was that the iPhone got a reputation as the "gold standard" of security, which was exactly what Apple wanted.
The issue of secure phones then went quiet, even though the authorities kept on obtaining locked phones from criminal suspects. This was apparently due to the widespread use of "mobile device forensic tools (MDTF)" -- the most popular one in US law-enforcement use being "Cellebrite", made by an Israeli company of that name. MDTFs are used to extract data from phones, and have some ability to break through phone passwords, with some MDTFs being more capable (and expensive) than others. Any phone user really serious about security and with an up-to-date phone operating system might give an MDTF a very hard time -- but users are typically sloppy in varying degrees about security, and breaking in isn't necessarily difficult. If users don't take advantage of the security tools they are given, that's not Apple's problem.
Not so incidentally, law enforcement also often makes use of devices called "StingRays", after the most popular of the class, that mimic a cellphone tower to allow the identification, location, and tracking of cellphone operations in a location, including the ability to map and obtain statistics on all the cellphone traffic in that area. As a rule, they don't actually intercept phone conversations, since that would demand a warrant -- though that's not an issue in more authoritarian countries. There have been concerns about warrantless use of StingRays on 4th Amendment grounds, but so far the US Supreme Court hasn't come to a decision on the matter.
* In any case, at roughly the same time as the flap between the US government and Apple, the vulnerabilities of search engines, particularly Google, started becoming more obvious. All along, Google had struggled with the issue of searches that gave websites loaded with disinformation equal or greater priority than legitimate websites.
The problem got more troublesome in early 2014, when Google introduced "snippets", which gave brief answers to queries without the user having to scour through the list of links returned by a Google query. The problem with snippets was that they could reflect disinformation from bogus websites; early on, a query for "King of the United States" gave "Barack Obama" as the answer. Similarly, other US presidents were falsely labeled as being members of the Ku Klux Klan, while Google could, under the right circumstances, agree that "Jews are evil" or "women are evil".
Google worked to qualify its links, introducing an "about this result" feature to suggest how much a link could be trusted, and also refined "content advisories" -- which were generally associated with fast-breaking news, for example giving warnings that the results presented were tentative and open to change. Google had also long flagged "unsafe sites" to warn people visiting them that such sites might cause a malware infection.
Google turned to artificial intelligence (AI) systems to help sort things out, in particular developing a "Multitask Unified Model (MUM)" to crosscheck information from multiple websites. The search-engine problem turned out to be something along the lines of online pornography: a problem that appeared manageable over the longer run, with Google perceived as earnest in seeking solutions. It was a complicated task, made even more complicated by the reality that Google could not afford to be biased in its searches, except to the extent of screening out disinformation.