It sits in your house, passively recording everything you say. It knows what you like. It knows what you listen to. It knows what you buy. It knows who’s in the room with you. And now, it might tell the police all about it.
“It” is the Amazon Echo, a revolution in the “internet of things.” The Echo is a smart speaker that connects directly to Amazon’s cloud-based personal assistant service, Alexa. It can play music; give you the traffic, weather, and news; handle your shopping; put things on your calendar; play games; and even respond appropriately to a wide array of cultural references, all in response to voice commands. If you have the right add-ons, Alexa can even control your entire home, dimming your lights, adjusting the thermostat, and locking the doors.
It does this by passively listening for a given activation phrase—the default is “Alexa.” Generally, Alexa does not record anything else (although it may store up to sixty seconds at a time in a buffer). Once it hears its name, Alexa will begin recording and will send what follows to Amazon for processing—both to respond to a given request, and to store to improve responsiveness later. On one hand, this means that Amazon is not actually recording everything you say, but only those specific commands directed to Alexa. On the other hand, it means that Alexa is always listening.
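The listening model described above, a short rolling buffer plus a wake word that gates what actually leaves the device, can be sketched in a few lines of Python. This is a toy illustration, not Amazon's implementation: the string "frames," the sixty-second figure, and the `<silence>` end-of-command marker are all simplifying assumptions.

```python
from collections import deque

WAKE_WORD = "alexa"
BUFFER_SECONDS = 60  # the Echo reportedly buffers up to ~60 seconds locally

def process_stream(frames, frame_seconds=1):
    """Simulate a wake-word listener over a stream of one-second 'frames'.

    Everything passes through a small rolling buffer, but nothing is
    'uploaded' until the wake word is heard; only the command that
    follows it leaves the device. Frames are plain strings standing in
    for audio.
    """
    buffer = deque(maxlen=BUFFER_SECONDS // frame_seconds)  # rolling local buffer
    uploaded = []       # what would be sent to the cloud
    recording = False
    for frame in frames:
        buffer.append(frame)            # all audio touches the buffer...
        if recording:
            if frame == "<silence>":    # end of the spoken command
                recording = False
            else:
                uploaded.append(frame)  # ...but only post-wake-word audio leaves
        elif WAKE_WORD in frame.lower():
            recording = True
    return uploaded

# Only the command spoken after the wake word is "uploaded".
cmds = process_stream(["chit-chat", "hey alexa", "play jazz", "<silence>", "more chat"])
# cmds == ["play jazz"]
```

The sketch captures both halves of the point in the paragraph above: the device is always listening (every frame enters the buffer), yet only wake-word-triggered commands are transmitted.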
This became relevant in a recent murder case in Bentonville, Arkansas, in which police obtained a warrant for recordings from Amazon of commands given to the suspect’s Echo. It is far from clear what police hope to gain from these recordings; they have a large amount of traditional evidence and, unless the murderer specifically asked Alexa for help, the recordings are unlikely to be incriminating. Nevertheless, an attempt by police to seek recordings from a device that is virtually always listening to us in our homes is extremely disturbing.
These efforts are made even more concerning by recent court rulings on cell phone location data. According to two federal appellate courts, because cell phones send this information to a third party (that is, to cell phone and app providers), it is not considered sufficiently private for protection from searches and seizures. That means that police can access this data—which often allows an individual to be physically tracked from moment to moment—without even requesting a warrant.
If this principle is upheld by the Supreme Court (which, so far, has refused to consider the issue), it would mean that police could access daily recordings from the privacy of your own home on little more than a hunch and an informal request. Though many may say they have nothing to hide, I doubt most of us would be comfortable knowing a police officer was looking over our shoulder twenty-four hours a day.
There is one barrier to that terrifying outcome, which is that Amazon has refused to comply with the Bentonville warrant and officers there have decided not to press the issue. Like Apple, Amazon has taken it upon itself to protect its customers’ privacy. But a private company cannot be expected to be the defender of its customers’ civil rights forever.
But until the law catches up to the state of technology, every one of our devices is capable of being turned into an informant against us. And though Alexa can do a lot, it has yet to learn how to invoke its Fifth Amendment right to remain silent. Until it does, you might want to think twice before inviting Alexa, and potentially the police, into your home.
What a difference two words can make. Just ask the Center for Competitive Politics (CCP) or Americans for Prosperity (AFP), two organizations that filed separate lawsuits against the same defendant, California Attorney General Kamala Harris, over the same issue: whether Harris’s office had the right to access the organizations’ donor information. (The cases are Center for Competitive Politics v. Harris and Americans for Prosperity v. Harris.)
The plaintiffs’ arguments in each case were basically the same: the state’s request to access donor information would violate the First and Fourteenth Amendments of the U.S. Constitution. But there the similarities stopped: the CCP never got to trial, whereas the AFP did—and won! Was the CCP the victim of a miscarriage of justice? Nah. It all came down to two words: “as applied.”
You know the saying “go big or go home”? Well, unfortunately, the CCP did both: it tried to get the court to rule that Harris’s probe of donor information would be unconstitutional for all organizations. The AFP took a different approach: it asked the court to call the probe unconstitutional “as applied” to the AFP alone.
The AFP’s narrower approach enabled the court to provide relief without upsetting Harris’s authority and potentially affecting thousands of other organizations. Courts generally hesitate to invalidate a state’s actions when they can provide individual relief to the plaintiff instead. If the CCP had taken this course, it might have had a fighting chance. Instead, it took on the added burden of proving how the state’s actions would adversely affect all organizations subject to the same request.
Meanwhile, the AFP coasted without having to prove any such thing. All it had to show was how the state’s request had already affected the organization and could continue to do so. It was grim work, though. Several individuals testified that they suffered reprisals, assaults, and even death threats due to their association with the AFP—a strongly conservative organization. Clearly, being publicly linked to the AFP could lead to serious fallout. For her part, Harris tried to argue that the state would keep donor information confidential, but the AFP was able to show how this had failed before, citing over one thousand instances of donor information being improperly disclosed on the AG’s own website!
The AFP showed that the risk of scaring, and therefore discouraging, would-be donors was real. The chilling effect on individuals’ freedom of association would be too steep a price to pay for a nominal benefit to the state.
It was a strong case—unlike the defendant’s. Harris claimed that accessing donor information was in the state’s best interest; reviewing the findings would help uncover potential irregularities tied to fraud, waste, or abuse. Maybe it would—but it doesn’t pass the “exacting scrutiny” test, which requires states to protect their interests by the least restrictive means in situations like this. More importantly, Harris could not produce any evidence or testimony to corroborate her argument that access to donor information was important to state law enforcement. Although several state-employed investigators and attorneys took the stand, none could claim that they needed, or even used, donor information to do their work—and if they did need it, they could generally get it elsewhere. This evidentiary failure undercut Harris’s arguments and called into question the state’s overall scheme.
In the end, it was not a tough decision: with so strong a case by the plaintiff and so weak a one by the state, the court sided plainly with the plaintiff. It could have gone a step further and declared the state’s actions broadly unconstitutional, but instead it judged them improper as applied to the AFP alone. That restraint was shrewd: Harris will have a harder time challenging the narrower decision on appeal.
So the AFP trial didn’t set a huge precedent for everyone—but that’s kind of the point. If you’re going to file suit, and there’s a path of least resistance, take it. Those sweeping courtroom victories you see in the movies are rare. In real life, justice takes baby steps.
Data breaches are as common as the common cold—unfortunately, just as incurable. Run a news search on “data breaches” and you’ll find that all kinds of institutions—major retailers, tech companies, universities, even government agencies—have been vulnerable at some point. Now run a search on “data breaches,” but include the word “lawsuit.” You’ll find that many of these cases are going to court, but ultimately getting dismissed. What’s going on?
First, you should look at some of these lawsuits more closely: are they filed against the alleged perpetrators of the data breach? Many of them aren’t; those perpetrators are usually hackers who live outside the country or are unable to pay a money judgment. (In legal parlance, that’s known as being judgment proof.) Faced with those limitations, individual victims of data breaches frequently settle for the next best thing: going after the institutions that endured the breach.
Often, this isn’t fair—the institutions are victims too. The point here is that although going after the institutions looks like an easy win from “deep pockets,” that seldom turns out to be the case.
It’s the final requirement, demonstrating injury, that gives plaintiffs the most trouble. Why? Because courts view injury in fiscal terms; you need to show that you actually lost something, not simply that you might. So even if you were the victim of a data breach, as long as your data hasn’t yet been compromised, it doesn’t really count as injury.
There have been exceptions, when the court greenlit cases based mainly on speculative injury, but these usually ended in a settlement before a legal precedent could be set. (See cases against Home Depot, Target, Adobe, and Sony.) For the most part, the fiscal view of injury has prevailed—reinforced in 2013, when the Supreme Court, weighing in on Clapper v. Amnesty Int’l, determined that a plaintiff cannot proceed with a data breach lawsuit unless he or she can demonstrate actual injury or at least imminent threat of injury, each measurable in economic loss. Otherwise, mere perception of injury is too tenuous to establish legal standing, which a case requires to go forward, and the lawsuit will probably get tossed.
The challenge of establishing legal standing recently made its way to the Supreme Court in Spokeo v. Robins. In that case, a plaintiff filed suit against the “people search engine” Spokeo for publishing false information about him. The issue before the Court was this central question of how much injury must be shown for a case to go forward. Prospective plaintiffs were optimistic that the high court would affirm a lower court’s decision that speculative injury was indeed enough. Alas, the Supreme Court sidestepped the issue and punted it back to the lower court for further review. The Court nonetheless reinforced the general tenet that, for a plaintiff to have standing to bring a case, he must allege an “injury in fact” that is both “concrete and particularized.” There is still room for the lower court to broaden the approach to what constitutes an injury, but the Supreme Court’s ruling keeps the status quo in place.
For now, individuals whose data has been compromised generally must be satisfied with what the institutions offer them after a breach occurs: free credit checks and/or access to credit monitors. Do checks and monitoring seem inadequate? Not if you think about what type of harm people face after a data breach. Individuals can detect and report problems in the event someone actually misuses their data. If they keep on top of it, their credit scores will not be impacted. Moreover, credit card companies and other financial institutions will bear the cost of any unapproved charges. In the event of further problems, plaintiffs can then take their injury to the legal system and have their day in court. But at this point, the courts are right to keep this type of class action litigation at bay.
FBI Director James Comey took a rare break from the posturing typical of investigators and prosecutors in the current showdown between Apple and the FBI. While prosecutors argue that Apple’s privacy concerns are a smokescreen to avoid “assist[ing] the effort to fully investigate a deadly terrorist attack,” Comey posted a statement over the weekend in which he took the position that the tension between security and privacy “should not be resolved by corporations that sell stuff for a living. It also should not be resolved by the FBI, which investigates for a living. It should be resolved by the American people deciding how we want to govern ourselves in a world we have never seen before.”
Comey’s statement highlights a crucial problem with the development of privacy law: it often is developed in the context of important criminal cases. This comes at a real cost. We all know that Syed Farook committed a horrific crime, and any rights he once had against government searches are now forfeit. But though Apple may have chosen to serve as a limited proxy for its consumers in the San Bernardino case, often the interests of private citizens are wholly absent from the courtroom (or, often, judge’s chambers) when issues of fundamental privacy are debated.
This leads to a serious imbalance: Apple is talking about the diffuse privacy rights of its consumers and the risks of potential incursions by more restrictive, less democratic governments such as China. On the other hand, Manhattan District Attorney Cyrus Vance can point to 175 Apple devices that he cannot physically access even though those devices may contain evidence helpful to the government.
New York Police Commissioner Bill Bratton and one of his deputies put an even finer point on it in an Op-Ed in The New York Times, citing the case of a murder victim in Louisiana (more than one thousand miles outside Mr. Bratton’s jurisdiction) whose killing remains unsolved because officers cannot unlock her iPhone, which is believed to contain her killer’s identity. “How is not solving a murder, or not finding the message that might stop the next terrorist attack, protecting anyone?” asks Bratton.
But in assuming that private citizens have no greater fear than whether the police can investigate and prevent crimes, Bratton begs the question. In reality, citizens may see law enforcement as a threat in itself. Learning that the NSA was engaging in comprehensive warrantless surveillance likely has given many law-abiding Americans a greater incentive to protect their data from being accessed by the government. Indeed, in light of the NYPD’s record over the last few years—including a finding by a federal judge that they were systematically violating the rights of black New Yorkers and a lawsuit over religion-based spying on Muslims—it is not hard to see why citizens might want protection against Bratton’s police force.
But even if the police were the angels they purport to be, opening a door for a white hat can easily allow access to a black one. Less than a year ago, hackers used a “brute force” approach to exploit a flaw in iCloud’s security, and dozens of celebrities had their private photos shared with the world. These sex crimes are all but forgotten in the context of the San Bernardino shootings, even though the security weakness the FBI wants installed in Farook’s iPhone is markedly similar to that exploited with respect to iCloud.
Nor do those who wish for privacy need to invoke hackers or criminals. A private, intimate moment with a spouse or loved one; a half-finished poem, story, or work of art; or even a professional relationship with a doctor or mental health professional cannot exist unless they can remain private. Once these interactions took place in spoken, unrecorded conversations or on easily discarded paper; now many of our daily activities are carried out on our mobile devices. Even if one has nothing to hide, many citizens might balk at the prospect of having to preserve their private conversations in a format readily accessible by the police.
But if Mr. Comey has shown unusual insight, Mr. Bratton’s one-sided, myopic question illustrates the importance of Apple’s position and the inability of law enforcement officials to be objective about the interests at stake. Police and prosecutors are not always your friends or your defenders. Their goals are—and always will be—investigating and solving crimes and convicting suspected criminals. The less an officer knows, the harder it will be to investigate a case. As a result, privacy rights—even when asserted by innocent, law-abiding citizens—make their job more difficult, and many officers see those rights as simply standing in their way.
This is hardly news. Nearly sixty years ago the Supreme Court observed that officers, “engaged in the often competitive enterprise of ferreting out crime,” are simply not capable of being neutral in criminal investigations. For precisely that reason, the Fourth Amendment requires them to seek approval from a “neutral and detached magistrate” before a search warrant may issue.
That is why Mr. Comey’s acknowledgement that the FBI is not a disinterested party is so refreshing. Pro-law-enforcement voices have been clamoring to require Apple to compromise the security it built into the iPhone, invoking their role as public servants to buttress their credibility. But when it comes to privacy, the police do not—and cannot—represent the public interest. As Comey acknowledged, they are “investigators,” and privacy rights will always stand as an obstacle to investigation.
It is a well-known maxim that “bad facts make bad law.” And as anybody even casually browsing social media this week likely has seen, the incredibly tragic facts surrounding the San Bernardino attacks last December have led to a ruling that jeopardizes the privacy rights of all law-abiding Americans.
First, it is important to clearly understand the ruling. After the horrific attack in San Bernardino on December 2, 2015, the FBI seized and searched many possessions of shooters Syed Rizwan Farook and Tashfeen Malik in their investigation of the attack. One item seized was Farook’s Apple iPhone 5C. The iPhone itself was locked and passcode-protected, but the FBI was able to obtain backups from Farook’s iCloud account. These backups stopped nearly six weeks before the shootings, suggesting that Farook had disabled the automatic backup feature and that his phone may contain additional information helpful to the investigation.
Under past versions of iOS, the iPhone’s operating system, Apple had been able to pull information off of a locked phone in similar situations. However, Farook’s iPhone—like all newer models—contains security features that make that impossible. First, the data on the phone is encrypted with a complex key that is hardwired into the device itself. This prevents the data from being transferred to another computer (a common step in computer forensics known as “imaging”) in a usable format. Second, the iPhone itself will not run any software that does not contain a digital “signature” from Apple. This prevents the FBI from loading its own forensic software onto Farook’s iPhone. And third, to operate the iPhone requires a numeric passcode; each incorrect passcode will lock out a user for an increasing length of time, and the tenth consecutive incorrect passcode entry will delete all data on the phone irretrievably. This prevents the FBI from trying to unlock the iPhone without a real risk of losing all of its contents.
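The third protection, escalating delays capped by a wipe on the tenth failure, can be modeled with a short sketch. To be clear, this is a toy illustration of the behavior described above, not a description of iOS internals; the delay schedule in particular is an assumption, since Apple's exact timings are not reproduced here.

```python
class PasscodeLock:
    """Toy model of the iPhone lockout behavior (illustrative values only)."""
    MAX_FAILURES = 10  # the tenth consecutive failure wipes the device

    def __init__(self, passcode):
        self._passcode = passcode
        self.failures = 0
        self.wiped = False

    def delay_seconds(self):
        # Escalating delay before the next attempt is allowed.
        # These numbers are assumptions, not Apple's actual schedule.
        return 0 if self.failures < 5 else 60 * 2 ** (self.failures - 5)

    def try_unlock(self, guess):
        if self.wiped:
            return False           # data is gone; nothing left to unlock
        if guess == self._passcode:
            self.failures = 0
            return True
        self.failures += 1
        if self.failures >= self.MAX_FAILURES:
            self.wiped = True      # irreversible erase on the tenth miss
        return False
```

Ten wrong guesses leave the device wiped, after which even the correct passcode no longer helps. That irreversibility is precisely why the FBI cannot simply guess its way in.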
As Apple CEO Tim Cook has explained, this system was created deliberately to ensure the security of its users’ personal data against all threats. Indeed, even Apple itself cannot access its customers’ encrypted data. This creates a unique problem for the FBI. It is well-settled that, pursuant to a valid search warrant, a court can order a third party to assist law enforcement agents with a search by providing physical access to equipment, unlocking a door, providing camera footage, or even giving technical assistance with unlocking or accessing software or devices. And, as the government has acknowledged, Apple has “routinely” provided such assistance when it has had the ability to access the data on an iPhone.
But while courts have required third parties to unlock doors, they have never required them to reverse-engineer a key. That is what sets this case apart: to assist the government, Apple would have to create something that not only does not exist, but that it deliberately declined to create in the first instance.
On February 16, Assistant U.S. Attorneys in Los Angeles filed an ex parte motion (that is, without providing Apple with notice or a chance to respond) in federal court seeking to require Apple to create a new piece of software that would (1) disable the auto-erase feature triggered by too many failed passcode attempts and (2) eliminate the delays between failed passcode attempts. In theory, this software is to work only on Farook’s iPhone and no other. This would allow the FBI to use a computer to simply try all of the possible passcodes in rapid succession in a “brute force” attack on the phone. That same day, Magistrate Judge Sheri Pym signed what appears to be an unmodified version of the order proposed by the government, ordering Apple to comply or to respond within five business days.
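The arithmetic behind the requested “brute force” attack is simple once the delays and the auto-erase are disabled. The guess rate below is purely an assumption for illustration; the real rate would depend on the hardware and how the passcodes are submitted.

```python
def brute_force_seconds(digits, guesses_per_second):
    """Worst-case time to try every numeric passcode of the given length,
    assuming the delay and auto-erase protections have been removed."""
    combinations = 10 ** digits
    return combinations / guesses_per_second

# At an assumed 100 guesses per second, a 4-digit passcode falls in at
# most 100 seconds; each additional digit multiplies the time by ten.
worst_case_4 = brute_force_seconds(4, 100)  # 100.0 seconds
worst_case_6 = brute_force_seconds(6, 100)  # 10000.0 seconds (under 3 hours)
```

The point is that the security of a short numeric passcode rests almost entirely on the delay and erase features; strip those away, as the order contemplates, and the passcode itself is a trivial barrier.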
Though Apple has not filed a formal response, CEO Tim Cook already has made waves by publicly stating that Apple will oppose the order. In a clear and well-written open letter, Cook explains that Apple made the deliberate choice not to build a backdoor into the iPhone because to do so would fatally undermine the encryption measures built in. He explains that the notion that Apple could create specialized software for Farook’s iPhone only is a myth, and that “[o]nce created, this technique could be used over and over again, on any number of devices. In the physical world, it would be the equivalent of a master key . . . .”
This has re-ignited the long-standing debate over the proper balance between individual privacy and security (and the debate over whether the two principles truly are opposed to one another). This is all to the good, but misses a key point: Judge Pym’s order, if it stands, has not only short-circuited this debate, it ignores the resolution that Congress already reached on the issue.
Indeed, a 1994 law known as the Communications Assistance for Law Enforcement Act (“CALEA”) appears to prohibit exactly what the government requested here. Though CALEA preserved the ability of law enforcement to execute wiretaps after changing technology made that more complicated than physically “tapping” a telephone line, it expressly does not require that information service providers or equipment manufacturers do anything to open their consumers to government searches. But instead of addressing whether that purpose-built law permits the type of onerous and far-reaching order that was granted here, both the government and the court relied only on the All Writs Act—the two-century-old catch-all statute that judges rely on when ordering parties to unlock doors or turn over security footage.
Though judges frequently must weigh in and issue binding decisions on fiercely contested matters of great importance, they rarely do so with so little explanation, or after such short consideration of the matter. Indeed, when the government sought an identical order this past October in federal court in Brooklyn, N.Y., Magistrate Judge James Orenstein asked for briefs from Apple, the government, and a group of privacy rights organizations and, four months later, has yet to issue an opinion. Judge Pym, by contrast, granted a similar order, without any stated justification, the same day it was sought.
An order that is so far-reaching, so under-explained, and so clearly legally incorrect is deeply concerning. And yet, but for Apple’s choice to publicize its opposition, this unjustified erosion of our privacy could have happened under the radar and without any way to un-ring the bell. Fortunately, we appear to have avoided that outcome, and we can hope that Apple’s briefing will give the court the additional legal authority—and the additional time—that it will need to revisit its ruling.
If you’ve ever let your kids sign into your Netflix or HBO Go account, or given your marketing department access to your Twitter feed, you may be committing a federal crime, depending on how the Ninth Circuit rules on a case argued before it just last month.
The case, United States v. Nosal, is the latest chapter in a series of cases in which federal prosecutors have used a thirty-year-old anti-hacking statute to turn seemingly routine business disputes into federal felony cases. The statute, known as the Computer Fraud and Abuse Act (CFAA), contains broad prohibitions on accessing a computer system “without authorization” or in a way that “exceeds authorized access.” Though intended to prevent malicious hacking and espionage, those prohibitions have repeatedly been applied to disgruntled former employees who logged back into company databases to access proprietary information after their termination, once their authorization to access those files had been revoked.
However, the Nosal case goes a step further, and a ruling in favor of the United States threatens to criminalize password sharing of all kinds. Nosal was an executive at the recruiting firm Korn Ferry International (KFI). After he left the firm, he obtained the help of several former colleagues to obtain protected KFI data to start a competing business. Although several of the charges against Nosal were thrown out in an earlier case, he was still prosecuted for accessing KFI files using his former assistant’s login information, which she had given him willingly.
According to prosecutors, Nosal’s former assistant was not authorized to give him access to KFI’s systems under the company’s computer usage policy, and so his use of that password was “without authorization” by the proper authorities. Upholding that argument could have a broad reach because so many password-protected services have prohibitions against password sharing in their user agreements, including Netflix, LinkedIn, Facebook, and HBO Go, to name a few. For that reason, a ruling that the CFAA prohibits password sharing when not authorized by these agreements could turn us all into criminals.
Following argument, this case is difficult to handicap. Although Judge McKeown seemed particularly concerned with the fact that Nosal clearly had engaged in wrongful conduct when he knew his authorization had been revoked, Chief Judge Thomas and Judge Reinhardt clearly recognized the scope of the issue at stake, and all three panel members were concerned by the government’s apparent lack of a limiting principle.
A ruling can be expected in the next few months. Until then, all we can do is hold our breath, and hope that the court ensures that the next time we share an account with the others in our household, we won’t end up living an episode of Orange is the New Black instead of just watching it.
The government has voluntarily dismissed its case against Jae Shik Kim, the South Korean businessman for whom Ifrah Law obtained a motion to suppress in federal court. In 2012, Mr. Kim was stopped by federal agents as he tried to board a plane to South Korea from LAX. The government seized his laptop and copied his hard drive based on suspicion that he had engaged in illegal activity years earlier. The government indicted Mr. Kim based on evidence it found on the laptop relating to past transactions.
Everyone who has been through a security checkpoint at an airport knows that the government has wide latitude to conduct certain warrantless searches at the border without any suspicion of illegal conduct. However, the U.S. District Court for the District of Columbia concurred with Ifrah Law’s argument that the government’s latitude is wide, but it is not unbounded. In order to conduct a non-routine search of electronics at the border, including copying a hard drive for the government to conduct a later search unbounded in time and scope, the government must have reasonable suspicion that the owner is presently engaged or will imminently engage in illegal activity. An ongoing investigation of suspected past criminal activity is not a sufficient basis on which to perform such a search. To use a border search for that purpose is an illegal attempt to circumvent the warrant requirements imposed by the Fourth Amendment to obtain evidence in an ongoing investigation, and any evidence obtained in that manner cannot be used to convict the defendant.
The government understood that when the court suppressed the evidence obtained from Mr. Kim’s laptop, it did not have a case on which it could obtain a conviction. Shortly after the court granted Ifrah Law’s motion to suppress, the government filed an interlocutory appeal of the court’s order. The government hoped that the Court of Appeals would reverse the order and allow the government to present evidence obtained from the laptop in order to secure a conviction.
This week, the government reversed course. The government not only dropped its appeal on the suppression issue, but moved to dismiss the indictment entirely, resulting in an event all too rare in the criminal justice system—a dismissal of all charges against the defendant. The government’s action implicitly acknowledges restrictions on its authority to conduct non-routine searches at the border when there is no suspicion of present criminal activity. It is a big win not only for our client, but for the ongoing effort to preserve our right to privacy.
Last July, we reported on United States v. Davis, an Eleventh Circuit decision in favor of privacy rights. In that case, a three-judge panel held that cell phone users have a reasonable expectation of privacy in their cell phone location data. If the government wants to collect the data, it must first obtain a probable-cause warrant, as required by the Fourth Amendment.
The groundbreaking decision seemed a clear victory for privacy rights, but the victory proved to be ephemeral. Last year, the en banc court agreed to revisit the question and, weeks ago, declared that subscribers do not have a reasonable expectation of privacy in their cell tower location data. As a result, the government can collect such data from third-party service providers if it shows reasonable grounds to believe the information is relevant and material to an ongoing criminal investigation.
In February 2010, defendant Quartavius Davis was convicted on multiple counts for robbery and weapons offenses. Davis appealed on grounds that the trial court admitted cell tower location data that the prosecution had obtained from a cell phone service provider in violation of Davis’ constitutional rights. An Eleventh Circuit panel agreed with Davis. Speaking for the court, Judge Sentelle explained that Davis had a reasonable expectation of privacy in the aggregation of data points reflecting his movement in public and private places. The government’s collection of the data was a warrantless “search” in violation of the Fourth Amendment.
To reach that decision, the panel leaned heavily on a 2012 Supreme Court case called United States v. Jones. In Jones, the Court announced that the government must have a probable-cause warrant before it can place a GPS tracking device on a suspect’s car and monitor his travel on public streets. The Court so held based on a trespass (or physical intrusion) theory. Absent probable cause, the government could not commandeer the suspect’s bumper for purposes of tracking his movement, even if each isolated movement was observable in public. Several Justices went further, suggesting that the same result should obtain even without a trespass. They hinted that location data might be protected because individuals have a reasonable expectation of privacy in the sequence of their movements over time. It was this persuasive but nonbinding privacy theory that guided the Eleventh Circuit’s panel decision.
On rehearing, the en banc court rejected the panel’s approach. The court noted that Davis could prevail only if he showed that a Fourth Amendment “search” occurred and that the search was unreasonable. He could show neither. To demonstrate a search, Davis had to establish a subjective expectation and objective expectation of privacy in his cell tower location data. But this case involved the collection of non-content cell tower data from a third-party provider who collected the information for legitimate business purposes: the records were not Davis’ to withhold. According to the court, Davis had no subjective expectation of privacy in the data because cell phone subscribers know (i) that when making a call, they must transmit their signal to a cell tower within range, (ii) that in doing so, they are disclosing to the provider their general location within a cell tower’s range, and (iii) that the provider keeps records of cell-tower usage.

But even if Davis could claim a subjective expectation of privacy, he could not show an objective expectation. In the court’s view, Supreme Court precedent made clear that customers do not have a reasonable expectation of privacy in non-content data voluntarily transmitted to third-party providers. Because there was no “search,” there could be no violation of Davis’ constitutional rights.
The en banc court explained further that Jones did nothing to undermine the third-party doctrine. For one, Jones involved a government trespass on private property. But the records in Davis were not obtained by means of a government trespass or even a search, so Jones did not control. Additionally, Jones involved location data that was first collected by the government in furtherance of a criminal investigation. By contrast, Davis involved location data that was first compiled by a service provider in the ordinary course of business. Simply put, “[t]he judicial system does not engage in monitoring or a search when it compels the production of preexisting documents from a witness.”
Photo: “LAX-International-checkin” by TimBray at en.wikipedia.
Developments in law are sluggish compared to the rapid pace of technological advancement, and courts must constantly apply old legal principles to technologies that were not contemplated when those laws were enacted. Recently, technology has been at the forefront of privacy rights debates, in light of revelations that the government has access to online communications and personal data storage and conducts extensive monitoring via technology. The Fourth Amendment of the United States Constitution establishes a privacy right by prohibiting unreasonable searches and seizures, but the extent to which that protection applies to technology is largely untested. Last week, a federal judge upheld this fundamental right when she ruled that our client’s rights had indeed been violated by an unreasonable search and seizure of a laptop computer conducted by the government.
U.S. District Court Judge Amy Berman Jackson granted a motion we filed on behalf of our client, South Korean businessman Jae Shik Kim, to suppress evidence seized from his laptop as he departed the country from Los Angeles International Airport in October 2012. The decision severely cripples the government’s case alleging that Kim conspired to sell aircraft technology illegally to Iran, in United States of America v. Jae Shik Kim, Karham Eng. Corp. (Crim. Action No. 13-0100 in the U.S. District Court for the District of Columbia).
The seizure of Mr. Kim’s laptop presents a unique challenge in an undeveloped area of law. The government claimed that because Mr. Kim’s laptop was seized at the border, it was free to search the computer without having any suspicion that he was presently engaged in criminal activity, the same way the government is free to search a piece of luggage or a cargo container. Yet anyone who owns a laptop, smartphone, tablet, or any other personal mobile device knows that the breadth and depth of private information stored within these gadgets are intimately tied to our identities and should be entitled to a heightened level of privacy.
Judge Jackson, who understood this aspect of modern mobile devices, wisely rejected the government’s argument that a computer is simply a “container” and that the government has an “unfettered right” to search it. In her memorandum opinion and order, she wrote, “…given the vast storage capacity of even the most basic laptops, and the capacity of computers to retain metadata and even deleted material, one cannot treat an electronic storage device like a handbag simply because you can put things in it and then carry it onto a plane.”
In her decision, Judge Jackson also repeatedly referred to “reasonableness” as the “touchstone for a warrantless search.” She keenly balanced the government’s imperative to protect our borders with individuals’ privacy rights. Judge Jackson found that the nature of the search — including that the government conducted the search as Kim departed the country (and not as he entered) to gather evidence in a pre-existing investigation, and that it made a copy of the entire contents of Kim’s laptop for an “unlimited duration and an examination of unlimited scope” — amounted to an invasion of privacy and an unreasonable search and seizure.
While the search of Mr. Kim was technically a border search, his laptop was not searched at the airport. Instead, it was transported 150 miles to San Diego and held until government agents were able to find and secure information they deemed valuable to their case. In fact, Mr. Kim was deemed so little of a threat to national security that he was permitted to board his flight. Judge Jackson noted that if the government’s asserted justification for the search were to stand, it “would mean that the border search doctrine has no borders.”
In this case, unfortunately, the government overstepped the boundaries established by the Fourth Amendment of the Constitution. However, the checks and balances imposed by that same foundational document corrected the error, and rightly so, as our laws continuously strive to adjust to the reality of rapidly evolving technology.
The IRS has unveiled a secure web application, the International Data Exchange Service (IDES), for cross-border data sharing. IDES will allow Foreign Financial Institutions (FFIs) and tax authorities from other countries to transmit financial data on U.S. taxpayers’ accounts, via an encrypted pathway, to the IRS.
The tool is part of the IRS’s effort to track U.S. taxpayer income globally. It is intended to assist FFIs and foreign tax authorities in their compliance with the U.S. Foreign Account Tax Compliance Act (FATCA). The act requires financial institutions to send the IRS financial information on American account holders or face a hefty 30 percent withholding penalty on all transfers that pass through the U.S. With such steep penalties at stake, FFIs and their home countries across the globe have agreed to comply with FATCA and submit account holder information, regardless of conflicts with their local laws. According to the IRS website, some 112 countries have signed intergovernmental agreements with the U.S., or otherwise reached agreements to comply, and more than 145,000 financial institutions have registered through the FATCA registration system.
IRS Commissioner John Koskinen called the portal “the start of a secure system of automated, standardized information exchanges.” According to the IRS, IDES will allow senders to encrypt data and it will also encrypt the data pathway. IDES reportedly works through most major web browsers.
It may sound efficient, and it may even be secure; but IDES also serves as a reminder of the contradiction between FATCA and the data privacy laws of many FATCA signatory countries. The conflict is part of why FATCA has been billed by many as an extraordinary extraterritorial law and an example of American overreach.
Countries like the United Kingdom, France, Italy, and Germany have data protection laws that restrict the disclosure or transfer of individuals’ personal information. To accommodate their own laws, these countries have entered agreements with the U.S. whereby FFIs report to their national tax authorities, and the tax authorities then share data with the IRS. (The agreements highlight the questionable value to these countries of their data protection laws, at least insofar as U.S. account holders are concerned, as they willingly sidestep their own policies to avoid U.S. withholding penalties.)
Meanwhile, as FATCA-compliant countries prepare to push data overseas to the U.S., the E.U. is publishing factsheets directed to its citizens indicating that data protection standards will not be part of agreements to improve trade relations with the U.S. The E.U. is also working on more stringent data protection rules for member countries to strengthen online privacy rights. Are the E.U. member countries speaking out of both sides of their mouths? Or are they attempting an impossible juggling act? Between the implementation of FATCA reporting and the growing concern over data privacy among FATCA signatory countries, these countries are bound either for intractable conflict or for the continued subordination of the rights of those citizens who are also designated U.S. taxpayers (an unfortunate result for dual citizens with minimal U.S. ties).
Regardless of the ultimate upshot of this conflict, U.S. taxpayers—including those living abroad—should take heed that FATCA reporting is underway. You should consider how to disclose any unreported global income before your bank does it for you.