Cyb3r Crim3

Dec 15

Invoking the Fifth Amendment and refusing to give up passwords

 The U.S. District Court for the District of Vermont recently held that you can invoke the Fifth Amendment privilege against self-incrimination and refuse to give up the password you have used to encrypt data files.

Here are the essential facts in United States v. Boucher, 2007 WL 4246473 (November 29, 2007):

On December 17, 2006, defendant Sebastien Boucher was arrested on a complaint charging him with transportation of child pornography in violation of 18 U.S.C. § 2252A(a)(1). At the time of his arrest government agents seized from him a laptop computer containing child pornography. The government has now determined that the relevant files are encrypted, password-protected, and inaccessible. The grand jury has subpoenaed Boucher to enter a password to allow access to the files on the computer. Boucher has moved to quash the subpoena on the grounds that it violates his Fifth Amendment right against self-incrimination.

The district court held that Boucher could invoke the Fifth Amendment and refuse to comply.

I did an earlier post about this general issue and, as I explained there, in order to claim the Fifth Amendment privilege the government must be (i) compelling you (ii) to give testimony that (iii) incriminates you.  (“Encrypted Hard Drives and the Constitution,” August 23, 2006). All three of these requirements have to be met or you cannot claim the Fifth Amendment privilege.  (And if you voluntarily comply by giving up your password, you can’t try to invoke the privilege later because a court will say that you were not compelled to do so – you did so voluntarily.)

In the earlier post or two I did on this issue, I was analyzing a scenario, which I believe has come up in a few instances, though not in any reported cases I’m familiar with, in which someone is stopped by Customs officers while entering or leaving the U.S.  In my scenario, which is the kind of circumstance I’ve heard about, the officers check the person’s laptop, find it’s encrypted and demand the password.  The question then becomes whether the laptop’s owner can (i) invoke the Fifth Amendment privilege or (ii) invoke Miranda.  As I’ve written before, to invoke Miranda you have to be in custody, and you arguably are not here.  And to be “compelled” under the Fifth Amendment, you have to be commanded to do something by judicial process or some analogous type of official coercion (like losing your job); you probably (?) don’t have that here, either.

But in the Boucher case, he had been subpoenaed by a federal grand jury which was ordering him to give up the password, so he was being compelled to do so.

As to the second and third requirements, the district court held that giving up the password was a testimonial, incriminating act:

Compelling Boucher to enter the password forces him to produce evidence that could be used to incriminate him. Producing the password, as if it were a key to a locked container, forces Boucher to produce the contents of his laptop. . . .

Entering a password into the computer implicitly communicates facts. By entering the password Boucher would be disclosing the fact that he knows the password and has control over the files on drive Z. The procedure is equivalent to asking Boucher, `Do you know the password to the laptop?’ . . .

The Supreme Court has held some acts of production are unprivileged such as providing fingerprints, blood samples, or voice recordings.  Production of such evidence gives no indication of a person's thoughts . . . because it is undeniable that a person possesses his own fingerprints, blood, and voice. Unlike the unprivileged production of such samples, it is not without question that Boucher possesses the password or has access to the files.

In distinguishing testimonial from non-testimonial acts, the Supreme Court has compared revealing the combination to a wall safe to surrendering the key to a strongbox. The combination conveys the contents of one's mind; the key does not and is therefore not testimonial. A password, like a combination, is in the suspect's mind, and is therefore testimonial and beyond the reach of the grand jury subpoena.

United States v. Boucher, supra.
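As a technical aside, the combination/strongbox analogy maps pretty cleanly onto how password-based encryption actually works:  the decryption key is derived from the password itself, so nothing stored on the drive will unlock the files without it.  Here is a minimal, illustrative Python sketch of that idea (it relies on the third-party cryptography package and hypothetical file names, and is not meant to depict the particular encryption product at issue in Boucher):

```python
# Minimal sketch of password-based file encryption (illustrative only).
# Requires the third-party "cryptography" package; file names are hypothetical.
import os
import base64
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

def key_from_password(password: str, salt: bytes) -> bytes:
    """Derive the symmetric key from the password; the key itself is never stored."""
    kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32, salt=salt,
                     iterations=200_000)
    return base64.urlsafe_b64encode(kdf.derive(password.encode()))

def encrypt_file(path: str, password: str) -> None:
    salt = os.urandom(16)  # random salt, stored alongside the ciphertext
    with open(path, "rb") as f:
        token = Fernet(key_from_password(password, salt)).encrypt(f.read())
    with open(path + ".enc", "wb") as f:
        f.write(salt + token)

def decrypt_file(path: str, password: str) -> bytes:
    with open(path, "rb") as f:
        blob = f.read()
    salt, token = blob[:16], blob[16:]
    # With the wrong password this raises cryptography.fernet.InvalidToken;
    # the ciphertext itself reveals nothing about the plaintext.
    return Fernet(key_from_password(password, salt)).decrypt(token)
```

The point of the sketch is simply that the password exists only in the suspect’s mind; what sits on the disk – a salt and ciphertext – is useless to anyone who does not have it, which is why the grand jury subpoenaed the act of entering the password rather than the drive itself.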

The government tried to get around the testimonial issue by offering “to restrict the entering of the password so that no one views or records the password.”  The court didn’t buy this scenario:

While this would prevent the government from knowing what the password is, it would not change the testimonial significance of the act of entering the password. Boucher would still be implicitly indicating that he knows the password and that he has access to the files. The contents of Boucher's mind would still be displayed, and therefore the testimonial nature does not change merely because no one else will discover the password.

United States v. Boucher, supra.

So Boucher wins and the court quashes the subpoena, which means it becomes null and void and cannot be enforced.

I applaud the court’s decision.  I’ve argued for this outcome in chapters I’ve written for a couple of books and in some short articles (and in discussions with my students).  I think this is absolutely the correct result, but I strongly suspect the government will appeal the decision.  Let’s hope the appellate court goes along with this court.

There is, again, a caveat:  Remember that Boucher had been served with a grand jury subpoena so there was no doubt he was being compelled to give up the password.  The airport scenario is much more difficult, because compulsion is not as obvious.  We won’t know whether anyone can take the Fifth Amendment in that context unless and until someone refuses to provide their password to Customs officers and winds up litigating that issue in court.

Dec 12

"Consent to Assume Online Presence"

I just ran across something I’d not seen before:  a law enforcement (FBI) form called “Consent to Assume Online Presence.”

Before I get to the form, what it does and why it’s new to me (anyway), I should explain what I mean by “consent.”

As I wrote in an earlier post, the Fourth Amendment creates a right to be free from “unreasonable searches and seizures.”  "Be Careful What You Consent To" (June 18, 2007).  That means, among other things, that law enforcement officers do not violate the Fourth Amendment when they conduct a search or seizure that is “reasonable.” 

As I also explained in that post, a search or seizure can be reasonable in either of two ways:  (i) if it is conducted pursuant to a warrant (search warrants for searching and seizing evidence, arrest warrants for seizing people); or (ii) if it is conducted pursuant to a valid exception to the warrant requirement.  As I explained in the earlier post, consent is an exception to the warrant requirement.  With consent, you essentially waive your Fourth Amendment rights and let law enforcement search and/or seize you or your property. 

Unlike many of the exceptions to the warrant requirement, consent does not require that the officer have probable cause to believe he or she will find evidence of criminal activity in the place(s) they want to search.  Probable cause is irrelevant here because you’re voluntarily giving up your Fourth Amendment rights.

To be valid, consent must be voluntary (so police can’t threaten to beat you until you consent) and it must be knowing (which means you have to know you had the right not to consent . . . but courts presume we all know that, so an officer doesn’t have to tell you that you have the right NOT to consent for your consent to be valid). 

Officers can rely on oral consent (they ask if you’ll consent to let them search, say, your car, you say “ok” and they proceed, having gotten your consent), but there’s really a preference in law enforcement for having the person sign a form.  Consent is, after all, a kind of contract:  You agree to give up your Fourth Amendment rights and that creates an agreement with law enforcement under which they will search the property for which you have given consent.  If officers rely on oral consent, the person can always say later that they didn’t consent at all or didn’t consent to the scope of the search that was conducted (i.e., the officers searched more than the person agreed to let them search).  So officers, especially federal officers, generally have the person sign a form, a “Consent to Search” form.

Enough background.  Let’s get to the “Consent to Assume Online Presence.”  As far as I can tell, the “Consent to Assume Online Presence” form has so far been mentioned in only two reported cases, both federal cases and both involving FBI investigations.

In United States v. Fazio, 2006 WL 1307614 (U.S. District Court for the Eastern District of Missouri, 2006), the FBI was conducting an online investigation of child pornography when they ran across an account (“salvatorejrf”) that was associated with the creation and posting of “four visual depictions of naked children.” United States v. Fazio, supra.  They traced the account to Salvatore Fazio and, after some more investigation, obtained a warrant to search his home.

FBI agents executed the warrant and seized computers, CDs, and other evidence.  One of the agents, Agent Ghiz, also wound up interviewing Fazio, who said “he was acting in an undercover capacity to identify missing and exploited children” and “admitted that he had downloaded images of children from the internet and uploaded them on other sites.” United States v. Fazio, supra.  According to the opinion, during the interview

Agent Ghiz did not accuse the defendant of lying nor did he use any psychological ploys to encourage Mr. Fazio to talk. . . . According to Agent Ghiz, [Fazio] never attempted to leave during the execution of the search warrant or the interview. Toward the conclusion of the interview, Agent Ghiz asked [Fazio] if he would be willing to continue to help in the investigation by allowing the FBI to use his online identity to access other sites to help investigate other child pornography crimes. [Fazio] was willing to cooperate and gave consent to the FBI's assuming his online presence. Government's Exhibit 8, a copy of a form entitled Consent to Assume Online Presence, was introduced at the evidentiary hearing. It was signed by [Fazio] in the presence of Agent Ghiz.

United States v. Fazio, supra.

The evidentiary hearing came when Fazio moved to suppress the evidence the agents had obtained. 

The other case is much more recent.  In United States v. Jones, 2007 WL 4224220 (U.S. District Court for the Southern District of Ohio, 2007) the FBI was conducting another investigation into the online distribution of child pornography.  In the course of the investigation, they ran across an account that was registered to Joseph Jones.  United States v. Jones, supra. They obtained a warrant to search his home and went there to execute it, but no one was home.  The agents and some local police officers then went looking for Jones, whom they eventually found talking to two other men at the end of a driveway in what seems to have been a rural area. United States v. Jones, supra.

An FBI agent, Agent White, explained to Jones why they were looking for him and, at his request, showed him the search warrant for the property they had identified earlier.  I won’t go into all the details, but Jones wound up consenting to their searching another location with which he also had ties. United States v. Jones, supra.  He gave his consent to the search of that property by, as I noted earlier, signing a “Consent to Search” form, a traditional form.  The FBI agent also had brought a “Consent to Assume Online Presence” form and Jones wound up signing that, too:

[Agent] White and [Jones] completed the `Consent To Assume Online Presence’ form. This form gave the FBI permission to take over [Jones’] `online presence’ on Internet sites related to child pornography so agents could discover other offenders. [Jones] filled in the spaces on the form calling for his online accounts, screen names, and passwords, and he signed and dated the form at the bottom.

United States v. Jones, supra.

I find the “Consent to Assume Online Presence” form very interesting, for a couple of reasons.  One is that it doesn’t act like a traditional consent in that it doesn’t conform to the usual dynamic of a Fourth Amendment search and seizure.

The usual dynamic, which goes back centuries, is that law enforcement officers get a warrant to search a place for specified evidence and seize the evidence when they find it.  They then go to the place and, if the owner is there, give the owner a copy of the warrant (which is their “ticket” to be there), conduct the search and seizure, give the owner an inventory of what they’ve taken and then leave.  This dynamic is structured, both spatially and temporally:  It happens “at” a specific real-space place (or places).  It has a beginning, a middle and an end.

The same thing is true of traditional consent searches.  So if I consent to let police search my car for, say, drugs, they can search the car for drugs.  The car is the “place,” so they can search that “place” and no other.  And the search will last only as long as it takes to reasonably search the car (they can’t routinely take it apart).  Here, too, the owner of the car is usually there and observes the search. 

Now look at the “Consent to Assume Online Presence” search, as I understand it:  Agents, or officers, obtain the consent to assume the person’s online identity, which they act on at some later time (assuming the identity on the spot not being practical, as we see in these two cases).  The “place” to be searched is, I gather, cyberspace, since the Consent to Assume Online Presence lets officers use the person’s online accounts to search cyberspace for other evidence, i.e., to find others involved in child pornography in the two cases described above.  So the “place” to be searched is apparently unbounded, and I’m wondering if the temporal dimension of the consent is pretty much the same.  I don’t see any mention of the “Consent to Assume Online Presence” form limiting the length of time in which the consenting person’s online accounts can be used for this purpose.  I suppose there’s a functional self-limitation, in that the consent expires when the accounts do or when they’re otherwise cancelled. 

But even with that limitation, this is a pretty amazingly unbounded consent to search. It’s basically an untethered consent to search:  As I said earlier, traditional consent searches have definite spatial and temporal limitations:  “Yes, officer, you can search my car for drugs” lets an officer search the car (only that car) until he either finds drugs or gives up after not finding drugs.  There, the search is tethered to the place being searched and is limited by the reasonable amount of time such a search would need.  Here, the consent is untethered in that it apparently lets officers use the consenting person’s accounts to conduct online investigations. 

I’m not even sure this is a consent to search, in the traditional sense.  In these two cases, law enforcement had already gained access to the persons’ online accounts, so there wasn’t going to be any additional, incremental invasion of their privacy.  Law enforcement officers had already been in their online accounts and seen what there was to see.  The consent in these cases picks up, as you can see from the facts summarized above, after the suspect has already been identified, after search warrants have been executed (and, in one case, a regular, spatial consent search conducted) and after the suspect has effectively been transformed into a defendant.  So that investigation is really over. 

This is a consent to investigate other, unrelated cases.  That’s why it doesn’t strike me as a traditional search.  It’s really a consent to assume someone’s identity to investigate crimes committed by persons other than the one consenting.  Now, there are cases in which law enforcement officers key in on a suspect, get the suspect to consent to letting them search property – a car, say – where they think they will find evidence of someone else’s being involved in the criminal activity they’re investigating the suspect for.  There the officers are getting consent to carry on an investigation that at least partially impacts on someone other than the person giving consent.  But there the consent search is a traditional consent search because it conforms to the dynamic I outlined above – it has defined spatial and temporal dimensions. 

I could ramble on more about that aspect of the “Consent to Assume Online Presence” searches (or whatever they are) but I won’t.  I’ll content myself with making one final point that seems interesting about them.

When I consent to a traditional search, I can take it back.  That is, I can revoke my consent.  So if the officer says, “Can I search your car for drugs?” and I (foolishly) say, “yes,” I can change my mind.  If, while the officer is searching, I say, “I’ve changed my mind – stop searching right now”, then the officer has to do just that.  If the officer has found drugs before I change my mind, then the officer can keep those drugs and they can be used in evidence against me because they were found legitimately, i.e., they were found while my consent was still in effect. 

How, I wonder, do you revoke your “Consent to Assume Online Presence”?  Do you email the agency to which you gave the consent, or call them or visit them or have your lawyer get in touch and say, “by the way, I changed my mind – quit using my account”? 

Dec 08

Law and the 3D Internet

I’ve read several news stories lately about how IBM and Linden Lab, along with a number of IT companies, are working on “avatar interoperability.”

“Avatar interoperability,” as you may already know, means that you or I or anyone could create an avatar on Second Life and use that same avatar in other virtual worlds, such as HiPiHi or World of Warcraft or Entropia. The premise is that having created my avatar – my virtual self – I could then use that avatar to travel seamlessly among the various virtual worlds.  In a sense, I guess, the interoperable avatar becomes my passport to participate in as many virtual worlds as I like; I would no longer be tethered to a specific virtual world by my limited, idiosyncratic avatar. 

Avatar interoperability seems to be one aspect of creating a new 3D Internet.  One article I read said the ultimate goal is to replace our current, text-based Internet with “a galaxy of connected virtual worlds.”  So instead of experiencing cyberspace as a set of linked, sequential “pages,” each of which features a combination of text, graphics and sound, I’d log on as my virtual self and experience cyberspace as a truly virtual place.  Or, perhaps more accurately, I would experience cyberspace as a linked series of virtual places, just as I experience the real-world as a linked series of geographically-situated places.

Cyberspace would become an immersive, credible pseudo 3D reality – the evolved twenty-first-century analogue of the hardware-based virtual reality people experimented with fifteen years or so ago . . . the tethered-to-machinery virtual reality depicted in 1990s movies like The Lawnmower Man and Disclosure.  That older kind of virtual reality was seen as something you used for a particular purpose – to play a game or access data. 

The new 3D Internet featuring interoperable avatars is intended to make cyberspace a more immersive experience.  To explain what I mean, let me borrow an analogy from privacy law.  Our approach to privacy law in the United States is often described as sectoral; that is, instead of having general, all-encompassing privacy laws, we have discrete privacy laws each of which targets a distinct area of our lives.  So we have medical privacy laws and law enforcement search privacy laws and wiretap privacy laws and so on. 

I think our experience of cyberspace is currently sectoral, in this same sense:  I go online, I check my email, I check some news sites, I might do a little shopping on some shopping sites, then I might watch some videos or check out some music or drop into Second Life to socialize a bit or schedule flights or do any of the many, many other things we all do online. I think my doing this is a sectoral activity because I move from discrete website to discrete website.  I may log in multiple times, using different login information.  I go to each site for a specific, distinct purpose.  I think, then, that the custom of referring to websites as “web pages” accurately captures the way I currently experience cyberspace:  it really is much more analogous to browsing the pages in a book than it is to how we experience life in the real, physical world.  In the real-world I do go to specific places (work, grocery, dry cleaner’s, restaurants, hotels, dog groomer, book store, mall, etc.) for distinct purposes.  But I’m “in” the real-world the whole time.  I don’t need to reconfigure my reality to move from discrete place to discrete place; the experience is seamless.

So that seems to be the goal behind the development of the 3D Internet.  It seems to be intended to promote a more immersive, holistic experience of cyberspace while, at the same time, making it easier and more realistic to conduct work, commerce, education and other activities online.  Avatars, currency and the other incidents of our online lives would all become seamlessly portable.

Personally, I really like the idea.  I think it would make cyberspace much easier and much more interesting to use.  It would also really give us the sense of “being” in another place when we’re online.

When I first heard about avatar interoperability, I wondered about what I guess you’d call the cultural compatibility of migrating avatars.  It seemed, for example, incongruous to imagine a World of Warcraft warrior coming into Second Life or vice versa (a Second Life winged sprite going into WoW).  And that’s just one example.  I had basically the same reaction when I thought of other kinds of avatars leaving their respective environments and entering new and culturally very different worlds.

But then, as I thought about it, I realized that’s really what we do in the real world.  We don’t have the radical differences in physical appearance and abilities (or inclinations) you see among avatars, but we definitely have distinct cultural differences.  We may still have a way to go in some real-world instances (I’m personally not keen on going to Saudi Arabia, for example), but we’ve come a long way from where we were centuries ago when xenophobia was the norm. 

And the ostensible cultural (and physical) differences among avatars will presumably be mitigated by the fact that an avatar is only a guise a human being uses to interact online.  Since it seems humanity as a whole is becoming increasingly cosmopolitan and tolerant, the presumably superficial, virtual differences among avatars may not generate notable cultural incompatibilities as they move into the galaxy of interconnected virtual worlds.

I also wondered about what this might mean for law online.  Currently, as you may know, the general operating assumption is that each virtual world polices itself.  So Linden Lab deals with crimes and other “legal” issues in Second Life, and the other virtual worlds do the same.  There have been, as I’ve noted in other posts, some attempts to apply real world laws to conduct occurring in virtual worlds.  Earlier this year, the Belgian police investigated a claim of virtual rape on Second Life; I don’t know what happened with the investigation.  As I’ve written elsewhere, U.S. law currently would not consider whatever occurs online to be a type of rape, because U.S. law defines rape as a purely real-world physical assault.  Online rape cannot qualify as a physical assault and therefore cannot be prosecuted under U.S. law, even though it can inflict emotional injury.  U.S. criminal law, anyway, does not really address emotional injury (outside harassment and stalking).

That, though, is a bit of a digression.  My general point is that so far law generally treats online communities as separate, self-governing places.  Second Life and other virtual worlds functionally have a status analogous to that of the eighteenth- and nineteenth-century colonies operated by commercial entities like the Hudson’s Bay Company or the British East India Company.  That is, they are a “place” the population of which is under the governing control of a private commercial entity.  As I, and others, have written, this makes a great deal of sense as long as each of these virtual worlds remains a separate, impermeable entity.  As long as each remains a discrete entity, and as long as we only inhabit cyberspace by choice, we in effect consent to have the company that owns and operates a virtual world settle disputes and otherwise act as law-maker and law-enforcer in that virtual realm.

Things may become more complicated once avatars have the ability to migrate out of their virtual worlds of origin and into other virtual worlds and into a general cyberspace commons.  We will have to decide if we want to continue the private, sectoral approach to law we now use for the inhabitants of discrete virtual worlds (so that, for example, if my Second Life avatar went into WoW she would become subject to the laws of WoW) or change that approach somehow. 

It seems to me the most reasonable approach, at least until we have enough experience with this evolved 3D Internet to come up with a better alternative, is to continue to treat discrete virtual worlds as individual countries, each of which has its own law.  This works quite well in our real, physical world:  When I go to Italy, I submit myself to Italian law; when I go to Brazil I submit myself to Brazilian law and so on.  At some point we might decide to adopt a more universal, more homogeneous set of laws that would generally govern conduct in cyberspace.  Individual enclaves could then enforce special, supplemental laws to protect interests they deemed uniquely important.

One of my cyberspace law students did a presentation in class this week in which she told us about the British law firms that have opened up offices and, I believe, practices in Second Life.  That may be just the beginning.  Virtual law may become a routine aspect of the 3D Internet.

Dec 08

Criminal liability for an unsecured wireless network?

I just received this email (from a source that will remain anonymous):

Good afternoon,

I have a wireless router (WiFi) which for technical reasons I won’t bore you with, has no encryption. If a third party were to access the internet via my unencrypted router and then commit an illegal act, could I be held liable? I’m not sure if this question in anyway broaches your area of expertise and if not please excuse the intrusion. I’ve asked some technical colleagues but they were not able to answer.


It’s a very good question.  I’ve actually argued in several law review articles that people who do not secure their systems, wireless or otherwise, should be held liable – to some extent – when criminals use the networks they’ve left open to victimize others.

In those articles, as in nearly everything I do, I was analyzing the permissibility of using criminal liability to encourage people to secure their computer systems . . . which I think is the best way to respond to cybercrime.  Since I’m not sure if the person who sent me this email is asking about criminal liability, about civil liability or about both, I’ll talk about the potential for both, but focus primarily on criminal liability.

There are essentially two ways in which one person (John Doe) can be held liable for crimes committed solely by another person – Jane Smith, we’ll say (with my apologies to any and all Jane Smiths who read this).  One is that there is a specific provision of the law – a statute or ordinance or other legal rule – which holds someone in Doe’s position (operating an unsecured wireless network, say) liable for crimes Smith commits. 

I’m not aware of any laws that currently hold people who run unsecured wireless networks liable for crimes the commission of which involves exploiting the insecurity of those networks.  I seem to recall reading an article a while back about a town that had adopted an ordinance banning the operation of unsecured wireless networks, but I can’t find the article now.  If such an ordinance, or such a law, existed, it would in effect create a free-standing criminal offense.  That is, it would make it a crime (presumably a small crime, a misdemeanor, say) to operate an unsecured network. 

That type of law goes to imposing liability on the person who violated it, which, in our hypothetical, would be John Doe, who left his wireless network unsecured.  That approach, of course, simply holds Doe liable for what Doe, himself, did (or did not do).  It doesn’t hold him criminally liable for what someone else was able to do because he did not secure his wireless network.  And unless such a law explicitly created a civil cause of action for people who were victimized by cybercriminals (our hypothetical Jane Smith), it would not give those victims any basis for suing Doe, either.  Some statutes, like the federal RICO statute, do create a civil cause of action for people who’ve been victimized by a crime (racketeering, under the RICO provision) but absent some specific provision to the contrary, statutes like this only let a person who’s been victimized sue the individual(s) who actually victimized them (Jane Smith). 

The other way, as I wrote in a post elsewhere, relies on the general doctrines under which one person (John Doe) can be held liable for the crimes another person (Jane Smith) commits:  one is accomplice liability and the other is a type of co-conspirator liability.  While these principles are used primarily to impose criminal liability, they could probably (note the qualifier) be used to impose civil liability under provisions like the RICO statute that let victims sue to recover damages from their victimizers.

So let’s consider whether John Doe could be held liable under either of those principles.  Accomplice liability applies to those who “aid and abet” the commission of a crime.  So, if I know my next-door neighbor is going to rob the bank where I work and I give him the combination to the bank vault, intending to assist his commission of the robbery, I can be held liable as an accomplice. 

The requirements for such liability are, basically, that I (i) did something to assist in or encourage the commission of the crime and (ii) I did that with the purpose of promoting or encouraging the commission of a crime. In my example above, I hypothetically provide the aspiring robber with the combination to the bank vault for the express purpose of helping him rob the bank.  The law says that when I do this, I become criminally liable for the crime – the robbery – he actually commits.  And the neat thing about accomplice liability, as far as prosecutors are concerned, is that I in effect step into the shoes of the robber.  That is, I can be held criminally liable for aiding the commission of the crime someone else committed in the same way as, and to the same extent as, the one who actually committed it.  In this hypothetical, my conduct establishes my liability as an accomplice to the bank robbery, so I can be convicted of bank robbery.

I don’t see how accomplice liability could be used to hold John Doe criminally liable for cybercrimes Jane Smith commits by exploiting his unsecured wireless network. Yes, he did in effect assist – aid and abet – the commission of those cybercrimes by leaving his network unsecured.  I am assuming, though, that he did not leave it unsecured in order to assist the commission of those crimes – that, in other words, it was not his purpose to aid and abet them.  Courts generally require that one charged as an accomplice have acted with the purpose of promoting the commission of the target crimes (the ones Jane Smith hypothetically commits), though a few have said you can be an accomplice if you knowingly aid and abet a crime. 

If we took that approach here, John Doe could be held liable for aiding and abetting Jane Smith’s cybercrimes if he knew she was using his unsecured wireless network and did nothing to prevent that.  It would not be enough, for the purpose of imposing accomplice liability, if he knew it was possible someone could use his network to commit cybercrimes; he’d have to know that Jane Smith was using it or was about to use it for that specific purpose.  I don’t see that standard’s applying to our hypothetical John Doe – he was, at most, reckless in leaving the network unsecured, maybe just negligent in doing so.  (As I’ve written before, recklessness means you consciously disregard a known risk that cybercriminals will exploit your unsecured network to commit crimes, while negligence means that an average, reasonable person would have known this was a possibility and would have secured the network).

The other possibility is, as I wrote in that earlier post, what is called Pinkerton liability (because it was first used in a prosecution against a man named Pinkerton).  To hold someone liable under this principle, the prosecution must show that they (John Doe) entered into a conspiracy with another person (Jane Smith) the purpose of which was the commission of crimes (cybercrimes, here).  The rationale for Pinkerton liability is that a criminal conspiracy is a type of contract, and all those who enter into the contract become liable for crimes their fellow co-conspirators commit.

Mr. Pinkerton (Daniel, I believe) was convicted of bootlegging crimes his brother (Walter, I think) committed while Daniel was in jail.  The government’s theory was that the brothers had entered into a conspiracy to bootleg before Daniel went to jail, the conspiracy continued while he was in jail, so he was liable for the crimes Walter committed.  I don’t see how this could apply to our John Doe-Jane Smith hypothetical because there’s absolutely no evidence that Doe entered into a criminal conspiracy with Smith.  He presumably doesn’t even know she exists and/or doesn’t know anything about her plans to commit cybercrimes by making use of his conveniently unsecured network.

In my earlier post, which was about a civil lawsuit, I talked about how these principles could, or could not, be used to hold someone civilly liable for crimes.  I’ll refer you to that post if you’re interested in that topic.

Bottom line?  I suspect (and this is just speculation, not legal advice) that it would be very difficult, if not impossible, to hold someone who left their wireless network unsecured criminally liable if an unknown cybercriminal used the vulnerable network to commit crimes.
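
As a practical matter, of course, the simplest protection is just to turn the router’s encryption on.  For readers who, like my correspondent, aren’t sure whether their own network is open, here is a rough, illustrative way to check from a nearby laptop.  It’s only a sketch, not security (or legal) advice, and it assumes a Linux machine with NetworkManager’s nmcli tool installed:

```python
# Illustrative sketch: list nearby Wi-Fi networks and flag the open ones.
# Assumes a Linux system where NetworkManager's "nmcli" command is available.
import subprocess

def open_networks():
    """Return the SSIDs of nearby networks that report no security at all."""
    result = subprocess.run(
        ["nmcli", "-t", "-f", "SSID,SECURITY", "device", "wifi", "list"],
        capture_output=True, text=True, check=True)
    flagged = []
    for line in result.stdout.splitlines():
        ssid, _, security = line.partition(":")
        if ssid and security.strip() in ("", "--"):  # no WEP/WPA/WPA2 reported
            flagged.append(ssid)
    return flagged

if __name__ == "__main__":
    for ssid in open_networks():
        print(f"Open (unencrypted) network: {ssid}")
```

If your own SSID shows up in that list, anyone within radio range can use your connection, which is exactly the situation my correspondent is worried about.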

Dec 02

Defrauding a machine

A recent decision from the United Kingdom held that it is possible to defraud a machine, as well as a human being.

The case is Renault UK Limited v. FleetPro Technical Services Limited, Russell Thoms (High Court of Justice Queen’s Bench Division) (November 23, 2007),  [2007] EWHC 2541.  According to the opinion, FleetPro Technical Services operated a program with Renault UK that let members of the British Air Line Pilots Association (BALPA) buy new Renaults at a discount. In the ten months the program was in effect, FleetPro sent 217 orders through the system, only 3 of which were submitted by members of BALPA. According to the opinion, Russell Thoms, FleetPro’s director and employee, placed the other 214 orders and passed on the discounts to brokers who sold the cars to members of the public. 

Renault discovered what had been going on and sued FleetPro and Thoms for fraud. At trial, the defense counsel argued that there was no fraud because there was, in effect, no fraudulent representation made by one human being to another.  The court described the relevant facts as follows:

What happened when orders produced by Mr. Thoms and sent by e-mail as attachments to Mr. Johnstone [the Renault fleet sales executive who handled the orders] were received was that he opened them, printed them off and gave them to Fiona Burrows to input into a computer system information including the BALPA FON [the code used to process orders]. The evidence was that no human mind was brought to bear at the Importer's end on the information put into the computer system by Fiona Burrows. No human being at the Importer consciously received or evaluated the specific piece of information in respect of each relevant order that it was said to fall within the terms of the BALPA Scheme.  . . . [T]he last human brain in contact with the claim that a particular order fell within the terms of the BALPA Scheme was that of Fiona Burrows at the Dealer. The point of principle which thus arises is whether it is possible in law to find a person liable in deceit if the fraudulent misrepresentation alleged was made not to a human being, but to a machine.

Renault UK Limited v. FleetPro Technical Services Limited, supra.

Judge Richard Seymour held that it is, in fact, possible to hold someone liable when a fraudulent misrepresentation is made to a machine:

I see no objection . . . to holding that a fraudulent misrepresentation can be made to a machine acting on behalf of the claimant, rather than to an individual, if the machine is set up to process certain information in a particular way in which it would not process information about the material transaction if the correct information were given. For the purposes of the present action, . . . a misrepresentation was made to the Importer when the Importer's computer was told that it should process a particular transaction as one to which the discounts for which the BALPA Scheme provided applied, when that was not in fact correct.

Renault UK Limited v. FleetPro Technical Services Limited, supra.
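
To see why the court framed the question that way, it helps to remember that the “representation” in a system like this is consumed entirely by software.  Here is a toy Python sketch of an automated order pipeline that applies a scheme discount whenever the right code is present, with no human review; the field names, code and discount rate are invented for illustration and are not taken from the case:

```python
# Toy illustration of an automated order pipeline of the kind described in
# Renault UK v. FleetPro. The code, field names and discount rate are invented.
BALPA_FON = "BALPA-1234"    # hypothetical scheme code
SCHEME_DISCOUNT = 0.15      # hypothetical discount rate

def process_order(order: dict) -> dict:
    """Price an order; no human ever checks whether the buyer really qualifies."""
    price = order["list_price"]
    if order.get("fon") == BALPA_FON:
        # The system simply believes the code: entering it *is* the implicit
        # representation that the order falls within the scheme.
        price *= (1 - SCHEME_DISCOUNT)
    return {"buyer": order["buyer"], "invoice_price": round(price, 2)}

# An order for an ordinary retail customer that nonetheless carries the scheme
# code gets the discount automatically: the misrepresentation "made to a machine."
print(process_order({"buyer": "retail customer",
                     "list_price": 20_000.0,
                     "fon": BALPA_FON}))
```

The defense’s point was that no human being ever reads the false claim; the judge’s answer was that the claim is false all the same, and the machine was set up to act on it.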

After I read this decision, I did some research to see if I could find any reported American cases addressing the issue.  I could not. 

I’m not sure why.  Maybe the argument simply has not been raised (which, of course, means that it may be, and some U.S. court will have to decide whether to follow this approach or not). 

Or maybe the reason it hasn’t come up has to do with the way American statutes, or at least American criminal statutes, go about defining the use of a computer to defraud. Basically, the approach these statutes take is to make it a crime to access a computer “or any part thereof for the purpose of: . . . executing any scheme or artifice to defraud”.   Idaho Code § 18-2202.  You see very similar language in many state computer crime statutes, and the basic federal computer crime statute has language that is analogous.  See 18 U.S. Code § 1030(a)(4) (crime to knowingly “and with intent to defraud” access a computer without authorization or by exceeding authorized access and thereby further “the intended fraud”). 

So maybe the issue of defrauding a machine hasn’t arisen in U.S. criminal law because our statutes are essentially tool statutes.  That is, they criminalize using a computer as a tool to execute a “scheme or artifice to defraud.” 

In the U.K. case, Renault was claiming that Thoms had defrauded it by submitting false purchase orders for discounted cars.  The defense’s position was that to recover Renault would have to show that Thoms had intentionally made a false statement of fact directly to Renault, intending that Renault rely on the representation to its detriment.  And that is the classic dynamic of fraud. Historically, fraudsters lied directly to their victims to induce them to part with money or other valuables.  That is why, as I’ve mentioned before, fraud was originally known as “larceny by trick:” The fraudster in effect stole property from the victim by convincing him to hand it over to the fraudster in the belief he would profit by doing so.  Here, the distortion of fact is direct and immediate; the victim hands over the property because he believes what the perpetrator has told (or written) him.

Many American fraud statutes predicate their definition of fraud crimes on executing a “scheme or artifice to defraud,” language that comes from the federal mail fraud statute, 18 U.S. Code § 1341. Section 1341, which dates back to 1872, makes it a crime to send anything through the mail for the purpose of executing a “scheme or artifice to defraud.”  It was enacted in response to activity that is functionally analogous to online fraud:  After the Civil War, con artists were using the U.S. mails to defraud many people remotely and anonymously.  The sponsor of the legislation said it was needed “to prevent the frauds which are mostly gotten up in the large cities . . . by thieves, forgers, and rapscallions generally, for the purpose of deceiving and fleecing the innocent people in the country.”  McNally v. United States, 483 U.S. 350 (1987).  So § 1341 is really a fraud statute; it merely utilizes the “use of the mail to execute a scheme or artifice to defraud” language as a way to let the federal government step in and prosecute people who are committing what is really a garden variety state crime:  fraud.

But as I said, many modern state computer crime statutes also use the “scheme or artifice to defraud” terminology.  To some extent, that may simply be an artifact, a result of the influence federal criminal law has on the states; we have grown accustomed to phrasing fraud provisions in terms of executing schemes or artifices to defraud, so that language migrated to computer crime statutes.

Does that language eliminate the problem the U.K. court dealt with?  Does it eliminate the need to consider whether it is possible to defraud a machine by predicating the crime on using a computer to execute a scheme to defraud instead of making it a crime to make false representations directly to another person for the purpose of inducing them to part with their property? 

On the one hand, it might.  Under computer crime statutes modeled upon the mail fraud statute, the crime is committed as soon as the perpetrator makes any use of a computer for the purposes of completing a scheme to defraud a human being.  Courts have long held that you can be charged with violating the federal mail fraud statute as soon as you deposit fraudulent material into the mail; it’s not necessary that the material actually have reached the victim, been read by the victim and induced the victim to give the perpetrator her property. 

I think the same approach applies to computer crime statutes based on the mail fraud statute:  the computer fraud offense is committed as soon as the perpetrator makes use of a computer with the intent of furthering his goal of defrauding a human being out of their property.  Under that approach, it doesn’t really matter whether a person was actually defrauded, or whether a computer was defrauded.  It’s enough that the perpetrator used a computer in an effort to advance his goal of defrauding someone.

I suspect this accounts for the fact that I, anyway, can’t find any U.S. cases addressing the issue of whether or not it is possible to defraud a computer.  It’s an issue that may not be relevant in criminal fraud cases.  It may, however, arise in civil fraud cases where, I believe, you would actually have to prove that “someone” was defrauded out of their property by the defendant’s actions.
 

Nov 21

The Stop Terrorist and Military Hoaxes Act of 2004

I’d somehow overlooked this one.  This statute, which was added to the federal code in December of 2004 by § 6702(a) of Title VI of Public Law # 108-458, criminalizes disseminating hoax information about possible terrorist or military attacks.

The statute is codified as 18 U.S. Code § 1038.  The statute has two different prohibitions, the first of which appears in § 1038(a)(1).  It provides as follows:

Whoever engages in any conduct with intent to convey false or misleading information under circumstances where such information may reasonably be believed and where such information indicates that an activity has taken, is taking, or will take place that would constitute a violation of chapter 2, 10, 11B, 39, 40, 44, 111, or 113B of this title, section 236 of the Atomic Energy Act of 1954 (42 U.S.C. 2284), or section 46502, the second sentence of section 46504, section 46505(b)(3) or (c), section 46506 if homicide or attempted homicide is involved, or section 60123(b) of title 49, shall--

(A) be fined under this title or imprisoned not more than 5 years, or both;

(B) if serious bodily injury results, be fined under this title or imprisoned not more than 20 years, or both; and

(C) if death results, be fined under this title or imprisoned for any number of years up to life, or both.

The other substantive prohibition appears in § 1038(a)(2).  It provides as follows:

Any person who makes a false statement, with intent to convey false or misleading information, about the death, injury, capture, or disappearance of a member of the Armed Forces of the United States during a war or armed conflict in which the United States is engaged--

(A) shall be fined under this title, imprisoned not more than 5 years, or both;

(B) if serious bodily injury results, shall be fined under this title, imprisoned not more than 20 years, or both; and

(C) if death results, shall be fined under this title, imprisoned for any number of years or for life, or both.

Amazingly (to me, anyway), someone has been convicted of violating this statute.  Actually, a number of people have been convicted of violating it, several for anthrax hoaxes.  I’m more interested in the case I’m going to talk about because it involves publishing a story online, not sending a letter claiming to have deposited anthrax in a government facility.

According to the district court’s opinion in United States v. Brahm, 2007 WL 3111774 (U.S. District Court for the District of New Jersey), in September of 2006 Jake Brahm, who lived in Wauwatosa, Wisconsin, posted this message on the www.4chan.org site:

On Sunday, October 22, 2006, there will be seven “dirty” explosive devices detonated in seven different U.S. cities: Miami, New York City, Atlanta, Seattle, Houston, Oakland, and Cleveland. The death toll will approach 100,000 from the initial blast and countless other fatalities will later occur as a result from radio active fallout.

The bombs themselves will be delivered via trucks. These trucks will pull up to stadiums hosting NFL games in each respective city. All stadiums to be targeted are open air arenas excluding Atlanta's Georgia dome, the only enclosed stadium to be hit. Due to the open air the radiological fallout will destroy those not killed in the initial explosion. The explosions will be near simultaneous with the city specifically chosen in different time zones to allow for multiple attacks at the same time.

The 22nd of October will mark the final day of Ramadan as it will fall in Mecca, Al-Qaeda will automatically be blamed for the attacks later through Al-Jazeera, Osama Bin Laden will issue a video message claiming responsibility for what he dubs “America's Hiroshima”. In the aftermath civil wars will erupt across the world both in the Middle East and within the United States. Global economies will screech to a halt and general chaos will rule.

The opinion says the post “became a news story of some national prominence” in the days leading up to October 22, even though the authorities did not take it seriously.

Federal agents eventually tracked Brahm down and he was indicted for violating §§ 1038(a)(1) and (a)(2).  He moved to dismiss the indictment, arguing, among other things, that the phrase “may reasonably be believed” in § 1038(a)(1) had to be construed in light of his target audience. United States v. Brahm, supra.

Brahm claimed that the term “reasonably” had to be interpreted in a way that took into account whether the “audience addressed by false or misleading information would believe it to be true.” United States v. Brahm, supra.  He argued for a subjective audience-sensitive interpretation of “reasonably,” so he could be held liable only if the government could prove that the readers of the www.4chan.org website would have believed his statement. The prosecution argued that it should be interpreted to permit a conviction if, under the circumstances, a reasonable person would have believed the posting. The district court reviewed the use of “reasonableness” in other threat statutes, and agreed with the government. United States v. Brahm, supra.

At the time, Brahm was a 20-year-old grocery clerk living with his parents. According to an FBI agent, Brahm thought it would be funny to put out the story because he thought it was so preposterous no one would believe it.  As to that, the agent said, "`It's a hoax. It's nonsense, not a credible threat. . . . But in a post 9-11 world, you take these threats seriously. It's almost like making a threat going onto an airplane -- you just don't do it’”.

The district court denied Mr. Brahm’s motion to dismiss the indictment on October 19 of this year, which wasn’t very long ago.  I can’t find any reported developments in the case since then.  He faces up to 5 years in prison on the federal charge.  He was extradited to New Jersey for prosecution there.

The district court did not consider whether Brahm’s posting – the joke – was protected by the First Amendment, though it noted that the First Amendment protects humorous speech, even when it’s false.  So that may be an issue he will raise in the future. 

In its opinion, the court cited the famous War of the Worlds broadcast as a hoax that “might not qualify as something within the reasonable belief required by the statute,” but  “would represent the kind of intentionally false information anticipated by section 1038.” United States v. Brahm, supra. It noted that “a fictitious broadcast of a terrorist attack on a major city with the goal of making a . . . political or artistic statement, causes greater concern, as . . . expressive, protected speech . . . might be affected” by the statute. United States v. Brahm, supra.  In a footnote, the court pointed out that the War of the Worlds broadcast and 1983 and 1994 broadcasts dealing with fictional terrorist attacks provided disclaimers intended to alert the audience to their fictional nature. United States v. Brahm, supra. 

The disclaimers in the 1983 and 1994 broadcasts were repeated throughout the shows. The War of the Worlds disclaimers did not begin until that show had been on the air for 40 minutes.  They came after a number of New York police officers invaded the control room of the studio from which the broadcast was originating.  The officers seemed to think they should arrest someone, but weren’t sure who to arrest or for what.  According to one story I’ve read, Welles expected to be arrested immediately after the broadcast ended, but the police finally gave up and left because they still couldn’t figure out what to charge him or anyone else involved in the broadcast with.

It looks like Welles could have been prosecuted under § 1038, if it had existed at the time. I’m not sure anything in the radio play would violate § 1038(a)(1), but I believe members of the armed forces die fighting the armed Martian invaders in the “War of the Worlds” radio script, so that would probably violate § 1038(a)(2).  He could try to defend by pointing out the inherent incredibility of the broadcast, i.e., by arguing that no one would be silly enough to really believe we were under attack by Martians – but a lot of people were silly enough to believe just that in 1938. 

I’m not sure where I come out on this statute.  I can definitely see the utility of being able to prosecute people who pull off anthrax and similar hoaxes.  There, though, the conduct is far less ambiguous:  They send letters or other messages claiming to have planted anthrax – or bubonic plague or bombs or the horrors of your choice – somewhere it can do a great deal of damage.  Conduct like that is a threat, just as it’s a threat for John to tell Jane he’s going to kill her. 

Our law has no difficulty criminalizing that kind of speech because what is being criminalized is not speech, as such – it’s the act of using speech to terrorize people (and perhaps cause consequential injuries and damage, as in the anthrax hoax cases).  The speech at issue in the Brahm case is very different:  He did not send a threat directly to anyone.  He probably did go beyond what Orson Welles did because the “War of the Worlds” broadcast was purely expressive speech – art, in other words.  Brahm claimed that what he posted was a joke – a satire analogous to the stories posted on The Onion, say.  If it’s satire, it should not be criminalized.

The problem Brahm faces is to a great extent one of context:  In the direct, anthrax kind of hoax, the hoaxer sends the functional equivalent of a threat to his victims.  The prosecution’s theory in the Brahm case seems to be that he perpetrated an indirect kind of hoax by putting his joke online, where it could be read by anyone.  Context comes into play in deciding whether a post like Brahm’s will reasonably be understood by those who read it as (i) satire or (ii) a credible threat report.  If he’d posted his joke on an obviously satiric site like The Onion, would that take it out of the category of a criminal hoax under § 1038?  Or are there some things we just cannot joke about at the beginning of the twenty-first century?
 

Nov 19

Fraudulently obtaining a website?

When someone is hired to create a website for a business, does so, turns it over to the business but isn’t paid for their work, who owns the site . . . the website creator or the business that contracted for it?

That’s the issue the New Mexico Supreme Court addressed in State v. Kirby, 141 N.M. 838, 161 P.3d 883 (2007).  You can find the opinion on the New Mexico Supreme Court’s website, if you’re interested.

The facts in the case are pretty simple.  Here is how the court explained what happened:

[Richard] Kirby owned a small business, Global Exchange Holding, LLC. . . . Kirby hired Loren Collett, a sole proprietor operating under the name Starvation Graphics Company, to design and develop a website. The two entered into a website design contract. As part of the contract, Kirby agreed to pay Collett $1,890.00, plus tax, for his services.

Collett . . . designed the web pages and incorporated them into the website, but he was never paid. When Kirby changed the password and locked Collett out from the website, Kirby was charged with one count of fraud over $250 but less than $2,500, a fourth degree felony. New Mexico Statutes Annotated § 30-16-6. The criminal complaint alleged that Kirby took `a Website Design belonging to Loren Collett, by means of fraudulent conduct, practices, or representations.’

State v. Kirby, supra.  After a jury convicted him of the charge, Kirby appealed to the Court of Appeals, which upheld his conviction.  He then appealed to the Supreme Court. State v. Kirby, supra.

Kirby claimed he couldn’t be convicted of defrauding Collett out of the website because the website belonged to him, Kirby.  Kirby argued, in effect, that he could not defraud himself.  The New Mexico Supreme Court therefore had to decide who owned the site.

The Court of Appeals found that “because a `website includes the web pages,’ and Kirby never paid Collett for the web pages as contractually agreed, ownership remained with someone other than Kirby”, i.e., remained with Collett. The Supreme Court agreed with “that reasoning as far as it goes,” but decided “further analysis may assist the bar and the public in understanding this . . . novel area of the law.” State v. Kirby, supra.

We first turn our attention to the legal document governing the agreement between Collett and Kirby the `Website Design Contract.’  Collett was engaged `for the specific project of developing . . . a World Wide Website to be installed on the client's web space on a web hosting service's computer.’ Thus, the end product of Collett's work was the website, and the client, Kirby, owned the web space. Kirby was to `select a web hosting service’ which would allow Collett access to the website. Collett was to develop the website from content supplied by Kirby.

While the contract did not state who owned the website, it did specify ownership of the copyright to the web pages. `Copyright to the finished assembled work of web pages’ was owned by Collett, and upon final payment Kirby would be `assigned rights to use as a website the design, graphics, and text contained in the finished assembled website.’ Collett reserved the right to remove web pages from the Internet until final payment was made. Thus, the contract makes clear that Collett was, and would remain, the owner of the copyright to the web pages making up the website. Upon payment, Kirby would receive a kind of license to use the website.

State v. Kirby, supra.

Kirby conceded the site “`contained copyright material that belonged to Loren Collett’” but claimed Collett's ownership of the copyright was “separate from ownership of the website. Thus, because the contract only specified ownership of the copyright interest in the web pages and not ownership of the website,” Kirby argued that “from the very beginning he and not Collett owned the website.” State v. Kirby, supra.

Kirby argues that because he owned certain elements that are part of a website and help make it functional, he was the website owner regardless of who owned the copyright to the web pages. Kirby purchased a `domain name’ for the website and had contracted with an internet hosting service for `storage’ of that website. This same hosting service was the platform from which the website was to be displayed on the internet. Kirby, as the owner of the domain name and storage service, also owned the password that enabled him to `admit or exclude’ other people from the website. Kirby argues that his control of the password, ownership of the domain name, and contract with an internet hosting service provider gave him ownership of the web site.

State v. Kirby, supra.

The New Mexico Supreme Court disagreed:

While a domain name, service provider, and password are all necessary components of a website, none of them rises to the importance of the web pages that provide content to the website. A domain name is also referred to as a domain address. A domain address is similar to a street address, `in that it is through this domain address that Internet users find one another.’ . . . But it is nothing more than an address. If a company owned a domain name . . . but had no web pages to display, then upon the address being typed into a computer, only a blank page would appear. A blank web page is of little use to any business enterprise. It is the information to be displayed on that web page that creates substance and value. Similarly, the service provider only stores that information on the web pages and relays that communication to others. Having a service provider meant little to Kirby if the web pages were blank. Thus, the predominant part of a website is clearly the web page that gives it life.

State v. Kirby, supra.
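
As a technical aside, the court’s street-address analogy tracks the way domain names actually behave: a domain name does nothing more than resolve to a numeric address, and whatever substance a visitor sees has to come from the web pages the hosting service serves up.  Here is a minimal sketch of that resolution step in Python (the domain shown is just a placeholder):

    import socket

    # Resolving a domain name yields only an address, nothing more; whether
    # anything meaningful appears at that address depends entirely on the
    # web pages the hosting service actually serves.
    domain = "example.com"  # placeholder domain
    address = socket.gethostbyname(domain)
    print(f"{domain} resolves to {address}")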

The Supreme Court held that Collett owned the website:  “the contract between Kirby and Collett clearly recognized Collett's legal ownership of the copyright to the web pages. Payment was to be the pivotal point in their legal relationship, and even then Kirby was only to receive a license to use those pages. The contract never transferred any interest in the web page design or ownership of the website to Kirby. As the owner of the copyright, Collett was the owner of the website, and any change was conditioned upon payment.”  It therefore upheld Kirby’s conviction.
 

Nov 17

Causing suicide (again)

A few months ago I did a post about whether someone could be held criminally liable for causing another person to commit suicide.  “Suicide” (May 29, 2007).  In that post I primarily focused on whether it would be possible to hold a person criminally liable if the prosecution could show that this was their purpose, i.e., that they WANTED the other person to kill themselves.  That is, of course, a logical possibility.  As the drafters of the Model Penal Code, a template for criminal statutes, said of this scenario, “it’s a pretty clever way to commit murder.”

Here I want to talk about a different, but related issue:  Whether someone can be held criminally liable for another’s suicide if it was not their purpose to cause the victim to kill herself, but their conduct in fact contributed to the victim’s doing so.  I’m prompted to write this post by what I’ve read recently of the Megan Meier case, the tragic story of the 13-year-old Missouri girl who committed suicide after being the victim of a MySpace hoax.

Here’s a summary of the facts of that case as they appeared in the St. Charles Journal:  Thirteen-year-old Megan Meier lived with her parents in a Missouri suburb.  She had attention deficit disorder, battled depression and had “talked about suicide” when she was in the third grade.  She had been heavy but was losing weight and was about to get her braces off.  She had just started a new school, and was on their volleyball team.  She had also recently, according to her mother, ended an off-again, on-again friendship with a girl who lived down the street. 

Megan had a MySpace page, with the permission of her parents.  She was contacted by a sixteen-year-old boy named Josh, who said he wanted to add her as a friend. Megan’s mother let her add him, and for six weeks Megan corresponded with Josh.  Josh seems to have told her she was pretty and clearly gave her the impression that he liked her . . . at least, until one day when he sent her an email telling her he didn’t know if he wanted to be her friend because he’d heard she wasn’t “very nice” to her friends.  He seems to have followed that up with other, not-very-nice emails.  And then, according to the news story I cited above, Megan began to get messages from others, saying she was fat and a ***.  After this went on for a bit, Megan hanged herself in her closet and died the next day.  Her father went on her MySpace account and saw what he thought was the final message from Josh – a really nasty message (according to her father) that ended with the writer telling Megan that the “world would be a better place without her.”

Megan’s parents tried to email Josh after she died, but his MySpace account had been deleted.  Six weeks later, a neighbor met with them and told them there was no Josh, that he and his MySpace page were created by adults, the parents of the girl with whom Megan had ended her friendship.  According to the police report in the case, which is quoted in the story cited above, this girl’s mother and a “temporary employee” created the MySpace page so the mother could “find out” what Megan was saying about her daughter.  It gets really murky from there, as to what was going on in the Megan-Josh correspondence, but it seems that others – including other children who knew Megan – had passwords to the Josh account and posted messages there.  When the police interviewed this woman, she said she believed the Josh incident contributed to Megan’s suicide, but did not feel all that guilty because she found out Megan had tried to commit suicide before.  (Actually, she seems to have talked about it in the third grade, as I noted above).

Megan’s parents and others in the community seem to have wanted the police to charge the adults who created and operated the Josh MySpace page with some type of crime for their role in Megan’s suicide.  There are several reasons why they can’t be charged even if, as seems reasonable, their conduct was a factor resulting in Megan’s decision to take her own life.

One factor is that they clearly never intended for that to happen.  I can’t begin to figure out what these adults thought they were doing (never mind the children involved), but whatever it was, they didn’t set out to kill Megan.  They were, at most, reckless or negligent in embarking on a course of conduct that resulted in tragedy.

Every state makes it a crime to cause another person’s death recklessly or negligently.  The difference between the two types of homicide goes to the actor’s awareness of the risk that the result will occur.

  • You act recklessly if you consciously disregard a substantial and unjustifiable risk that the result (death) will follow from your conduct.  So to be liable for recklessly causing Megan’s death, these adults would have had to have been aware, at some level, that what they were doing could cause her to kill herself.  If they were actually aware that this was a possibility and persisted in sending emails that could cause this result, then they could be held liable for reckless homicide.
  • You act negligently if a reasonable person (an objective standard) would have realized that your conduct created a risk that Megan could commit suicide.  Here, the law looks not at what the allegedly culpable person actually knew, but at what a reasonable person, the average American adult, would have realized in this situation.  So if a reasonable person would have realized that conducting the Josh hoax created a risk that Megan would kill herself, those responsible could be held liable for negligent homicide.

I sympathize with Megan’s parents and I cannot comprehend why adults had nothing better to do than to play such a cruel trick on a child, but however stupid and cruel their conduct was, those responsible for the Josh hoax cannot be held liable under either of these standards.  To explain why, I’m going to use a very recent Minnesota case:  Jasperson v. Anoka-Hennepin Independent School District (Minnesota Court of Appeals, Case # A06-1904, decided October 30, 2007).

It’s a very sad case.  The opinion says, “J.S. was a 13-year-old eighth-grade student . . . . who lived with his mother and father and his older brother.”  He’d been having trouble in school:  He had received failing grades in his classes, but was bringing his grades up.  He was being bullied by two boys who attended a different school (a school for students with “behavioral problems”).  According to the opinion, they grabbed his bike, told J.S. they knew where he lived and which room in the house was his and threatened to kill him.  His mother met with Assistant Principal Ploeger at J.S.’ school and told him all this.  The AP said he’d see that the boys were charged with trespassing on school property, but told her she’d have to talk to the school liaison police officer Wise about protecting J.S. from the boys.  The AP advised J.S. to leave school by a different route or leave with friends.  Wise met with J.S. and his mother, determined that no crime had been committed and suggested he walk with friends and avoid the boys.

A week or two later, J.S. got F’s in his mid-quarter grades for all his classes except for Physical Education.  According to one student, J.S.’ science teacher Lande told him he was the “dumbest student” the teacher had ever had, and that he was “going nowhere.”  Lande later said he had been angry and may have spoken louder than he intended.  The observing student said J.S. cried afterward.  The family discussed J.S.’ grades that night, and he told them it was his teachers’ fault.  He also said he couldn’t concentrate because the two boys were hanging around his school.  J.S.’ mother told him she’d talk to the school about getting him a new science teacher and about dealing with the two boys.

The next day, J.S.’ father, mother and brother left for work and school before he did, which wasn’t unusual.  He often rode to school with friends.  When J.S.’ brother came home that afternoon, he found J.S. dead on the living room floor, with a suicide note beside him.  J.S. had shot himself.  In the note he said his life was going nowhere so he didn’t need to live, left his love for his family and his dog and said he’d miss them.

The parents brought a civil suit against the school, claiming the school’s negligence caused J.S.’ suicide.  The trial court held that J.S.’ suicide was not foreseeable, so the parents didn’t have a claim.  The Minnesota Court of Appeals agreed:

[T]he record does not support assertions that any school personnel knew or had reason to know that J.S. continued to have problems with the two boys [or] that J.S.'s failing grades were caused by his terror of the two boys. . . . Mere speculation or conjecture is not sufficient. . . . The district court did not err in concluding that given the evidence, Ploeger, Wise and Lande could not have foreseen any harm to J.S.

Jasperson v. Anoka-Hennepin Independent School District, supra.

Both courts also found that the evidence did not establish that Ploeger’s and Lande’s conduct caused J.S. to commit suicide:

Appellant argues that the school district failed to protect J.S. from a known danger; was in a position to end J.S.'s “terror” and should have anticipated that its failure would likely result in J.S.'s harm; and was in a far superior position to end the threats from the two boys than J.S. or his parents. But the record does not show that anyone at the school had any knowledge that J.S. was subject to harm from the two. The record does not suggest any change in J.S.'s behavior indicating that he was experiencing terror, and none of J.S.'s friends alerted school personnel that J.S. was in fear. There is no evidence that J.S.'s suicide was foreseeable and therefore could have been prevented.

Appellant relies on the fact that J.S.'s midterm grades and a suicide note containing the same words Lande allegedly used were found at his side as evidence that Lande's remarks were a substantial factor in bringing about J.S.'s suicide. But “a mere possibility of causation is not enough.” The district court did not err in concluding that, as a matter of law, the required causal connection between the conduct of school personnel and this tragic suicide is not established by evidence in the record.

Jasperson v. Anoka-Hennepin Independent School District, supra.


Nov 17

Privacy and anonymity

As you may have read, Donald Kerr, a deputy director of national intelligence, said last week Americans need to re-think their conception of privacy.  He said privacy will no longer mean anonymity but will, instead, mean that government and the private sector will have to take appropriate steps to “safeguard people’s private communications and financial information.”

I’m not sure if I agree with him or not, so I thought I’d use this post to sort through my reactions to Kerr’s comments.

On the one hand, I don’t know that privacy has ever been synonymous with anonymity. My neighbors know who I am and where I live, as do the local police in my small Ohio suburb. I’m far from being anonymous to those around me.  That, though, doesn’t mean I’ve lost my privacy.  What I do in my home is still private, at least to the extent I pull the drapes and otherwise take some basic steps to shield what I’m doing from public view. 

Maybe that’s what he means by equating privacy and anonymity . . . the notion that what I do in public areas is not private, at least not unless I take steps to conceal my identity and my activities.  But that’s not a new notion – it’s common sense.  As far as I know, no one has ever tried to argue that what they do in public (walk, drive, shop, go to a movie, rollerblade, whatever) is private under our Constitution, whether because they subjectively believed it was private or on some other theory.

Anonymity is not really an aspect of privacy under our Fourth Amendment law, except insofar as remaining anonymous makes it difficult or impossible for someone to tell what you’ve been doing in an area where you can readily be observed by others.  Our Fourth Amendment law has traditionally been about the privacy of enclaves – your home, your office, your car, phone booths (when they still existed), and other physical (and perhaps intangible) places.  One court, at least, has assumed that a password-protected website is a private enclave, analogous to these real-world enclaves.  The Fourth Amendment also protects the containers (luggage, safes, lockers, sealed mail, DVD’s and other storage media) we use to store and to transport things. It is intended to prevent the police from intruding into real and conceptual spaces as to which we have manifested a reasonable expectation of privacy. 

I don’t see where anonymity comes in to the traditional Fourth Amendment conception of privacy.  The police can see John Doe walking down the street carrying a bag and really want to open that bag because they think he’s transporting drugs, but they can’t open it, or make him open it, just because they know who he is (John Doe).  His lack of anonymity has no impact on the legitimate Fourth Amendment expectation of privacy he has in the contents of that bag.  The fact that he’s carrying a bag is not private because anyone can see him carrying it.  The contents of the bag, though, are private unless and to the extent that the bag is transparent; as long as it’s opaque, its contents are and will remain private.

Anonymity, as such, is actually the focus of a different constitutional provision:  the First Amendment.  The Supreme Court has interpreted the First Amendment as establishing the rights both to speak anonymously and to be able to preserve the anonymity of one’s associations.  The Court has found that protecting anonymity in this context furthers free speech, political advocacy and other important values.

I think what Mr. Kerr is really talking about is an issue I’ve written on before:  whether we have a Fourth Amendment expectation of privacy in the information we share with third-parties, such as businesses, Internet and telephone service providers and financial institutions.  I think what he’s referring to is what I believe to be a widespread, implicit assumption among Americans, anyway:  the notion that what we do online stays safely and obscurely online.  I may be wrong, but I think we unconsciously tend to assume that the data we generate while online – the traffic data our ISP collects while we’re surfing the web and the transactional data companies collect from us when we make purchases or otherwise conduct business online – is entre nous . . . is just between me and my ISP or me and my bank or me and Amazon.

We know at some level that we are sharing that data with an uncertain number of anonymous individuals -- the employees of ISPs, banks, businesses, etc. – but we don’t tend to correlate sharing information with them with sharing that information with law enforcement.  We essentially assume we are making a limited disclosure of information:  I inevitably share data with my ISP as an aspect of my surfing the web or putting this post on my blog.  I know I’m sharing information with the ISP, but I don’t assume that by doing that I’m also sharing information with law enforcement. 

The problem with that assumption is, as I’ve noted before, that the Supreme Court has held that data I share with third-parties like banks or ISPs is completely outside the protections of the Fourth Amendment.  According to the Court, I cannot reasonably expect that information I share with others, even with legitimate entities, is private.  This means that under the Fourth Amendment, law enforcement officers do not have to obtain a search warrant to get that information. 

(There are statutory requirements, but they both go beyond what the current interpretation of the Fourth Amendment requires and often provide less protection than that Amendment would.  They often allow officers to obtain third-party data without obtaining a search warrant; a subpoena or court order may suffice.)

So how does all of this relate to Mr. Kerr’s comments about anonymity and privacy?  Well, at one point he said that we have historically equated privacy with anonymity but “in our interconnected and wireless world, anonymity - or the appearance of anonymity - is quickly becoming a thing of the past”.  Actually, I’d tend to argue the opposite:  I think cyberspace actually gives us more opportunities to remain anonymous than we’ve ever had. 

Think about a pre-wired world.  Think about the America of a hundred or a hundred and fifty years ago.  Most Americans in this era, like most people throughout the millennia preceding that era, lived in small towns or villages.  They pretty much knew everyone in the town or village where they lived.  They traveled very little, both in terms of frequency and distance, so they lived their lives almost exclusively in that town or village.  One consequence of this is that everyone in the town or village tended to know pretty much everything about everyone else.  They knew who was having an affair with whom.  They knew who was buying opium-based products at the general store and getting high.  They knew who the drunks were and who the wife- and child-beaters were.  They might not know everything that went on in each other’s homes behind closed doors, but they knew pretty much everything else. 

The lives of those who lived in cities were probably not subject to quite so much scrutiny from their neighbors.  My impression, though, is that city-dwellers during this and earlier eras tended to reside in a specific neighborhood, do their shopping in that neighborhood and generally socialize with people in that neighborhood.  So much of what I said about town and village dwellers also applied to those who lived in cities.  City dwellers probably had the possibility of going into other parts of the city to carry out their affairs, buy their opium products or otherwise engage in conduct they’d prefer not be widely known in the neighborhood where they resided.

My point is that there wasn’t much anonymity back then, or in all the years before then. 

In modern America, we have much more control over the information we share with others.  Our neighbors may still be able to pick up a lot of information about our habits and predilections, good and bad, but if we’re concerned about that we have alternatives:  We can seclude ourselves in a remote area and commute to work, live in a high-rise and ignore our neighbors or take other means to reduce the amount of information that leaks out to those with whom we share living space.  We may still buy our groceries and medications and clothing and other necessities from a face-to-face clerk (or not, as I’ll note below), but we can conceal our identity from the clerk by paying with cash.  We can try to obscure patterns in our purchases of necessities by patronizing various stores, in the hopes of interacting with different clerks.  We can also rely on the fact that in today’s increasingly-urbanized, increasingly-jaded world clerks may not pay attention to us and our purchases because they don’t care who we are.  We’re no longer joint components in a small, geographically-circumscribed social unit.

We can also take information about our purchasing habits and financial transactions out of local circulation by making purchases and conducting financial and other transactions online.  This brings us back to Mr. Kerr’s comments.  I may be wrong, but I don’t think we assume we’re anonymous when we conduct our affairs online.  I do think we believe we are enhancing the privacy of our activities by removing them from the geographical context in which we conduct our lives.  Online, I deal with strangers, with people who do not know Susan Brenner and, by inference, do not care what Susan Brenner is buying or selling or otherwise doing online.

Empirically, that’s a very reasonable assumption.  The problem is that it founders on a legal and practical Catch-22:  We conduct our online transactions with strangers who don’t know us and, by extension, don’t care about what we do.  We therefore assume we have overcome the memory problem, the fact that historically those with whom we dealt face-to-face could, and would, remember us and our transactions.  This brings us to the first, practical component of the Catch-22.  Although we overcome the memory problem, we confront another problem:  the technology we use to conduct our online transactions records every aspect of those transactions.  We replace the uncertain memory of nosy clerks with the disinterested but irresistibly accurate transcription of machines.

The second, legal component of the Catch-22 is the issue I noted above – the recorded data we share with these third parties is not private under the Fourth Amendment and can, therefore, be shared with law enforcement.  So in one sense we have more privacy as we move our activities online, and in another sense we have less.

I’m not sure what Mr. Kerr meant when he said that privacy now means that government and the private sector will have to take appropriate steps to “safeguard people’s private communications and financial information.”  Does he mean we should revise our view of the Fourth Amendment to bring this information within its protections?  Or does he mean we should enact statutes designed to accord a measure of privacy to this data by setting limits on how it can be shared with law enforcement?


Nov 03

Virtual child pornography -- the product?

Some may find this posting offensive or disturbing. 

I’m extrapolating a scenario that might ensue from the current state of U.S. law on child pornography.  I’m not arguing for this or any other result, just working through the logical possibilities, to point out what could be a consequence of current U.S. law.

As I explained in an earlier post (“Child pornography:  real and pseudo,” September 1, 2006), the U.S. Supreme Court has held:

  • that the First Amendment does not preclude U.S. law’s criminalizing “real” child pornography, i.e., child pornography the creation of which involves the use of real children;
  • that the First Amendment does bar U.S. law from criminalizing virtual child pornography, i.e., child pornography the creation of which does not involve the use of real children but is, instead, based on computer-generated images (CGIs).

The Supreme Court held that the First Amendment does not prevent U.S. law from criminalizing real child pornography, even though it qualifies as speech under the First Amendment, because its creation involves the victimization of children, both physically and emotionally.  Real child pornography is essentially a product and a record of a crime, or crimes, against children.  The Court also held that the First Amendment does prevent U.S. law from criminalizing virtual child pornography because it is speech and because no real person is “harmed” in its creation; unlike real child pornography, virtual child pornography is fantasy, not recorded reality.

We were covering all this in my cyberspace law class, and I asked the students to think about where virtual child pornography’s protected status under the First Amendment might take us once computer technology evolves so it is possible to create virtual child pornography (or adult pornography or movies or any visual media) that are practically indistinguishable from the real thing.  That is, the question is what might happen with virtual child pornography once the average person cannot tell it from child pornography the creation of which involved the use of real children. We came up with what I think are some interesting scenarios. 

For one thing, it would not be illegal to possess this indistinguishable-from-the-real-thing virtual child pornography.  That, alone, has several consequences.  It could create real difficulties for law enforcement officers who are trying to find real child pornography and prosecute those who create, distribute and possess it.  If a regular person cannot tell real from virtual child pornography by simply looking at a movie or other instance of child pornography, how are police involved in investigations supposed to know what they’re dealing with? 

Another consequence could be that it becomes functionally impossible, or at least very difficult, for prosecutors to prove that someone being prosecuted for possessing real child pornography did so knowingly.  The defendant could claim he or she believed the material he possessed was virtual, not real, child pornography.  Since the prosecution has to prove the defendant “knowingly” possessed real child pornography beyond a reasonable doubt, it would presumably be difficult for prosecutors to win in cases like these (assuming the jurors followed the law and were not swayed by personal distaste for the defendant’s preferences in pornography).

Since U.S. jurisdictions cannot criminalize the possession and distribution of virtual child pornography, we might see the emergence of websites selling virtual child pornography.  It would be perfectly legal to sell the stuff in the U.S., to buy it or to possess it.  Virtual child pornography would essentially have the same status as any other kind of fictive material; it is, after all, a fantasy, just as slasher movies or violent video games are fantasy. 

To protect themselves and their clients, these hypothetical businesses might watermark the virtual child pornography they sold, to provide an easy way of proving that the stuff was virtual, not real.  We talked a bit about this in my class.  The watermark would have to be something that could withstand scrutiny and that would be valid, clearly credible evidence that child pornography was virtual, not real.  Those who created and sold the stuff might be able to charge more if their watermark hit a gold standard – if it basically provided a guarantee that those who bought their product could not be successfully prosecuted for possessing child pornography. 

The international repercussions of all this might be interesting; some countries, such as Germany, criminalize the creation, possession and distribution of all child pornography, real and virtual.  The criminalization of all child pornography is the default standard under the Council of Europe’s Convention on Cybercrime, but the Convention allows parties to opt out of criminalizing virtual child pornography.  So countries that take the same approach as the U.S. could also become purveyors of virtual child pornography.  We could see a world in which virtual child pornography was illegal in some countries and for sale in others.

In analyzing where all of this might go, my students and I realized there could be one, really depressing implication of the commercialization of virtual child pornography:  Real child pornography would probably become particularly valuable, because it would be the real thing.   

 

Nov 01

Bonnie & Clyde and Cybercrime

I spoke at a conference in Italy last week; after I spoke I got a question from a member of the audience.  His question, which went to how and why we in the U.S. federalize certain crimes, made me think about a part of our history in a way I had not done before.

His question basically went to why we make it a federal crime for someone to hack the computer system of a private company.  He was coming from the very logical premise that hacking the system of a private company is an attack on private property, so he wondered why that should become a federal crime. 

In answering him, I recapped the history of the increasing federalization of crime in the U.S., something I’m going to repeat here.  After I do that, I want to offer a few thoughts on what that history may, or may not, suggest about how we can go about dealing with cybercrime.

Until early in the twentieth century, crime in the U.S. was proscribed and prosecuted almost exclusively at the state level.  There were some federal crimes, such as counterfeiting and treason, but they tended to be the exception.  The drafters of the U.S. Constitution intended that crime would be handled primarily at the local level; a few years ago, an American Bar Association study found that this was their intention, the theory being that it makes more sense for crime to be punished as close as possible to the local community. 

This theory derives both from history (that’s the way it had always been done) and from the assumption that handling crime at the local level was the best way to deter crime and encourage people to follow the law.  The notion is, essentially, that the closer we are to the process of prosecuting and punishing criminal behavior, the more likely we are to take the process seriously and see it as something that could affect us.

That was the way things worked until about the second decade of the twentieth century, when automobiles began to become more common in the U.S.  As they became more common, motor vehicles began to influence how certain crimes were being committed. 

One crime, arguably a “new” crime at the time, was automobile theft:  Someone could steal a car in, say, Ohio and drive it into Indiana or Illinois or Texas . . . which would pretty much defeat the Ohio police’s efforts to find the car and prosecute the thief.  In other words, car thieves pretty quickly figured out that they could exploit state borders to their advantage; they figured out that each state only had jurisdiction to investigate crime within its borders.  There really was no effective way for, say, Ohio officers to pursue a car thief into Indiana and then Illinois and however many other states he took the car into. 

This concept of using the mobility of motor vehicles to elude apprehension and prosecution then migrated into other areas, such as kidnapping and bank robbery.  As those of us who’ve seen the movie “Bonnie & Clyde” know, the 1930’s saw the rise of bank-robbing gangs who used high-speed automobiles to rob a bank in one state and then flee to another, thereby avoiding the police.  Indeed, according to one book I read, Clyde went so far as to send Henry Ford a letter, thanking Ford for making such fast cars; Clyde assured Ford that he always preferred using Fords in his car thefts, both because they were so fast and because they were so common it was easy to hide out in them.

The question the Italian gentleman asked me last week made me think about all of this a little more deeply.  In answering him, I realized something I had already known, but hadn’t really thought about:  What American bank robbers and kidnappers and car thieves were doing 70 and 80 years ago is functionally indistinguishable from what cybercriminals are doing today.  Both use(d) then-current technology to exploit the fact that states (whether discrete states in a federal system like ours, or nation-states in our global system) have jurisdiction only within their own borders.

In the law, there are two fundamental principles governing a sovereign state’s exercise of jurisdiction in criminal cases:  One is that a sovereign state has jurisdiction to adopt law criminalizing conduct occurring within its territory and to sanction those who violate that law.  The other principle is that one state (Ohio or France) cannot enforce its laws inside the territory of another state (Indiana or Italy).  So, criminals – who generally tend to be among the first adopters of new technologies -- can use those principles against sovereign states by committing a crime in one state and then fleeing to another state or, for cybercrime, by remotely committing a crime in another state.

Okay, none of this is new.  What I realized last week goes not to the fact that all of this has happened and is happening.  It goes, instead, to the strategy the U.S. used to deal with the motor vehicle as criminal tool issue.  It occurred to me that the strategy might, or might not, be an instructive example for how we could deal with cybercrime.

The way the U.S. dealt with the motor vehicle as criminal tool issue was to enact federal laws that made it a crime to, for example, steal a motor vehicle in one state and take it across state lines for the purpose of evading apprehension and to kidnap someone in one state and take them across state lines for the same purpose.  In other words, the U.S.’ approach was to move to a supra-state system of laws, a national set of laws.  This meant that the criminals could no longer find a safe haven:  Federal authorities could chase them from Ohio to Indiana to Illinois and all the way to Texas, if necessary.

I thought of this last week both because it was relevant in answering the gentleman’s question and because it perhaps suggests something about how we need to approach cybercrime.

Like the 1930’s bank robbers, cybercriminals are using new technology to exploit the jurisdictional limitations of specific sovereigns to their advantage.  Everyone recognizes that.  The question is, what do we do about this?

The Council of Europe’s Convention on Cybercrime attempts to deal with the problem by encouraging countries to adopt standardized, consistent laws that (i) criminalize certain activities (such as hacking, child pornography, etc.) and (ii) facilitate law enforcement cooperation with officers from other countries.  The goal, in effect, is to achieve a voluntary, lateral solution to the problem.  The notion is that if the various nation-states all have a core of consistent laws criminalizing behaviors and specifying what police can do in collecting and sharing information about cybercrimes, then this will make it much more difficult for cybercriminals to exploit the parochial jurisdictional capacities of the various nation-states. 

I like that solution because it is voluntary, and because it is lateral.  As we all know, cyberspace favors the lateral, rather than the hierarchical, organization of human behavior.  So this seems a flexible, adaptive solution. My only concerns with it are that (i) it may take a very long time to achieve this consensus and (ii) it may prove difficult to achieve consensus in certain areas, because national laws are bound up with local culture.  We in the U.S. are already outliers because of our First Amendment; it means that we can, indeed must, host content that is criminalized elsewhere, a circumstance that will not change unless and until we eliminate that aspect of the First Amendment (which is highly unlikely).

What about the alternative? . . . What about a solution analogous to what the U.S. did with motor vehicle-facilitated crime about 80 years ago?  Could we somehow adopt a set of supranational laws targeting cybercrime and use that to defeat cybercriminals’ ability to evade and frustrate the application of national laws?

As a federal system, the U.S. was in a perfect position to move to the next level – to shift to a higher-tier, system-wide set of laws targeting motor vehicle-facilitated crime.  We do not have a global federal system or anything comparable.  We therefore do not have a structure which could be used to implement a similar approach, however logical it might be.

This brings me back to the comments I made in my last post, “A law of cyberspace.”  On the one hand, a global, over-arching network of cybercrime laws, with an accompanying, equally-global enforcement system, would clearly be the optimum way to address the exploitation of jurisdictional limits by cybercriminals.  The first problem with that strategy is that we do not have an institution capable of achieving this; the United Nations is the only possible candidate for the task but this really does not come within its charter.  The other problem is, as I noted in my last post, that nation-states tend to be possessive of their territory and jealously protective of their own, idiosyncratic laws.  I think it will be a long, long time before a global solution to the cybercrime jurisdictional law problems will be a possibility, assuming, of course, that such a solution is desirable. 
 

Oct 16

A law of cyberspace

I’ve heard many people complain about the fact that law in cyberspace is a mess:  different laws in different countries (and different laws in parts of countries, as with the U.S. states), laws that conflict, laws that seem to make no sense when they are applied to online activity, etc., etc.  And they’re right.

We’re clearly in some kind of transitional phase, at least as far as the law goes.  Law has always been territorial and parochial:  England had its unique set of laws, which applied only within the territory controlled by England; Japan had its unique set of laws, which applied only within the territory Japan controlled; Egypt had its unique set of laws, which applied only within the territory Egypt controlled, and so on. 

This wasn’t a problem as long as people were pretty much parochial, i.e., pretty much stayed in the country in which they were born.  It began to become a problem a hundred years or so ago, when international travel began to become easier and therefore more common. One downside of international travel’s becoming easier has been that the “bad guys,” the criminals, can commit crimes in Country A and then go to another country in an effort to avoid being apprehended and punished by Country A. 

As I’ve written elsewhere, there is and has been a core of consistency in the laws of the various nation-states, because every state has to protect certain interests (e.g., property, matrimony, parentage) and prohibit certain types of conduct (e.g., murder, rape, theft).  States may well go about protecting these interests and prohibiting conduct in different ways, but there has generally been a baseline of consistency in certain fundamental areas.  That has made it possible – not necessarily easy, but possible – for countries to cooperate in bringing criminals to justice.  As you probably know, modern states have extradition treaties which let Country B arrest the criminal I hypothesized above upon being requested by Country A; Country B returns him to Country A, where he can be tried, convicted and punished for crimes he committed against citizens of that country.

That system works pretty well in the real, physical world, though it has been facing more and more challenges as international travel becomes more common.  It’s still, though, relatively easy to identify and apprehend a human being traveling from one country to another because actions in the physical world leave traces:  A criminal who moves from Country A to Countries B and C can be identified by his appearance (unless he alters it substantially), by his fingerprints, by his passport, by his habits or other methods.  The relatively cumbersome nature of real-world travel also contributes to the identification of criminals such as our hypothesized victimizer of Country A.  It takes time and funds and arrangements (public arrangements) to go from Country A to Country B and then to Country C.  Our perpetrator cannot move that quickly (unless he has his own private jet and other resources, which is possible but unlikely), and that, too, can facilitate his being identified, apprehended and returned to face justice in appropriate venues.

Cyberspace of course changes all this.  We can virtually “travel” the globe in instants, or less than instants.  We can do so anonymously or pseudonymously.  We can conceal where we are in the traditional, territorial sense, where we were and where we will be.  Identity and actions disconnect from territory, which can also mean they disconnect from law, which still is territorial and parochial.

We’re not sure yet how to think about cyberspace.  One option, which was long ago proposed by many other people, is to conceptualize it as a “place” in itself – a virtual “place.”  That makes sense if you equate “place” with the context in which we carry on various activities; we buy things, sell things, communicate, do art, harass and annoy each other, commit crimes, make friends, form communities and do a host of other typically human things online. It would, therefore, be quite logical to conceptualize cyberspace as another “place,” something analogous to a new, as-yet unsettled country.

If we did that, we could develop cyberspace-specific laws.  These laws would be laws that, like the nation-state laws I noted earlier, were unique and parochial  -- specific to the territory within which they applied.  That “territory” would be the ever-expanding confines of cyberspace.  The cyberspace laws would therefore only apply when we were “in” cyberspace, i.e., only when we were online.  Offline activities would continue to be governed by the unique, parochial laws of the nation-state whose territory we occupied while we were online. 

I see two problems with this approach.  One is devising cyberspace-specific laws:  Who would do this?  Would the countries of the world develop a code of laws for cyberspace?  Would the UN do this, alone or in collaboration with these countries?  Who would decide what law governs cyberspace? 

Before I analyze that issue, I want to note the other problem, which I am not specifically going to address in this post.  If we were to devise and implement a set of cyberspace-specific laws, who would enforce them?  I think that if it ever happens, it will be a long time before the various countries of the world are willing to let an independent entity (the Cyberspace Court? A UN Cyberspace Court?) exercise jurisdiction over their citizens with regard to their online activities. 

Think what that would mean:  I’m a U.S. citizen and I’m writing this from my home in Dayton, Ohio.  Assume that what I’m writing somehow violates our hypothetical set of cyberspace laws, which would subject me to the efforts of the entity charged with enforcing those laws.  That would mean, I assume, that I would somehow have to be extradited from Dayton/Ohio/USA to a cyberspace-jurisdiction to be tried, probably convicted and somehow punished.  If the United States or any other country were to allow that to happen to its citizens, it would be surrendering a measure of its national sovereignty.  It would be conceding part of its authority to control its citizens’ behavior to another, albeit virtual, sovereign entity.  That may happen someday, in a distant future in which the influence of nation-states has declined or disappeared, but it will not happen for a long time.

That problem, though, cannot arise unless and until there is a set of cyberspace-specific laws.  Let’s go back to the first problem:  Who would devise such laws?

Logically, the articulation of these laws can come either from sources outside cyberspace or from sources inside cyberspace.  In the first alternative, we either (i) devise entirely new laws that are specific to cyberspace or (ii) extrapolate existing, external law to cyberspace.  In the second alternative, we would “grow” cyberspace law in cyberspace.  Let’s consider each alternative.

I do not think we will see an external effort to devise cyberspace-specific laws primarily because of the enforcement problem.  By agreeing to the articulation of cyberspace-specific laws, countries would already be surrendering a measure of their sovereign authority, because the issue of enforcement is implicit in, and an inevitable consequence of, the articulation of such laws. 

Another external approach – which the Council of Europe is pursuing on a limited basis – is to encourage the harmonization of national laws insofar as they impact on activities in cyberspace.  The Council of Europe has promulgated a Convention on Cybercrime in an effort to do this for the laws that define certain types of cybercrime and govern what law enforcement can do in investigating online criminal activity.  The Convention has been ratified by many European countries, plus the U.S.; it can also be signed and ratified by other non-European countries, but I suspect that will take a while if, indeed, it happens.  I’m not going to go into detail on why I think that here, because it would be a digression; suffice it to say that the Convention is a very complex document, one that incorporates certain perspectives about law that may not be common in all countries.  I’m not saying that harmonization is a bad idea or that it is impossible; I am saying that I think it will take a long time to break down the barriers that exist between the unique, parochial laws that are found in every modern nation-state.

We needn’t give up hope for cyberspace laws, though.  There’s still the other alternative:  growing cyberspace laws in cyberspace.  While this might seem a peculiar approach, it actually has its roots in history.

A system of law known as the Lex Mercatoria developed in medieval Europe, beginning around the tenth and eleventh centuries. The Lex Mercatoria was, as Wikipedia explains, a

body of rules and principles laid down by merchants themselves to regulate their dealings. It consisted of usages and customs common to merchants . . .  in Europe, with slightly local differences. It originated from the problem that civil law was not responsive enough to the growing demands of commerce: there was a need for quick and effective jurisdiction, administered by specialised courts. The guiding spirit of the merchant law was that it ought to evolve from commercial practice, respond to the needs of the merchants, and be comprehensible and acceptable to the merchants who submitted to it.

The Lex Mercatoria was the product of a world in which nation-states had yet to evolve.  During this era, merchants did not operate from a single location; instead, they traveled, selling in one place the goods they had bought in another.  Because they were doing business in so many places, the itinerant merchants became increasingly frustrated at having to deal with a patchwork of parochial, often inconsistent and inadequate local laws.  To remedy this, they developed their own laws, which evolved from their needs and from the nature of the transactions in which they engaged. (They also developed their own courts, to ensure that disputes could be settled quickly and fairly, but that’s another story.)

Many scholars have suggested that the solution for cyberspace is the evolution of a new, online Lex Mercatoria – a Lex Cyberspace.  The Lex Cyberspace would be, like the Lex Mercatoria, a specialized set of laws that exist separate and apart from the general laws governing activities in the various countries of the world.  And like the Lex Mercatoria, the Lex Cyberspace would apply only to those who participate in activities undertaken in a specialized context, the context being trade for the Lex Mercatoria and cyberspace for the Lex Cyberspace.  The rationale for the two law codes would be essentially the same:  specialized endeavors that transcend national boundaries and national cultures require their own laws.

This approach could have certain advantages.  Like the Lex Mercatoria, a consensually-evolved Lex Cyberspace should be uniquely tailored to the needs of the cyber-community, instead of being a modified version of real-world law extrapolated online, where it may or may not be appropriate. And a Lex Cyberspace would be internally-derived, instead of being imposed by an external entity (or entities). This aspect of a Lex Cyberspace has implicitly manifested itself in certain ways, one of which is a general sentiment to the effect that online communities should be self-policing, i.e., should develop and enforce their own standards and rules of behavior. 

A Lex Cyberspace could, it seems, resolve both of the issues I noted above:  developing laws governing online activity and enforcing those laws.  The problem I see with the Lex Cyberspace solution is that nation-states are unlikely to be willing to surrender control to a body of online law-makers and law-enforcers.  The demise of the Lex Mercatoria, after all, is attributed to the rise of nation-states, which were jealous of and insecure with this independent, transnational legal institution.  Nation-states therefore assumed exclusive responsibility for making and enforcing law and, in so doing, subsumed the principles of the Lex Mercatoria into their own laws.

Perhaps there will someday be a Law of Cyberspace.  It seems, though, that such a phenomenon cannot exist unless and until the influence of nation-states erodes or until the laws governing the territories claimed by the discrete nation-states coalesce into a single, consistent whole.
 

Oct 12

Envelopes and encryption

As I’ve mentioned, last June the U.S. Court of Appeals for the Sixth Circuit held, in United States v. Warshak, that Americans have a reasonable expectation of privacy in the contents of emails they have stored on an ISP’s servers. 

(If the link to the Warshak opinion doesn’t work, you can find it by going to http://www.ca6.uscourts.gov and searching for it in the opinions section either by name or by opinion # 07a0225p.06).

The Warshak opinion means that law enforcement can no longer use a court order, which issues without a showing of probable cause, to obtain the contents of emails someone has left stored with their ISP.  They must, instead, obtain a search warrant, which does require them to show probable cause to believe that the emails contain or constitute evidence of a crime. 

The opinion was, as is usual, issued by a panel of three of the Sixth Circuit judges; my understanding is that the federal government, which was the losing party in the case, is asking the entire Sixth Circuit to rehear this case en banc, i.e., to have all the judges on the Sixth Circuit sit as a panel and re-decide the case.  If the Sixth Circuit does that, then the en banc panel can either agree with the three judges who said we have a Fourth Amendment expectation of privacy in our email, or disagree, and reject their conclusion.  If the Sixth Circuit rehears the case en banc and agrees that we do have a reasonable expectation of privacy in stored emails, then I’d say there’s a good chance the case will go to the Supreme Court, because the effect of such a decision is to invalidate a federal statute law enforcement officers routinely use to obtain access to stored emails. 

I don’t want to talk about the Warshak opinion, though.  I want to talk about the larger issue – the question of whether or not we can reasonably expect the contents of our emails to be, and to remain, private.  To do that, I need to review the standard courts apply when this issue comes up.

In 1877, the U.S. Supreme Court held, in Ex parte Jackson, that Americans have a Fourth Amendment expectation of privacy in the contents of sealed letters and packages they send through the U.S. mails (which was the default mail/package delivery service available at the time).  The Jackson Court specifically said “letters and sealed packages . . . in the mail are as fully guarded from examination and inspection, except as to their outward form and weight, as if they were retained by the parties forwarding them in their own domiciles.”  The Court also held that anything we send that is not sealed – such as a postcard – is not encompassed by this rule because we have taken no steps to protect the privacy of its contents.

The Warshak court cited the Jackson decision, as well as the Supreme Court’s 1979 decision in Smith v. Maryland.  In Smith, the Court held that we do not have a Fourth Amendment expectation of privacy in the phone numbers we call, even from our homes, because by dialing those numbers we voluntarily convey that information to the phone company and, in so doing, surrender any privacy interest in it.  I personally think the Smith decision was, and is, wrong, but that’s irrelevant. 

In Warshak the government essentially argued that we have no Fourth Amendment expectation of privacy in emails we leave stored with an ISP because the ISP staff can read those emails, since we have not “sealed” them.  Prosecutors often analogize email to a postcard:  we send our emails through a system in which they are “visible” to other people without doing anything to shield their contents, to make them unreadable.  The premise then is that the emails are like the phone numbers in Smith:  We voluntarily share them, in the clear, with an entity whose staff can decipher the information they contain.

The Warshak court rejected that, essentially finding that we have an expectation of privacy if and when our ISP’s terms of service state that its staff either will not read our emails or will do so only under certain circumstances.  That conclusion makes a certain amount of sense, but it really doesn’t resolve the Jackson-Smith problem, i.e., that the contents of stored emails CAN be read by ISP staff.

What I find interesting is that this whole controversy really does not need to arise.  If we encrypted our emails, we would be “sealing” them, just as we seal the letters and other correspondence we send through the mails.  If we “sealed” our emails, the Jackson rule would apply, even though we are sending emails via private carriers rather than through the U.S. mails.  The Jackson Court’s point went not to the vehicle by which a message is being transmitted but to the steps taken to shield the contents of the message from the eyes of those involved in its transmission.
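
To make the point concrete, here is a minimal sketch of what “sealing” a message body looks like in code.  It uses the symmetric Fernet recipe from the third-party Python cryptography package purely for illustration; real email sealing would more likely rely on a standard like OpenPGP or S/MIME, and the message text is made up:

    from cryptography.fernet import Fernet  # third-party package: pip install cryptography

    # Generate a key; in practice the sender and recipient would need some way
    # to share or derive it, which is exactly the usability problem discussed below.
    key = Fernet.generate_key()
    f = Fernet(key)

    body = b"Meet me at noon."   # the letter
    sealed = f.encrypt(body)     # the sealed envelope: unreadable in transit and in storage
    print(sealed)                # what an ISP's servers would actually see

    opened = f.decrypt(sealed)   # only someone holding the key can open it
    assert opened == body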

So why don’t we encrypt our emails?  We were talking about this in my cyberspace law class yesterday, and one student pointed out that the general public doesn’t encrypt their emails because the process is too complex and/or too esoteric for them to use easily.  I think she’s absolutely right.  I think we are in a situation analogous to the situation letter writers were in until the mid-nineteenth century.

The adhesive envelope, which we assume has always been around, was not introduced until the mid-1800s. See Robert Ellis Smith, Ben Franklin's Web Site 56 (Providence RI:  Privacy Journal 2000).  Until then, people didn’t use envelopes; instead, they wrote their letters on a sheet of paper, which they folded and sealed with sealing wax, which was notoriously unreliable. See id. Letter writers knew the wax would probably fail, the letters would come unsealed and postal employees would probably read them. See id.  Many, including Thomas Jefferson, wrote their letters in code – encrypted them -- to avoid this.  See id. at 43. The adhesive envelope eliminated the need to encrypt letters because it was reliable AND easy to use.  The Jackson Court’s holding was implicitly based on the impact adhesive envelopes had on securing the contents of written correspondence from prying eyes.

When it comes to email, our situation is, and is not, analogous to that of pre-adhesive-envelope letter writers.  Our situation is analogous because we have no simple way to “seal” our emails.  Since we consequently do not “seal” our emails, it is, as a practical matter, difficult to argue that the contents of those emails are private.  They really are postcards; their contents CAN be read (they may not actually be, but they CAN be) by the staff of the entity involved in their transmission.  Since they can be read by anyone who comes in contact with them, it is not, as a matter of common sense, reasonable for us to claim that their contents are private. 

On the other hand, our situation differs from that of a pre-adhesive envelope letter writer in an important respect:  We have tools available that will allow us to “seal” the contents of our emails. We do not use those tools because, as I said earlier, using them involves a lot more effort and expertise than simply sealing an adhesive envelope.  I also think we don’t use these tools because most people, at least in this country, don’t realize that their emails are postcards, rather than letters.  That is, I think most people in the U.S., anyway, don’t realize that the contents of their emails are not private.

All of this can, and probably will, change.  Two things can transform the default status of email from that of postcard to that of sealed letter:  One is for people to realize that they must “seal” their emails for them to be private.  The other is the introduction of simpler, more intuitive encryption tools.  I think the transformation will require the interaction of both factors:  People will have to become receptive to the idea of encrypting emails, and the process of encrypting them will have to become at least a little more user-friendly.

If people begin to understand the utility of encrypting their emails, they will look for easy ways to do that.  One way is, as I said, for developers to introduce new, user-friendly encryption tools.  Another possibility is for ISPs to offer “one-click encryption” (if, in fact, that is a possibility), i.e., a system that automatically encrypts emails sent via that ISP and compatible ones.  I don’t know if this kind of one-click encryption is technically possible, and even if it is, I can see implementation problems.  If I’m using a one-click encryption ISP, I assume I either can’t email people who don’t use my ISP or another compatible one-click encryption ISP, or, if I can email them, I lose my encryption.  But I assume the same kinds of problems will arise if and when our culture moves to one in which we seek to encrypt emails.
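To make the analogy a bit more concrete, here is a minimal sketch of what “sealing” an email body involves under the hood (my own illustration, not any ISP’s actual system): a random one-time key encrypts the message, and the recipient’s public key encrypts that one-time key, which is roughly what PGP-style tools do.  It assumes Python and the third-party cryptography package, and every name in it is hypothetical.

# A minimal sketch of "sealing" an email body (hybrid encryption), assuming
# Python and the third-party cryptography package.  Illustration only; real
# tools (PGP/GPG, S/MIME) add key management, signatures and metadata handling.
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# In practice the sender would hold only the recipient's public key; the key
# pair is generated here just to keep the example self-contained.
recipient_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
recipient_public = recipient_private.public_key()

body = b"This is the 'letter' inside the envelope."

# Seal the body with a random one-time key, then seal that key for the recipient.
session_key = Fernet.generate_key()
sealed_body = Fernet(session_key).encrypt(body)
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
sealed_key = recipient_public.encrypt(session_key, oaep)

# Only the holder of the matching private key can open the "envelope."
recovered_key = recipient_private.decrypt(sealed_key, oaep)
print(Fernet(recovered_key).decrypt(sealed_body).decode())

Even this stripped-down sketch takes a couple of dozen lines, plus a key exchange that has to happen before the first message is sent, which is exactly the ease-of-use gap my student was pointing to.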

Encryption definitely is, and will probably continue to be, more challenging than an envelope.
 

Sep 29

Booze and bytes

We were talking in my cyberspace class about the DMCA and the recording and movie industries’ war on file-sharing. 

Specifically, we were talking about whether our current system for controlling the ownership and use of intellectual property makes sense in a world in which such property generally ceases to be tangible and instead becomes digital. Tangible property can be duplicated to some extent – books can be copied and music can be taped, for example – but it tends to be much more difficult and time-consuming to do so than it is to duplicate digital property.  I read a law review article a while back that explained how, and why, it was functionally impossible for regular people to duplicate or otherwise copy vinyl records when they were the only way in which music was distributed commercially.  The article also said that while the introduction of taping devices gave people the ability to copy records and tape songs off the radio, the cumbersomeness of the process and the erosion in quality it often produced meant that this did not become a huge problem for the record companies.  And much the same was true for the movie industry; the introduction of video recording devices made it at least possible to record movies shown on television and, later, to copy tapes rented from a store or obtained otherwise.  But it was still enough of a pain that it didn’t become a major problem.

As we all know, digital technology changes all that.  Property moves from being tangible to being bytes, and bytes can be copied easily and quickly and without the erosion of quality you saw in older methods. And that, of course, causes major problems for industries whose existence is predicated on monopolizing the ability to distribute music, movies and other types of intellectual property.

And we all, I’m sure, know the approach these industries are taking to the problem of file-sharing:  First, they’re encouraging the federal government to bring criminal copyright-infringement prosecutions against larger-scale file-sharing operations.  Second, because there simply aren’t enough federal agents and federal prosecutors to go after a significant percentage of the people who are engaging in file-sharing, these industries – particularly the music industry – are using the threat of civil suits to deter individual file-sharers. The music industry has been sending offer-to-settle letters to students at many colleges and universities; the letters tell students they have been identified as having illegally downloaded copyrighted music and can either settle their liability by paying $3,000 or face a lawsuit seeking damages for the full amount, which can easily run into six figures. 
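To put rough numbers on why those letters get attention, here is a bit of back-of-the-envelope arithmetic (my own illustration, not anything taken from the industry’s letters), using the statutory damages range in 17 U.S.C. § 504(c), which runs from $750 to $30,000 per infringed work and up to $150,000 per work for willful infringement:

# Back-of-the-envelope statutory damages arithmetic; illustration only.
# 17 U.S.C. § 504(c): $750 to $30,000 per infringed work, up to $150,000 if willful.
songs = 30                      # a hypothetical number of shared tracks
print(f"${songs * 750:,}")      # $22,500 -- the statutory floor
print(f"${songs * 30_000:,}")   # $900,000 -- the ordinary ceiling
print(f"${songs * 150_000:,}")  # $4,500,000 -- the ceiling for willful infringement

Even at the statutory floor, a fairly modest number of tracks dwarfs the $3,000 settlement figure, which is presumably the point of the letters.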

We were talking about all this in class, and about whether it is a rational and/or effective way to deal with these issues.  I said that what the music and movie industries are doing reminds me of the United States’ experiment with Alcohol Prohibition in the 1920s, but with one difference.  The laws implementing Alcohol Prohibition in this country did not make it a crime to have or consume alcohol; they only made it a crime to manufacture and/or distribute alcohol.  Whoever drafted those laws apparently realized that it would be impossible to go after everyone in the United States who continued to buy and drink alcohol, so they tried to address the problem by cutting off the source of alcohol. 

As we probably all know, that didn’t work either because alcohol could be manufactured pretty easily.  Distilling alcohol is not, as they say, rocket science.  I’ve never tried it, but I know people who’ve made their own wine and their own beer, and from what I’ve read online it’s really not that complicated . . . even in our contemporary, primarily urban environment.  It would probably have been even easier in the 1920s, when the country was less urban and therefore more used to doing things from scratch.

My point is that Alcohol Prohibition failed because the government could not cut off the source of the prohibited item.  That is, they could not prevent people from making and sharing alcohol.  We all know about Al Capone and the infamous bootlegging mobs, but there was a lot of local, home-grown bootlegging, as well.

And that’s what I was mentioning in my class:  I see some interesting parallels between our failed national experiment with Alcohol Prohibition and the mostly private effort that is currently underway to eliminate the digital distribution of unlicensed copies of music, movies, software, and whatever else industries are or will be concerned about.

The music and movie industries are in one sense making the mistake the architects of the Alcohol Prohibition laws avoided:  They’re trying to go after the consumers of the product, the people who possess and use unlicensed digital copies of music or movies. The problem with that is scale:  To do this effectively, the music and movie industries, alone and/or with the assistance of law enforcement, would have to continually track down and prosecute a segment of the domestic file-sharing market that is substantial enough to put the fear of God into everyone who might even consider file-sharing. 

I don’t think that’s possible.  Law enforcement, and especially federal law enforcement, has a number of other priorities it needs to attend to, so it can spare only a small part of its resources to assist in this effort.  The civil suits are intended to act as a separate deterrent, but I’m not sure how effective they are going to be.  Some colleges and universities are resisting the industries’ efforts to get them to provide the names of students linked to IP addresses used to illegally download files, and this trend might increase.  The industries’ approach does put colleges and universities in an untenable position; their posture toward their students has traditionally been semi-parental, but this threatens to transform it into a more adversarial one.  There’s also the issue of resources:  Why should colleges and universities have to bear the added, extraordinary cost of identifying students and otherwise contributing to the industries’ effort to discourage illegal file-sharing?

There are other problems.  The civil suit approach only works within a country; the music and movie industries cannot use the threat of civil suits here to target file-sharers in other countries, even if they can identify them.  They could transport the civil suit tactic abroad, and threaten to sue file-sharers in France and Pakistan and many, many other countries, but I suspect the cost of such an effort would soon become prohibitive.  And, on another note, the technology of file-sharing may become more sophisticated and therefore harder to detect. If it becomes impossible, or even just extraordinarily difficult, to identify file-sharers, then this approach simply cannot be effective. 

What is the alternative?  Well, we could continue our analogy to Alcohol Prohibition and argue that the music and movie industries should emulate the approach taken there by targeting not the consumers of the outlawed product but those who create and distribute the product. 

I see two problems with that analogy:  One is that with digital file-sharing the distinction between production and consumption arguably erodes; each file-sharer is, or can be, not only the recipient of shared files but also a distributor of them.  So the industries might argue that the approach they are currently using is inevitable.  I’m sure they would also point out that they, in cooperation with law enforcement, have also gone after the higher-level distributors in file-sharing operations.  The RIAA got Napster shut down, and there have been prosecutions of software-sharing “warez” sites.

That brings us to the other problem:  Even if we assume that the music and movie industries’ approach, and the laws they are employing, are analogous to the approach taken with Alcohol Prohibition, that approach did not work.  It simply failed to cut off the supply of alcohol in this country, both because alcohol could be imported from Canada and elsewhere and because it could be manufactured here.  And manufacturing and distributing alcohol is a much more time-consuming and risky endeavor than digital file-sharing.  The former takes place in the real world, and is therefore a highly visible endeavor; you need space and materials and time, and then you have to transport a very bulky, somewhat fragile product over highways or sea lanes or rail lines.  All of that increases the chances you will be identified and apprehended.  You run some of those risks with file-sharing, but they are much reduced, both because of the relative invisibility and ease of the process and because it is highly distributed.  You’re no longer looking for Al Capone’s operation; you’re looking for, what, 10% of Illinois?

Like many others, I think our current approach to the protection of intellectual property rights is seriously flawed when it comes to dealing with distributed file-sharing.  I will not attempt to outline the alternatives, because others who are far more knowledgeable than I have already done so. 

My point is that the use of criminal and quasi-criminal sanctions cannot be effective when it is impossible to control the manufacture and distribution of a product and when the culture sees such control as illegitimate.  When people in a culture see such control as illegitimate, and have access to the product, norms of evasion grow up.  Compliance with the control system becomes the exception; the norm becomes the process of evading the system.  During Alcohol Prohibition, alcohol use increased in this country, especially among women.  There was a disconnect between what the law forbade and what people saw as appropriate. 

We have something similar, albeit on a much smaller scale, with file-sharing.  And it may be that the disconnect we see between laws outlawing file-sharing and attitudes toward file-sharing in a segment of the populace is merely an indicator of what is to come.  We may find that the approach our law has used to allocate and enforce tangible property rights is not a viable approach for intangible property rights.    
 

Sep 23

GPS detectors, jammers & spoofers

I did a presentation on Friday for a bar association, in the course of which we discussed the law governing government use of Global Positioning System (GPS) devices to track people’s movements.  (GPS devices are used by private parties, as well, but my focus here is on criminal matters, so I’m only going to deal with the government’s use of them.)  This is really a follow up to a post I did a while back, where I talked a bit about this.

Under current law in the U.S., the Fourth Amendment’s prohibition on unreasonable searches and seizures is not implicated when the government installs a GPS device on someone’s vehicle and uses it to track their movements in public places.  Courts have held that installing the device is not a seizure of your vehicle because it in no way interferes with your operation or use of the vehicle; you don’t even know the device is there (which is the point, after all).  At least one state court has held that police do need a warrant, but that decision was made under state law and I’m focusing on the Fourth Amendment, because it is the default national standard. 

Courts have also cited a 1983 U.S. Supreme Court decision upholding the use of “beepers” to help police follow suspects in their vehicles (United States v. Knotts) for the proposition that it is not a search for the government to use a GPS device to track your movements in public places.  (Courts say it is a search if police use GPS to track you into a private place, like your garage.)  They have reached this conclusion even though a GPS device, unlike the beeper at issue in the Supreme Court case, substitutes for a police officer; the device tracks your movements without a police officer’s having to be assigned to follow you around.  At least one state court has said that makes a difference under state law, because GPS lets police conduct surveillance on a larger scale than they could if they had to have officers follow people around; and the Seventh Circuit Court of Appeals noted that this might be a problem in the future, if police really begin to use GPS devices on a wide scale.

But, as of now, police do not have to get a search warrant to install and/or monitor a GPS device.  Both activities are completely outside the Fourth Amendment, and that means police can install and monitor the device without your knowing anything about it (which, as I noted above, is the whole point).  The basic practice when they do have to get a warrant is that they serve the warrant on you, then conduct a search of your home or other property, and then leave you with an inventory of what they took.  That way, you know they were there, why they were there and what they took.

In the course of talking about this with the bar association on Friday, I suggested this might create a market for GPS detectors, and we talked about that a bit.  One of the attorneys there, who has a good technical background, said there’s no way to use a detector to discover a passive GPS tracking device (a GPS logger) that simply stores up information about your movements, but that a detector could be effective against an active device (a GPS tracker) that transmits information periodically.  So we talked about that for a while, and I joked that there could be a real market for these things.

I decided to see if GPS detectors are on the market and, yes, there are some.  There are also GPS jamming devices.  I’ll talk about the detectors first, then the jammers; and then we’ll consider the legality of using these devices, now and in the future.

According to one site, it is possible to detect a GPS tracking device that transmits information by using a radio frequency detector/scanner.  How Do You Detect A GPS Tracking Device, Security Products.  The problem, according to this site, is that the RF detector/scanner will only detect the transmissions of the GPS device when the device is actually transmitting.  This site also notes that GPS devices use different technologies, which can also cause complications in detecting them.  Another site follows up on that, explaining that some trackers do transmit a constant signal, which makes them easier to detect, and that the same is true of GPS devices that use cell phone connections.  The site says that ultimately the best way to find a GPS device is to use a combination of detection and “finger-tip searching.”

We, though, are interested in the use of technology to find the devices, so we’ll stay with that.  Before we consider the legality of using such devices, I want to consider the other logical approaches to dealing with a GPS tracking device:  GPS jamming and spoofing.

I found a website that advertises at least two different GPS jamming devices.  Both plug into your vehicle’s cigarette lighter, and both jam a GPS device’s ability to collect and/or transmit location information.  I also discovered that it is possible to spoof the signals sent to a GPS device, so the device thinks it is in Place A when it’s really in Place B.
 
As far as I can tell, there are no laws outlawing the use of GPS detectors, jammers or spoofers.  Since I can see objections being raised to the use of these devices if and when they become more popular, I want to speculate a bit as to whether the use of any of these devices could legitimately be outlawed.

The obvious source of analogy here is radar detectors.  Like GPS detectors, jammers and spoofers, radar detectors allow those who use them to evade police surveillance technology.  

A very few US states outlaw radar detectors.  Virginia, for example, has a statute that makes it unlawful “to operate a motor vehicle on the highways of the Commonwealth when such vehicle is equipped with any device . . . to detect or purposefully interfere with . . . the measurement capabilities of any radar, laser, or other device . . . employed by law-enforcement personnel to measure the speed of motor vehicles on the highways”.  Virginia Code section 46.2-1079(A). 

In 1987, a bill was introduced in Congress that would have made it a federal crime to manufacture, sell or possess a radar detector, but it languished and then disappeared.  See H.R. 2102, 100th Congress, 1st Session (1987). There was apparently little support for such a measure because, as one author notes, “the nationwide criminalization of a segment of the electronics industry and its consumers is arguably unjustifiable and implicates questions of federalism. Proponents of federalism allege that the issue is best left to state legislatures.”  Nikolaus Schandlbauer, Busting the “Fuzzbuster:” Rethinking Bans on Radar Detectors, 94 Dickinson Law Review 783, 789 (1990). 

There seems to be no reason why states cannot outlaw radar detectors.  A federal court of appeals upheld the constitutionality of the Virginia ban, agreeing with the district court that it “furthers a significant state interest in the health or safety of Virginia’s motorists”.  Bryant Radio Supply, Inc. v. Slane, 507 F. Supp. 1325 (District Court of Virginia 1981), affirmed 669 F.2d 921 (Fourth Circuit Court of Appeals 1982).  Notwithstanding that, most states have chosen not to outlaw them, presumably because they do not feel the evasion of law at issue here warrants such a punitive measure.

What about GPS detectors, jammers and spoofers?  Can they legitimately be outlawed?  Should they be outlawed?

In answering those questions, there may be some reason to differentiate between (a) detectors and jammers and (b) spoofers.  From my brief research online, it seems that spoofers can be used by thieves who want to hijack cargo being moved by trucks; the thieves can apparently use the spoofed GPS signals to disguise the fact that a truck is deviating from its authorized route, a deviation which is leading up to the theft of its cargo.  So, spoofing can be used to commit distinct, freestanding crimes as well as to frustrate law enforcement surveillance.  While the same might be true of the other two types of GPS countermeasures, I am going to assume they only frustrate surveillance, and so am going to treat them differently.

As to spoofers, the answers to the questions I posed above are “yes,” in both instances. If spoofers can be used to set up cargo thefts and other crimes, then they are analogous to burglar’s tools.  As I have explained before, many states outlaw the mere possession of burglar’s tools (which are usually defined as items that, in isolation or when collected together, clearly have no purpose other than illegal break-ins).  The justification for these statutes is that they outlaw a type of attempted crime; in other words, there is no reason to possess burglar’s tools except to use them in a burglary.  Spoofers are, I think, more ambiguous:  I am not sure they have any legitimate purpose, but they can be used either to (a) frustrate law enforcement surveillance or (b) facilitate cargo thefts and maybe other types of crimes, as well.  To the extent they fall into category (a), they should be encompassed by my analysis of the legality of outlawing jammers and detectors, which we’ll get to in a moment.  To the extent they fall into category (b), they can be outlawed if they are truly analogous to burglar’s tools, i.e., if they have no independent legitimate use.

As to detectors and jammers, we need to analyze each of them separately.  It seems to me that GPS detectors are very much analogous to radar detectors, in that they do not interfere with the functioning of law enforcement surveillance technology; they simply alert the target of the surveillance so that he or she can take appropriate measures to frustrate the surveillance.  One could, therefore, argue that there is no more reason to outlaw GPS detectors than there is to outlaw radar detectors.  The problem I see with this argument is that radar detectors are used only to detect a very low level of criminal activity, but GPS devices are usually used in investigating more serious crimes.  That could make a real difference in how states answer the two questions I posed above.  Because these detectors frustrate surveillance in investigations targeting more serious crimes, their use could be seen as an effort to obstruct justice.  (The same is true of radar detectors, but there the frustration is at a very low level, given the minor criminal activity at issue.)  Indeed, one can argue that this is their whole purpose.  If you accept that view of GPS detectors, the answers to the two questions I posed above are, again, “yes.”

What about GPS jammers?  The analysis I went through in the paragraph above seems to apply to them, too.  And there is an aggravating factor here.  According to what I read on several websites, jamming GPS signals can create a safety hazard for ground vehicles and/or for aircraft.  If that is true, then the use of these devices creates a new, distinct hazard to public safety, and the creation of such a hazard is a matter the criminal law can legitimately address.  I suspect, then, that if GPS jammers begin to be used with any frequency, we will see efforts to outlaw their use at the state and/or federal level. I understand that their use is already illegal in European Union countries. 
 
