The Power of Place: Geolocation Tracking and Privacy

Abstract[1]

Location data tracking is ubiquitous. The tension between privacy and innovation in this space is exacerbated by rapid developments in tracking technologies and data analytics methodologies, as well as the sheer volume of available consumer data. This article focuses on the privacy risks associated with these developments. To the extent that current and proposed privacy law protects location data, such protection is limited to location data that is identified (or in some cases identifiable) to an individual. Requirements generally apply only to the initial data collector; however, recent media accounts and enforcement actions describe a robust secondary market in which (1) identified location data is regularly acquired and used by third parties with whom the individual has no direct relationship, and (2) de-identified or anonymized location data is regularly combined with identified personal data and used by third parties with whom the individual has no direct relationship to compile comprehensive profiles of the individual. These secondary-market practices are not currently addressed by U.S. law. This article proposes that the risks posed by location tracking and profiling are sufficient to warrant consideration of regulatory intervention at the following points: collection from the individual; use by the original data collector; transfer to and among secondary-market participants; identification of anonymized data to a specific individual; profiling of the individual; and decision-making based on profiling.[2]

I. Location Data Tracking Generally

Consumer location is tracked regularly by multiple systems and devices.[3] Many mobile applications (apps) continuously track user location; Facebook, Google, Apple, Amazon, Microsoft, and Twitter all track and use location data.[4]

Individuals often opt into location tracking through personal devices and their apps, such as fitness monitors, smartphones, and GPS trackers, for the purpose of allowing the app to provide them with the underlying service, such as determining the distance run, providing the local weather forecast, or locating and obtaining directions to nearby restaurants.

Business use cases for identified individual location data include providing consumer goods or services (such as roadside assistance) and marketing and targeted advertising.[5] Aggregated location data (i.e., data that consists of distinct location data points but is not identified to any individual) can help urban planners alleviate traffic problems, health officials identify patterns of epidemics, and governmental agencies monitor air quality. Commercial uses of aggregated location data include inventory and fleet control, retail location planning, and geofencing. Specified data points may be aggregated over a defined time period and then presented as an overlay to a geographic map. For example, a trucking company can view in real time the locations of its trucks and the demand for trucking services to more efficiently assign routes. Alternatively, the trucking company can geofence its trucks, meaning that the company will be alerted in real time if a truck leaves a designated geographical zone. Location data is critical to certain types of commercial and public data analytics.
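To make the geofencing idea concrete, the following is a minimal sketch (in Python) of how a dispatcher might be alerted when a truck's reported position falls outside a circular zone. The coordinates, identifiers, and helper names are hypothetical and purely illustrative; real fleet systems typically use polygonal zones and streaming telemetry, but the core test is the same distance comparison.

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometers between two (lat, lon) points."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))

def check_geofence(truck_id, lat, lon, zone_center, zone_radius_km):
    """Return an alert message if the reported position falls outside the designated zone."""
    distance = haversine_km(lat, lon, zone_center[0], zone_center[1])
    if distance > zone_radius_km:
        return f"ALERT: truck {truck_id} is {distance:.1f} km from zone center (limit {zone_radius_km} km)"
    return None

# Hypothetical example: a truck reporting a position well outside a 15 km zone
print(check_geofence("TRK-042", 34.20, -118.60, (34.05, -118.25), 15))
```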

Recent journalistic investigations have revealed that location data is tracked by a wider variety of parties for a greater number of purposes in ways that exceed our understanding or control. The sheer volume of location data tracked, disclosed, and repurposed is tremendous. The widespread availability of location tracking technologies compounds this issue.[6] Furthermore, the use of multiple systems to track location, and the use of data analytics to combine location data with other personal data, enables both the identification of anonymous data and the compilation of comprehensive and precise profiles of tracked individuals.

Are we at a point yet where place itself acts as a consumer identifier? Unique location tracking patterns can be used both to identify an individual and to develop a profile of that individual. A person’s lifestyle, priorities, professional and personal endeavors, and crimes and peccadilloes can all be inferred from continuous location tracking.

The power of place: A person cannot be in more than one place at the same time.

     A. Justification for the Initial Collection of Location Data

Location data is regularly collected by devices, apps, and other online services.

Generally, the basic app model is as follows. An individual downloads a map app in order to get directions. As part of the map app download, the individual agrees that his or her location will be tracked in order to provide personalized directions via the app. The app must know where the individual’s starting point is in order to give accurate directions to the individual’s destination. The individual’s smart phone hardware and the app software use GPS and other tracking technologies to determine the individual’s geographical location: the more accurate and recent the location data, the more accurate the app service.[7]

The wireless carrier transmits this real-time location data to a third-party company (the aggregator), subject to a nondisclosure agreement. The aggregator transmits the location data to the app so that the app can generate the directions to provide to the individual. The location data is tracked and disclosed in order to provide the requested transaction (i.e., directions) to the individual. The sharing of information with third parties is limited to these purposes, and the parties are bound by written nondisclosure agreements not to otherwise use or disclose the individual’s location.

This can be referred to as the initial transaction between the individual and the data collector. The justification for this sharing is that (1) it is necessary to (a) honor the customer’s request for app services and (b) ensure consistency of app usage quality across carriers and devices, and (2) the customer has consented to location tracking as part of his or her enrollment in the app service.

II. The Pandora’s Box of Location Data: The Secondary Location-Data Market

     A. Monetization of Location Data in Secondary Market

The purpose of the initial collection of location data is to enable the data collector to provide a service to the individual; the secondary market purpose is to use that same location data to make conclusions and predictions about the tracked individual. The secondary location data market is used to monetize location data for unrelated purposes, such as enabling a subsequent buyer to compile a profile of the individual and sell access to the individual (whether the individual is identified by name or as part of a data category, like “engaged female retail shopper”). Location data analytics drive a variety of business strategies:

Business data usually contains geographical or location data which mostly goes unused. This data can be as broad as city and country or as specific as GPS location. When this data is placed within the context of big data dashboards and data science models, it allows companies to discover new trends and insights.[8]

The secondary consumer-data market is huge. IBM claims that 90 percent of all consumer data currently in circulation was created in the last two years. This industry is expected to generate $350 million annually by 2020.[9] Location data is a big part of that business. The New York Times reported that:

At least 75 companies receive anonymous, precise location data from apps whose users enable location services to get local news and weather or other information, The Times found. Several of those businesses claim to track up to 200 million mobile devices in the United States—about half those in use last year. The database reviewed by The Times—a sample of information gathered in 2017 and held by one company—reveals people’s travels in startling detail, accurate to within a few yards and in some cases updated more than 14,000 times a day.

These companies sell, use or analyze the data to cater to advertisers, retail outlets and even hedge funds seeking insights into consumer behavior. It’s a hot market, with sales of location-targeted advertising reaching an estimated $21 billion this year.[10]

Location tracking data analytics support targeted advertising and marketing for retail and other business purposes. This profiling is intended to individualize the customer experience as much as possible to encourage purchases and loyalty:

[T]he scale of data collected by early adopters of [location tracking] technology is staggering. Location analytics firm RetailNext currently tracks more than 500 million shoppers per year by collecting data from more than 65,000 sensors installed in thousands of retail stores. A single customer visit alone can result [in] over 10,000 unique data points, not including the data gathered at the point of sale.[11]

In addition, the potential combinations and re-use of location data are tremendous:

[B]y combining location data with existing customer data such as preferences, past purchases, and online behavioral data, companies gain a more complete understanding of customer needs, wants and behaviors than is achievable with online data only.[12]

In 2014, Shoshana Zuboff coined the term “surveillance capitalism” to describe how consumer data has become a business unto itself.[13] More recently, Zuboff explained how location data fits in this model:

[There] has been a learning curve for surveillance capitalists, driven by competition over prediction products. First they learned that the more surplus the better the prediction, which led to economies of scale in supply efforts. Then they learned that the more varied the surplus the higher its predictive value. This new drive toward economies of scope sent them from the desktop to mobile, out into the world: your drive, run, shopping, search for a parking space, your blood and face, and always … location, location, location.[14]

Data is generally sold on the secondary market as identified data (which is directly associated with a distinct individual) or as de-identified or anonymous data (which is aggregated and not associated with a distinct individual).

     B. Disclosure of Identified Location Data

          1. Disclosures by Aggregators 

Under the app model, the aggregators receive the individual’s location in order to send it to the app owner for purposes of furnishing the app service. In practice, however, distribution of this data is much more widespread. Journalistic investigations reveal that aggregators routinely sell location data to a series of parties that are not intermediaries to the initial data transaction, leading to dissemination of location data beyond its intended purpose and resulting in unrelated third-party access to the individual’s location data.[15]

One such aggregator, LocationSmart, regularly sold continuous cell tower location tracking to Securus Technologies, a prison contractor that provides and monitors calls to inmates. As an ancillary service, Securus “offers [a] location-finding service as an additional feature for law enforcement and corrections officials, [as] part of an effort to entice customers in a lucrative but competitive industry.” This service was used by a variety of law enforcement officials for a wide variety of purposes, including search-and-rescue operations, thwarting prison escapes and smuggling rings, and closing cases.[16]

The relationship between Securus and LocationSmart impacted almost all U.S. cell phone users, was unknown to them, and could not be opted out of:

So how was Securus getting all that data on the locations of mobile-phone users across the country? We learned more last week, when ZDNet confirmed that one key intermediary was a firm called LocationSmart. The big U.S. wireless carriers—AT&T, Verizon, Sprint, and T-Mobile—were all working with LocationSmart, sending their users’ location data to the firm so that it could triangulate their whereabouts more precisely using multiple providers’ cell towers. It seems no one can opt out of this form of tracking, because the carriers rely on it to provide their service.[17]

Another Motherboard investigation showed that wireless carriers also routinely sell assisted or augmented global positioning system (aGPS) location data. aGPS data is more precise location data that is collected for use with enhanced 9-1-1 services to allow first responders to pinpoint an individual’s location with greater accuracy. For example, a cellular call made to the 9-1-1 emergency service that relies solely on GPS satellites might indicate the caller’s location within a given area, such as a building, and it might take several minutes to determine that location. aGPS relies on other external data sources and systems to provide a faster, more precise location, like a floor within a building.

Federal law expressly prohibits the sale of aGPS data.[18] The Federal Communications Commission issued an order in 2017 providing that data included in the National Emergency Address Database, which is collected using Wi-Fi and Bluetooth to locate 9-1-1 callers within a building, may not be used for any other purpose.[19] In addition, the Federal Trade Commission could enforce section 5 of the Federal Trade Commission Act prohibiting deceptive and unfair trade practices against carriers whose privacy policies were inconsistent with this practice.[20]

          2. Privacy Leaks and Security Breaches

In addition to intentional disclosures, LocationSmart exposed this real-time location data through a bug in its website, which enabled users to track anyone without credentials or authorization using a free demo and a single cell phone number:

Anyone with a modicum of knowledge about how Web sites work could abuse the LocationSmart demo site to figure out how to conduct mobile number location lookups at will, all without ever having to supply a password or other credentials.

“I stumbled upon this almost by accident, and it wasn’t terribly hard to do,” Xiao [a security researcher] said. “This is something anyone could discover with minimal effort. And the gist of it is I can track most peoples’ cell phone without their consent.”

Xiao said his tests showed he could reliably query LocationSmart’s service to ping the cell phone tower closest to a subscriber’s mobile device. Xiao said he checked the mobile number of a friend several times over a few minutes while that friend was moving and found he was then able to plug the coordinates into Google Maps and track the friend’s directional movement.[21]

Further, the Securus database was the subject of a data hack that separately exposed personal data. A Motherboard reporter obtained data that had been hacked from Securus’s database:

“Location aggregators are—from the point of view of adversarial intelligence agencies—one of the juiciest hacking targets imaginable,” Thomas Rid, a professor of strategic studies at Johns Hopkins University, told Motherboard in an online chat.

The data hack, which was attributed to a weak password reset feature, revealed personal data of thousands of law enforcement users and inmates.[22]

This means that Securus, acting as an unregulated entity and outside of the scope of its nondisclosure agreements with the wireless carriers, was responsible for innumerable disclosures of identified location data.

Other privacy failures involving identified location data can result in exposure to threats of physical danger. A recent privacy failure by a family tracking app (React Apps’ “Family Locator”) exposed children’s identified location data for weeks; the very app that parents obtained to protect their children arguably put them at great risk:

Family tracking apps can be very helpful if you’re worried about your kids or spouse, but they can be nightmarish if that data falls into the wrong hands. Security researcher Sanyam Jain has revealed to TechCrunch that React Apps’ Family Locator left real-time location data (plus other sensitive personal info) for over 238,000 people exposed for weeks in an insecure database. It showed positions within a few feet, and even showed the names for the geofenced areas used to provide alerts. You could tell if parents left home or a child arrived at school, for instance.[23]

          3. Access by Unauthorized Third Parties

The same Motherboard reporter was able to identify the exact location of a smartphone using only the phone number and a $300 payment to a bounty hunter in an attenuated process that apparently happens regularly and in violation of the apps’ posted privacy policies and the parties’ written nondisclosure agreements.[24] In the Motherboard scenario, a wireless carrier sold an individual’s location data to an aggregator, that sold it to a skip-tracing firm, that sold it to a bail-bond company, that sold it to an independent bounty hunter. The bounty hunter had no written agreement with anyone and no relationship with the wireless carrier or the individual customer, and neither did its source.[25]

The article’s aftermath included revelations that all of the major wireless carriers sold location data to aggregators that ultimately sold the data to hundreds of bounty hunters.[26] Multiple lawmakers sent the major carriers and aggregators letters requesting an explanation of these location data sharing practices.[27]

The ensuing furor prompted the wireless carriers to commit to stop selling location data to aggregators.[28] The Wall Street Journal reported that Verizon, Sprint, T-Mobile, and AT&T all committed to end agreements with downstream location aggregators, and Zumigo (the initial aggregator in the bounty hunter scandal) cut off access by the intermediary aggregator to whom it sold the location data.[29]

          4. Privacy and Security Risks

These investigations indicate that real-time location data that is identified to a particular individual is regularly monetized and sold to third parties in a manner that is arguably inconsistent with the individual’s consent, the apps’ stated privacy policies, the data collector’s third-party nondisclosure agreements, and applicable law.

In other words, location data identified to a specified individual is routinely collected and sold by a variety of parties for a variety of purposes unrelated to the original transaction that justified the initial location data collection. This results in a myriad of privacy and security risks to the individual. Consider a stalker who tracks his or her victim’s location either by signing up for a free Securus or similar trial or by paying a bounty hunter. The victim may be taking strict precautions to elude location tracking and would not even be aware of this risk. In addition, the more entities that possess the victim’s location data, the greater the likelihood of a privacy exposure or data breach.

     C. Sales of De-identified or Anonymous Location Data

          1. Sales by App Owners

Separately, apps that receive individual user location data from aggregators frequently sell location data to third-party buyers for their own commercial purposes. The data is provided in large sets that do not identify the specific individuals who are tracked.[30] The purpose of the data set is to enable the buyer to identify patterns in location data. Such business use cases may involve allowing buyers to spot trends for investment[31] or marketing purposes.[32]

In this context, the justification for the sale and reuse is that the individual’s personally identifiable information (like phone number or name) is deleted from the data and replaced instead with a unique identifier.

The model is basically as follows. A map app organizes location data for a specified commercial neighborhood over a defined time period to show the number of people who walk through the neighborhood during the time period. This data may show the times of day when foot traffic is greatest and the areas in the neighborhood that attract more or less foot traffic. This data may be sold to a retailer for purposes of deciding whether the neighborhood, or any particular part of it, would be suitable for establishing a brick-and-mortar location. The retailer purchases the data for research and investment purposes. Its interest is in the number and patterns of individuals who walk through the neighborhood.

For these purposes, the identity of the individual is not relevant to the data buyer and is not included in the data set. It is the traffic patterns or trends and not the individual’s identity that gives this data set value.
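As a rough illustration of the foot-traffic model described above, the sketch below counts distinct anonymized devices per hour within a neighborhood bounding box, which is the kind of aggregate a retailer might buy. The data, coordinates, and function names are hypothetical.

```python
from collections import Counter
from datetime import datetime

def foot_traffic_by_hour(pings, bbox):
    """Count anonymized devices per hour of day within a neighborhood bounding box.

    pings: iterable of (device_id, timestamp, lat, lon)
    bbox: (min_lat, min_lon, max_lat, max_lon)
    Only the count of distinct devices per hour is retained -- no identities."""
    min_lat, min_lon, max_lat, max_lon = bbox
    seen = set()
    counts = Counter()
    for device_id, ts, lat, lon in pings:
        if min_lat <= lat <= max_lat and min_lon <= lon <= max_lon:
            key = (device_id, ts.date(), ts.hour)
            if key not in seen:          # count each device once per hour
                seen.add(key)
                counts[ts.hour] += 1
    return counts

# Hypothetical sample: two devices passing through the same block
sample = [
    ("id-1", datetime(2019, 3, 1, 12, 5), 40.7411, -73.9897),
    ("id-2", datetime(2019, 3, 1, 12, 40), 40.7413, -73.9899),
    ("id-1", datetime(2019, 3, 1, 18, 15), 40.7410, -73.9896),
]
print(foot_traffic_by_hour(sample, (40.7400, -73.9910, 40.7420, -73.9890)))
```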

         2. Re-identification by Unknown Third Parties

Data sets may be used to identify the individual through other means, however.

In order to verify the authenticity of the data points that comprise the data set and to allow the app/seller to track an individual’s unique location data, each individual is assigned a unique identifier, and that identifier can remain the same over time and across data sets. Presumably, then, buyers could track a given unique identifier over time and combine it with other data to identify the individual behind it.

Separately, using data analytics, location data can be combined with nonlocation data to ascertain an individual’s identity. For example, the retailer that buys the anonymous data set could note that a single data point or individual goes back and forth throughout the day between the neighborhood and a nearby residential address. Matching that data point to the address enables identification of the individual.
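A minimal sketch of that inference, assuming the buyer holds the pings of a single persistent identifier over time: the identifier's most frequent nighttime grid cell is a strong candidate for a home address, which public records can then tie to a name. The code, thresholds, and sample data are illustrative only, not any vendor's actual method.

```python
from collections import Counter
from datetime import datetime

def likely_home(pings, night_hours=frozenset({22, 23, 0, 1, 2, 3, 4, 5})):
    """Infer the likely home cell of a single anonymized identifier: the ~100 m grid cell
    (coordinates rounded to three decimals) where its pings most often appear at night."""
    cells = Counter()
    for ts, lat, lon in pings:
        if ts.hour in night_hours:
            cells[(round(lat, 3), round(lon, 3))] += 1
    return cells.most_common(1)[0][0] if cells else None

# Hypothetical trace for one identifier; the daytime ping elsewhere is ignored
sample = [
    (datetime(2019, 3, 1, 23, 10), 40.74110, -73.98970),
    (datetime(2019, 3, 2, 1, 45), 40.74112, -73.98968),
    (datetime(2019, 3, 2, 14, 0), 40.75430, -73.98400),
]
print(likely_home(sample))   # the winning cell can be matched against public address records
```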

A more sensational example of this is the use by law enforcement of DNA information combined with location data to identify suspects in cold cases.[33]

The New York Times, with permission from a school teacher, was able to accurately associate anonymous location data with the individual teacher solely by reviewing four months of location data drawn from more than a million phones and combining it with its knowledge of where she worked and lived.[34] The report posits that:

[t]hose with access to the raw [anonymized] data—including employees or clients—could still identify a person without consent. They could follow someone they knew, by pinpointing a phone that regularly spent time at that person’s home address. Or, working in reverse, they could attach a name to an anonymous dot, by seeing where the device spent nights and using public records to figure out who lived there.[35]

In fact, location data alone may be used to identify consumers in large anonymized data sets.

In 2013, MIT and Belgian researchers “analyzed data on 1.5 million cellphone users in a small European country over a span of 15 months and found that just four points of reference, with fairly low spatial and temporal resolution, was enough to uniquely identify 95 percent of them.”[36]
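The uniqueness finding can be illustrated with a small sketch: given a handful of coarse spatio-temporal observations, count how many identifiers in a dataset are consistent with them. An "anonymity set" of one means those few points single out the person. The data structures and identifiers below are hypothetical, not the researchers' actual dataset or method.

```python
def anonymity_set(traces, observations):
    """Return the identifiers whose traces contain every observed (antenna_id, hour) point.

    traces: dict identifier -> set of (antenna_id, hour_bucket) tuples (hypothetical data).
    If only one identifier matches, the observations uniquely identify that person."""
    return [ident for ident, points in traces.items() if observations <= points]

traces = {
    "u1": {("A12", "2013-04-02T08"), ("B07", "2013-04-02T13"), ("C31", "2013-04-02T19")},
    "u2": {("A12", "2013-04-02T08"), ("D44", "2013-04-02T13")},
}
print(anonymity_set(traces, {("A12", "2013-04-02T08"), ("B07", "2013-04-02T13")}))  # -> ['u1']
```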

As technology has evolved and the use and dissemination of location data has proliferated, reidentification of individuals included in anonymized data sets has been greatly facilitated:

With an increasing number of service providers nowadays routinely collecting location traces of their users on unprecedented scales, there is a pronounced interest in the possibility of matching records and datasets based on spatial trajectories. Extending previous work on reidentifiability of spatial data and trajectory matching, we present the first large-scale analysis of user matchability in real mobility datasets on realistic scales, i.e. among two datasets that consist of several million people’s mobility traces, coming from a mobile network operator and transportation smart card usage. . . .We show that for individuals with typical activity in the transportation system (those making 3-4 trips per day on average), a matching algorithm based on the co-occurrence of their activities is expected to achieve a 16.8% success only after a one-week long observation of their mobility traces, and over 55% after four weeks. We show that the main determinant of matchability is the expected number of co-occurring records in the two datasets. Finally, we discuss different scenarios in terms of data collection frequency and give estimates of matchability over time. We show that with higher frequency data collection becoming more common, we can expect much higher success rates in even shorter intervals.[37]
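The matching approach the researchers describe rests on counting co-occurring records across two datasets. The following is a simplified sketch of that scoring step only, under assumed data structures; it is not the authors' actual algorithm.

```python
from collections import defaultdict

def cooccurrence_scores(traces_a, traces_b):
    """Score candidate matches between identifiers in two mobility datasets by counting
    co-occurring records, i.e. observations in the same spatial cell and time window.

    traces_a / traces_b: dicts mapping identifier -> set of (cell_id, time_bucket) tuples."""
    # Index dataset B by (cell, time) so each record in A is compared only to co-occurring records
    index_b = defaultdict(set)
    for id_b, records in traces_b.items():
        for rec in records:
            index_b[rec].add(id_b)
    scores = defaultdict(int)
    for id_a, records in traces_a.items():
        for rec in records:
            for id_b in index_b.get(rec, ()):
                scores[(id_a, id_b)] += 1
    return scores  # the highest-scoring pair is the most plausible cross-dataset match
```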

          3. Privacy and Security Risks

As tracking technologies become further developed and more widely accessible and data analytics become more sophisticated, anonymous data points (particularly when tracked over time) can be used to facilitate identification of the individual.

Consider the private investigation of various retail robberies.[38] If the retailer did not have a suspect’s name, its private investigator could identify possible suspects by the following steps (the device intersection in step 2 is sketched after the list):

  1. purchasing from an aggregator anonymized cell phone location data for all individuals near each robbed location during the time of each robbery;
  2. pinpointing unique IDs or data points for all phones present at some or all of the robberies;
  3. requesting extended cell phone location data for the unique IDs or data points from the wireless carriers;
  4. purchasing larger pools of anonymized data from an aggregator and reidentifying data points within a given area and timeframe; or
  5. hiring a bounty hunter to track the numbers and locations of the phones tied to the unique IDs or data points.[39]
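A minimal sketch of the step-2 intersection, assuming the investigator has purchased pings keyed to anonymized device identifiers; the data structures, field names, and threshold are hypothetical.

```python
from collections import Counter

def devices_at(pings, scene_cell, window):
    """Anonymized device IDs observed in a given location cell during a time window."""
    start, end = window
    return {dev for dev, ts, cell in pings if cell == scene_cell and start <= ts <= end}

def candidate_suspects(pings, robberies, min_hits=2):
    """Device IDs present at min_hits or more robbery scenes become investigative leads.

    robberies: iterable of (scene_cell, (start_time, end_time)) pairs."""
    hits = Counter()
    for scene_cell, window in robberies:
        for dev in devices_at(pings, scene_cell, window):
            hits[dev] += 1
    return {dev for dev, count in hits.items() if count >= min_hits}
```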

The City of Los Angeles passed rules requiring scooter companies to provide the per-trip location data of each scooter to city officials within 24 hours of the end of the trip. Although the rider’s identity is not disclosed to the city and the location data will be treated as confidential by the city, it will be accessible in aggregated form to various city agencies and accessible in per-trip form to law enforcement, subject to a warrant, and to third parties, in response to a subpoena. Given the sensitivity of location data and the ability to use location data itself to identify individuals, consumer advocates have framed this not as a matter between the scooter companies and the city but as a matter of governmental surveillance and a debate between individual citizens and the city:

“This data is incredibly, incredibly sensitive,” said Jeremy Gillula, the technology projects director for the Electronic Frontier Foundation, a San Francisco-based digital rights group.

The vast trove of information could reveal many personal details of regular riders — such as whom they’re dating and where they worship — and could be misused if it fell into the wrong hands, the nonprofit Center for Democracy and Technology told the city in a letter.[i]

De-identified, real-time location data is regularly monetized and sold to third parties for a variety of purposes unrelated to the original transaction that justified the initial location data collection. Location tracking use cases include the following scenarios:

  1. location data point identified to a specific individual;[40]
  2. location data point identifiable to a specific individual;
  3. location data point not identified to the individual;
  4. continuous location tracking identified to a specific individual;
  5. continuous location tracking identifiable to a specific individual;
  6. continuous location tracking not identified to the individual;
  7. development of a profile based on location tracking identified to a specific individual;
  8. development of a profile based on location tracking that is identifiable to a specific individual; and
  9. location data used to compile a profile of an unidentified individual.

As described above, the distinctions among these categories become less relevant in practice, and the risks posed by transfers of anonymized location data may be as great as those posed by sales of identified location data.

III. Location Tracking: Profiling the Individual

Precise tracking of an individual’s location over time can be used to discover information about the individual that may not be otherwise available (consider repeat trips to a bar, to the home of a person who is not the individual’s spouse, or to an oncologist), which, when combined with other data, can be used to develop a fairly comprehensive profile of the individual. Even anonymized data profiles can pose these risks to the individual due to the relative ease of reidentifying an individual, as described above.

     A. Data Profiling and Decision-Making

Profiling is done for a variety of purposes; targeted advertising and marketing are the most familiar. For example, if an Apple customer is in geographical proximity to an Apple Store, his or her phone could provide ads for Apple TV. These ads may be more successful if the individual were located in a TV store near an Apple Store, or better yet, if the individual were located for several minutes in an Apple Store near the Apple TV demo.

Individual data profiling has become sophisticated and comprehensive, and location data is an integral part of profiling:

A profile is a combination of metrics, key performance indicators, scores, business rules, and analytic insights that combine to make up the tendencies, behaviors, and propensities of an individual entity (customer, device, partner, machine). The profile could include:

  • Key demographic data such as age, gender, education level, home location, marital status, income level, wealth level, make and model of car, age of car, age of children, gender of children, and other data. For a machine, it might include model type, physical location, manufacturer, manufacturer location, purchase date, last maintenance date, technician who performed the last maintenance, etc.
  • Key transactional metrics such as number of purchases, purchase amounts, returns, frequency of visits, recency of visits, payments, claims, calls, social posts, etc. For a machine, that might include miles and/or hours of usage, most recent usage time and date, type of usage, usage load, who operated the product, route of product usage (for something like a truck, car, airplane, or train)
  • Scores (combinations of multiple metrics) that measure customer satisfaction level, financial risk tolerance, retirement readiness, FICO, advocacy grade, likelihood to recommend (LTR), and other data. For a machine, that might include performance scores, reliability scores, availability scores, capacity utilization scores, and optimal performance ranges, among other things
  • Business rules inferred using association analysis; for example, if CUST_101 visits a certain Starbucks and a certain Walgreens, we can predict (with 90% confidence level) that there is an 85% likelihood that this customer will visit a certain Chipotle within 60 minutes
  • Group or network relationships (number, strength, direction, sequencing, and clustering of relationships) that capture interests, passions, associations and affiliations gained from using graphic analysis
  • Coefficients that predict certain outcomes or responses based upon certain independent variables found through regression analysis; for example, a machine’s likelihood to break down given a number of interrelated variables such as usage loads since last maintenance, the technician who performed the maintenance, the machine manufacturer, temperatures, humidity, elevation, traffic, idle time, etc.)
  • Behavioral groupings of like or similar machines or people based upon usage transactions (purchases, returns, payments, web clicks, call detail records, credit card payments, claims, etc.) using clustering, K-nearest neighbor (KNN), and segmentation analysis[41]
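The association-rule bullet above can be made concrete with a short sketch that estimates a rule's confidence from hypothetical visit data (ignoring the 60-minute window for simplicity). It illustrates the general technique, not any particular vendor's model; all names and values are made up.

```python
def rule_confidence(visits, antecedents, consequent):
    """Estimate the confidence of an association rule such as
    'visits Starbucks X and Walgreens Y -> also visits Chipotle Z':
    the share of customers matching the antecedents who also match the consequent.

    visits: dict customer_id -> set of visited place IDs (hypothetical data)."""
    matching = [c for c, places in visits.items() if antecedents <= places]
    if not matching:
        return 0.0
    hits = sum(1 for c in matching if consequent in visits[c])
    return hits / len(matching)

visits = {
    "CUST_101": {"starbucks_5th", "walgreens_12", "chipotle_9"},
    "CUST_102": {"starbucks_5th", "walgreens_12"},
    "CUST_103": {"starbucks_5th", "chipotle_9"},
}
print(rule_confidence(visits, {"starbucks_5th", "walgreens_12"}, "chipotle_9"))  # -> 0.5
```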

Location data analytics are used to make a variety of decisions that may impact the individual. One use case for data profiling is credit-risk analysis. Such data profiles may arguably be considered “consumer reports” governed by the federal Fair Credit Reporting Act (FCRA). As the lines have blurred between online decision making and targeted advertising, and between prescreening and marketing (the former in each pair is governed by FCRA and the latter is not), it certainly appears as if credit availability depends, in part, on secondary market data that the consumer reporting agencies do not treat as “consumer reports” under FCRA.[42]

Payment-card fraud management can also be enhanced by developing profiles of each cardholder. Combining device location data with transaction histories makes fraud detection more precise:

New technologies . . . merg[e] a broader range of financial data, mobile-phone data, and even social-networking data to better establish the likelihood it’s actually you behind the transactions racking up on your cards or mobile device. Nguyen says that Feedzai’s system can improve fraud detection rates from 47 percent to almost 80 percent. ­Chirag Bakshi, founder and CEO of Zumigo, a company in San Jose, California, that provides location-based mobile services, says his company’s data algorithms reduce fraud losses by at least 50 percent.

“When fraudsters steal your identity, what they can’t do is steal your behavior,” Nguyen says. That, in fact, has long been the principle behind credit card fraud alerts. But a conventional credit card company is relying on information from your past to guess whether each attempted transaction is genuine. Today’s new technologies tap into your mobile phone and its more up-to-date information to see if your current behavior matches your purchase.

“[We can use] a SIM card as a proxy for a person,” says Rodger Desai, CEO of Payfone, which works with banks, mobile operators, and fraud detection companies to assess the legitimacy of a given payment. Payfone builds a profile of a user and tracks more than 400 types of data to create what it calls a persistent identity. Change phone company? Noted. Someone steal your phone or clone it? The company will catch that, too. Even if you’ve canceled your cellular data plan, it has ways of flagging the activity of someone who then tries to use the phone’s Wi-Fi connection.[43]
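A simplified sketch of the location-based check such systems layer onto transaction histories: compare the merchant's location with the cardholder's device location at the time of purchase and flag implausible distances. The coordinates, threshold, and function names are hypothetical, and real systems weigh many more signals.

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometers between two (lat, lon) points."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))

def transaction_risk(merchant_lat, merchant_lon, device_lat, device_lon, max_plausible_km=50):
    """Flag a card transaction for review if the cardholder's phone is implausibly far
    from the merchant at the time of purchase."""
    distance = haversine_km(device_lat, device_lon, merchant_lat, merchant_lon)
    return "REVIEW" if distance > max_plausible_km else "ALLOW"

# Hypothetical: card used in Miami while the cardholder's phone reports Chicago
print(transaction_risk(25.76, -80.19, 41.88, -87.63))   # -> REVIEW
```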

Data analytics for decreasing fraud are likely welcome to the individual. Once a “persistent identity” is created by profiling the individual’s location and related data, however, there are few limits on how that profile may be used or sold:

Mobile location data firms interviewed for this story stressed their dedication to encrypting data to prevent direct connections to individuals, yet there are no industry-wide accepted practices or U.S. government regulations preventing the use of such data in ways that weren’t originally intended. For instance, data reflecting drinking or drug use arguably could find its way into data models for targeting ads for health insurance plans, or even find its way into formulas used to calculate health or auto insurance rates or job eligibility.[44]

     B. Behavioral Influencing

Use of predictive modeling has been extended to influence behavior:

It works like this: Ads press teenagers on Friday nights to buy pimple cream, triggered by predictive analyses that show their social anxieties peaking as the weekend approaches. “Pokémon Go” players are herded to nearby bars, fast-food joints and shops that pay to play in its prediction markets, where “footfall” is the real-life equivalent of online clicks.[45]

The intrusiveness of such profiles cannot be overstated. Facebook has shown advertisers:

how it has the capacity to identify when teenagers feel “insecure”, “worthless” and “need a confidence boost”, according to a leaked document based on research quietly conducted by the social network[, which] states that the company can monitor posts and photos in real time to determine when young people feel “stressed”, “defeated”, “overwhelmed”, “anxious”, “nervous”, “stupid”, “silly”, “useless” and a “failure.”[46]

Location data is key to this type of influencing:[47]

The next step—”from flat, to fast, to deep, is psychic,” Friedman believes. “I now know your whole psychographic from your phone. I will just push you your groceries, push you the supplies you need, push you the information you need.”

The use of profiles for behavioral targeting is likely as limitless as the use of profiles for behavioral prediction:

Imagine a not-so-distant future where you’re just driving on the highway. Your car is sending real-time data about your performance behind the wheel to your insurance company. And in return, the insurance company is sending real-time driver behavior modification punishments back—real-time rate hikes, curfews, even engine lockdowns. Or, if you behave in the way they like, you get an instant rate discount.

In other words, the insurance company is shaping your behavior right then and there. Would you like that? What does it mean for our entire understanding of free will?[48]

Once the individual’s “persistent identity” is created, however, its uses are not limited. Consider another Facebook scandal: Cambridge Analytica. Cambridge Analytica combined personal user data obtained from a Facebook app developer (in violation of its nondisclosure agreement) with data combined from other sources, including location data, to compile profiles of voters around the world for the purpose of influencing elections using propaganda and direct marketing.[49] Up to 87 million Facebook users worldwide were profiled with the intent of waging “psychological warfare” against and targeting “influence operations” to these users.[50] Cambridge Analytica’s parent company’s reach exceeded “100 election campaigns in over 30 countries spanning five continents.”[51] Cambridge Analytica was a secondary market user of the location data collected from Facebook profiles and from external sources. The Facebook users had no idea that the voter profiles were being compiled or that their location data was being used to identify them for specific political campaigns for the purposes of influencing their votes.

     C. Privacy and Security Risks

Use of location tracking data to create individual profiles is not addressed under current law and poses unique risks. The eventual data buyer that compiles the data profile or identifies an individual in relation to a profile may not be in privity with either the individual or the original data collector. Further, once a profile is created for a specific purpose, there are few limits on using the profile for other purposes:

The fact is that location data is flowing around the digital ecosystem with little control. Many of the firms that have built businesses on using mobile location data for ad targeting gather the data from ad calls made by programmatic ad systems. And audience segments like “frequent quick serve restaurant visitors” could be accessed for ad targeting as easily as they could be excluded from targeting parameters for health insurance ads, for instance.  “Even though data is used just for marketing, there’s no reason to think it will only be used just for that purpose,” said Dixon. “Those formulas—they are data hungry,” she said of data models used by insurance firms or other corporations.[52]

At this point, the uses and distribution of individual profiles based on location data appear limitless, even though the individual has no control over or knowledge of them and may not opt out of data profiling or access or correct data profiles. Moreover, use of such profiles has become increasingly intrusive as secondary market participants seek to monetize their value.

IV. United States Law and Location Tracking

Federal law does not directly regulate location tracking or the collection, sale, or use of personal location data.[53] Location tracking has, however, been the focus of recent significant actions.

     A. FTC Enforcement Actions and Issuances

The Federal Trade Commission (the FTC) has focused on location tracking for several years through reports and a series of enforcement actions under section 5 of the Federal Trade Commission Act regarding unfair and deceptive trade practices (UDAP)[54] and the Children’s Online Privacy Protection Act (COPPA).[55]

          1. Sensitivity of Location Data

In its 2013 FTC Staff Report, Mobile Privacy Disclosures: Building Trust Through Transparency, the FTC stated that location tracking should be preceded by just‐in‐time disclosures made to the individual and subject to the individual’s affirmative express consent. The disclosures should clearly explain how the location tracking is conducted (i.e., one‐time versus persistent collection practices) and for which purposes.[56] In its 2012 Privacy Report, the FTC asserted that the precise location data of an individual should be considered “sensitive” information (similar to children’s data, health, and financial information) and should not be stored beyond the time period necessary for providing the service to the individual that justified the location tracking in the first place. The FTC clarified that affirmative express consent is generally required for location data collection, except when appropriate in context (e.g., when the individual searches for nearby weather or locations).[57]

          2. Misuse by Data Collectors

Uber uses real-time location tracking to locate drivers and riders and to schedule and administer rides. In 2017, the FTC pursued a UDAP enforcement action against Uber for its collection of location data even when the app was not in use and use of such data for purposes other than administering rides:

The FTC entered into a consent order with Uber Technologies, Inc. regarding its use of the so-called “God View” feature of the Uber application software (“Uber App”), which implemented continuous geolocation tracking of all users (drivers and riders) at all times and allowed employee access to such tracking information, regardless of whether or not the users were actively using the Uber App or the Uber ride service.[58] The FTC complaint alleged the following unfair and deceptive trade acts and practices: Uber employees improperly accessed the user geolocation information for purposes other than picking up riders, including allegations that employees accessed the geolocation information of certain riders who were journalists critical of Uber’s business practices for the purposes of conducting “opposition research” on such journalists.[59] Uber subsequently publicized action taken to limit and monitor employee access to such geolocation information but such limits and monitoring were ultimately abandoned by Uber.[60][61]

The name “God View” is apt; real-time location tracking compiles a precise and continuous location record of the individual’s whereabouts indefinitely. (Facebook’s BOLO list (“be on lookout”) recently came under scrutiny. Like God View, BOLO uses Facebook app and website activity to monitor the real-time location of users Facebook has determined pose a credible threat to the company or its officers.[62])

          3. Access to and Use of Location Data by Third Parties

The FTC entered into similar consent orders with other companies that collected personal location data:

  1. A cell phone provider whose Chinese security vendor installed firmware on the phones that collected user personal data, including cell phone tower location data (UDAP).[63]
  2. A marketing enterprise platform service provider whose targeted advertising software tracked app user location data (including that of children) and combined such data with aggregated wireless network data to identify an individual user’s precise location for purposes of ad targeting (UDAP and COPPA).[64]

The FTC described the variety of systems and methodologies used to develop the marketing platform at issue in the second action above. The InMobi SDK platform allowed app developers to integrate their apps with the platform for purposes of monetizing their users’ location data by allowing third-party advertisers to target ads to the app users:[65]

So how did InMobi circumvent these protections to track the consumer’s location without consent? By creating its own geocoder database. As explained in more detail in the complaint, InMobi collected information through consumers’ devices that allowed it to map out the real-world latitude and longitude coordinates of Wi-Fi networks. InMobi then monitored the Wi-Fi networks that a consumer’s device connected to (on both Android and iOS), and in many instances, the Wi-Fi networks that a consumer’s device was in-range of (on Android). By collecting the BSSID (i.e., a unique identifier) of the Wi-Fi networks that a consumer’s device connected to or was in-range of, and feeding this information into its geocoder database, InMobi could then infer the consumer’s location. Until December 2015, InMobi used this method to track the consumer’s location even if the application that had integrated the InMobi SDK had never asked the consumer for permission to access location, and even if the consumer had turned off all location services on the device.
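A stripped-down sketch of the geocoding technique the FTC describes (the BSSID values, table, and function are hypothetical, not InMobi's code): once a table mapping Wi-Fi BSSIDs to coordinates exists, a device's approximate location can be inferred from the networks it connects to or sees in range, without any GPS permission.

```python
# Hypothetical geocoder table: BSSID -> (lat, lon), built from devices that did share GPS fixes
BSSID_GEOCODER = {
    "aa:bb:cc:11:22:33": (37.7793, -122.4192),
    "aa:bb:cc:44:55:66": (37.7795, -122.4188),
}

def infer_location(observed_bssids, geocoder=BSSID_GEOCODER):
    """Infer a device's approximate location from the Wi-Fi BSSIDs it observes,
    by averaging the known coordinates of those networks."""
    known = [geocoder[b] for b in observed_bssids if b in geocoder]
    if not known:
        return None
    lat = sum(p[0] for p in known) / len(known)
    lon = sum(p[1] for p in known) / len(known)
    return (lat, lon)

print(infer_location(["aa:bb:cc:11:22:33", "aa:bb:cc:44:55:66"]))
```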

          4. Notice-and-Choice Model

In all of these enforcement actions, the UDAP claims against the data collector were based on:

  1. the failure to give clear disclosures to the individual;
  2. the failure to obtain valid consent or failure to honor opt-out; and
  3. collection of location data in conflict with stated privacy policies.

Notice and choice are also key to COPPA compliance. The FTC sent COPPA warning letters in 2018 to two parental tracker manufacturers for failure to give direct notice of the real-time collection of precise location information and obtain verifiable parental consent in connection with the marketing and sale of children’s smart watches.[66] (Recent media scrutiny has focused on privacy vulnerabilities associated with children’s GPS tracking.)[67] The failure to give notice and obtain verifiable parental consent was also the crux of the COPPA claim in InMobi.[68]

This notice-and-choice model focuses on whether the individual understood the extent of the real-time location monitoring and consented to it or did not opt out of it. Based on the foregoing, location tracking that is clearly disclosed and subject to effective consent may be permissible under both UDAP and COPPA.

All of these actions were against the data collector and involved location data that was (1) identified to a particular individual and (2) collected over time. The FTC consistently expressed concern about the pervasiveness and intrusiveness of the continuous location tracking of the individuals.

Although the enforcement action in InMobi was against the initial data collector, the FTC’s concerns about combining location data with other data for purposes of monetizing the data in ways unrelated to the initial transaction between the individual and the app would also apply to secondary market use of the data. As currently enacted, however, neither UDAP nor COPPA grants the FTC enforcement authority over secondary market participants that are not in privity with the individual.

     B. United States Supreme Court: Carpenter v. United States[69]

Last year, the U.S. Supreme Court decided that law enforcement must have a Fourth Amendment probable cause warrant to obtain an individual’s long-term, real-time location data from the individual’s wireless carrier.[70]

Like the FTC, albeit in a much different context, the Court was struck by (1) the intrusiveness and pervasiveness of continuous location tracking; and (2) the use of location data for purposes unrelated to the justification for the original collection.

The facts and background of Carpenter v. United States are as follows:[71]

This case involved a series of armed robberies and an order under the Stored Communications Act (“SCA”).[72] In 2011, a group of men robbed a series of Radio Shack and T-Mobile stores.[73] A suspect gave Federal Bureau of Investigation (“FBI”) agents the names and cellular phone numbers of several accomplices, including Carpenter. Based on that information, the FBI was able to obtain an SCA court order for Carpenter’s cellular phone records, including geolocation information, during the four-month period in which the robberies occurred. (The type of geolocation information at issue here is specifically cell-site location information, which is tied to the proximity of an individual phone to each of the wireless carrier’s radio antennae.[74])

The SCA prescribes limited circumstances under which the government can compel an electronic communications service provider (“SP”) to disclose user content or data.[75] The government must obtain a warrant, subpoena, or court order under the SCA (“SCA order”) requiring such disclosure without notice to the user.[76] The effect of these SCA requirements is to give the user an expectation of privacy in his SP records.[77]

In response to the SCA order, the FBI “obtained 12,898 location points cataloging Carpenter’s movements—an average of 101 data points per day.”[78] As a result, he was arrested for multiple counts of armed robbery and the federal crime of carrying a firearm. He argued that the FBI’s seizure of the geolocation records “violated the Fourth Amendment because they had been obtained without a warrant supported by probable cause.”[79]

At issue was whether Carpenter had a “reasonable expectation of privacy” in his personal location information, which was entitled to protection under the Fourth Amendment prohibition against “unreasonable search and seizure.” If the answer to the preceding question is “yes,” then access to such records would have required a “warrant supported by probable cause,” rather than the SCA order’s less stringent showing requiring the proffer of “specific and articulable facts showing that there are reasonable grounds to believe that the contents of a wire or electronic communication, or the records or other information sought, are relevant and material to an ongoing criminal investigation.”[80]

In a precursor to Carpenter, the Court considered the applicability of the Fourth Amendment to the use of a GPS tracking device on a suspect’s vehicle.[81] In that case, FBI agents tracked the vehicle’s movements continuously over a 28-day period. The majority opinion held that this tracking was subject to the Fourth Amendment and relied on a property-rights theory in doing so: the vehicle was the suspect’s private property, and the government’s physical intrusion on its undercarriage to install the GPS device constituted a search.

In his concurring opinion in Jones, Justice Alito focused more on the invasion of privacy resulting from the GPS tracking itself (rather than the placement of the tracker on the vehicle) and explained persistent GPS tracking surveillance as follows:

Prolonged surveillance reveals types of information not revealed by short-term surveillance, such as what a person does repeatedly, what he does not do, and what he does ensemble. These types of information can each reveal more about a person than does any individual trip viewed in isolation. Repeated visits to a church, a gym, a bar, or a bookie tell a story not told by any single visit, as does one’s not visiting any of these places over the course of a month. The sequence of a person’s movements can reveal still more; a single trip to a gynecologist’s office tells little about a woman, but that trip followed a few weeks later by a visit to a baby supply store tells a different story. A person who knows all of another’s travels can deduce whether he is a weekly church goer, a heavy drinker, a regular at the gym, an unfaithful husband, an outpatient receiving medical treatment, an associate of particular individuals or political groups – and not just one such fact about a person, but all such facts.[82]

The Court recognized its unique challenge in Carpenter:

The issue we confront today is how to apply the Fourth Amendment to a new phenomenon: the ability to chronicle a person’s past movements through the record of his cell phone signals. . . . [L]ike GPS tracking of a vehicle, cell phone location information is detailed, encyclopedic, and effortlessly compiled.[83]

The Court focused on the pervasiveness of the information collected, the completeness of the individual profile that may be compiled using such information, and the retrospective use of data collected on an individual who had not been a suspect during the time period of collection.[84] The Court held that the “tireless surveillance” of location data collection and the collection of such data by using the individual’s smart phone, which is such a personal and intimate device as to be considered an extension of the self, “invaded Carpenter’s reasonable expectation of privacy in the whole of his physical movements.”[85]

     C. Proposed Federal Law

There are competing views as to what federal data privacy legislation should look like.

Senator Rubio proposed a Senate bill requiring the FTC to promulgate regulations to impose privacy requirements on internet service providers that would allow individuals access to and the ability to dispute inaccuracies in records relating to the individual. This law would preempt state privacy laws and exempt certain entities covered by other federal privacy laws.[86]

Senator Markey proposed a Senate bill, the CONSENT Act, that would require the FTC to issue regulations requiring: consumer consent for the sale of “precise geolocation information”; disclosures regarding the collection, use, and sharing of such data; and preservation of the anonymity of such data that has been de-identified.[87] This act would protect such data upon collection and from sale in the secondary market.

Senator Wyden has introduced, for discussion purposes, a draft of a much more comprehensive Senate data privacy bill based on the European Union’s General Data Protection Regulation.[88] This act would protect “any information, regardless of how the information is collected, inferred, or obtained that is reasonably linkable to a specific consumer or consumer device.” This definition appears to protect location data that is identified or identifiable. The act applies to “automated decision making” and disclosures to third parties, which may make it applicable to the secondary location data market.

In addition, big data companies and their industry associations have issued guidance on federal privacy legislation; Apple’s proposal is one of the few that recommends regulating data brokers and allowing individuals to access and delete data sold on the secondary market.[89]

Intel has proposed a draft “Innovative and Ethical Data Use Act of 2018,” which would be enforced by the FTC and focuses on “privacy risk” to individuals; this bill would require privacy notices to specify the intended uses of the data and limit the further use and dissemination of data. The collection and use of geolocation data (which the bill states creates “significant privacy risk” to the individual) would necessitate more explicit notice and heightened privacy protections.

This month the U.S. Senate Committee on the Judiciary held a hearing, “GDPR & CCPA: Opt-ins, Consumer Control, and the Impact on Competition and Innovation,” which squarely addressed location tracking, with Senator Josh Hawley questioning Google’s senior privacy counsel, Will DeVries, about Google’s location tracking practices:[90]

He wanted to know whether DeVries thought a user would expect that Google tracks “where she goes to work, where her boyfriend lives, where she goes to church” or to the doctor. “Do you think the user expects that? Do you think you’re communicating clearly when a user cannot turn off their location tracking?”

DeVries said Google’s use of location tracking is to make its services more effective, not to make money.

It’s “necessary to make services work,” DeVries said. “If we turned those off, your phone wouldn’t work like you’d expect,” adding that the operational aspects of it are complicated.

But Hawley wasn’t satisfied with that. 

“It’s not complicated,” he said. “What’s complicated is you don’t allow consumers to stop your tracking of them.”

He continued, “Here is my basic concern: Americans have not signed up for this, they think the products you’re offering are free; they’re not free. They think they can opt out; they can’t opt out. It’s kind of like that old Eagles song, ‘You can check out any time you like, but you can never leave.’ And that’s a problem for the American consumer; it’s a real problem. And for somebody who has two small kids at home, the idea that your company and others like it will sweep up information to build a user profile on them that will track every step, every movement and monetize that, and they can’t do anything about it, and I can’t do anything about it, that’s a big problem this Congress needs to address.”

V. State Law

State laws vary regarding whether a warrant must be obtained by law enforcement to obtain cell phone location information.[91] All 50 states have unfair and deceptive trade practices laws like the federal UDAP statute (state UDAP).[92] Recently, some state legislatures have begun to focus more specifically on the privacy of individual location data.

     A. Location Data Collection

California’s Consumer Privacy Act (CCPA) (effective January 1, 2020) provides that protected “personal information” includes “geolocation data” that “identifies, relates to, describes, is capable of being associated with, or could reasonably be linked, directly or indirectly, with a particular consumer or household.”[93] The effect of this protection would be to enable consumers to (1) find out what types of location data are being collected and how they are used, and (2) direct companies to (a) delete location data under certain circumstances, and (b) refrain from selling location data to third parties.[94]

     B. Secondary Location-Data Market

Only one state has acted to regulate the secondary data market generally. Vermont enacted the first U.S. law governing data brokers, effective January 1, 2019.[95] The statute applies to companies that collect or sell “brokered personal information” regarding individuals who are not customers of the company (data brokers). “Brokered personal information” includes “other information that, alone or in combination with the other information sold or licensed, would allow a reasonable person to identify the consumer with reasonable certainty.”[96] Arguably, previously anonymous location information that is identified or identifiable to a specific individual would be protected by the statute. Among other requirements, data brokers must maintain information security programs and register annually with the state; the annual registration must describe how individuals may opt out, if the data broker offers an opt-out of its database.[97] The statute does not require data brokers to make disclosures to individuals or to offer database opt-outs.

Effective January 18, 2019, New York regulates the use of secondary-market data by life insurers in underwriting, particularly the use of data profiling as a “proxy for traditional medical underwriting.”[98] The Department of Financial Services explained the risks to the individual of external data analytics, including profiling:

First, the use of external data sources, algorithms, and predictive models has a significant potential negative impact on the availability and affordability of life insurance for protected classes of consumers. An insurer should not use an external data source, algorithm or predictive model for underwriting or rating purposes unless the insurer can establish that the data source does not use and is not based in any way on race, color, creed, national origin, status as a victim of domestic violence, past lawful travel, or sexual orientation in any manner, or any other protected class. . . . Second, the use of external data sources is often accompanied by a lack of transparency for consumers. Where an insurer is using external data sources or predictive models, the reason or reasons for any declination, limitation, rate differential or other adverse underwriting decision provided to the insured or potential insured should include details about all information upon which the insurer based such decision, including the specific source of the information upon which the insurer based its adverse underwriting decision.

     C. Proposed State Law

Several states are in the process of considering various types of privacy legislation.

The proposed Washington Privacy Act expressly provides that specific location data is a personal identifier.[99] Like the California Act, the Washington Act would give consumers more access to and control over how their identifiable location data is used.

Other states are in the process of considering privacy legislation based on the CCPA, including Hawaii, Maryland, Massachusetts, New Mexico, and Rhode Island.[100] These statutes would also protect location data that is identified or identifiable to an individual. Illinois, New Jersey, and New York are all considering statutes that apply to online services, which may cover location data tracked via app.[101]

New Jersey is considering legislation that would require operators of mobile device apps that collect GPS data to clearly disclose to the user what GPS data is collected, all third parties to whom GPS data is disclosed, and how long the GPS data is retained, and allow the user a meaningful opt-in to certain types of GPS data-sharing.[102]

Oregon is considering legislation that would amend its state UDAP statute (the Data Transparency and Privacy Protection Act, “DATPA”) to require “express written consent” from an individual prior to collecting or selling geolocation data.[103] A summary of the bill explains:

Additional DATPA provisions require a business entity collecting, analyzing, deriving, selling, leasing, or otherwise transferring an Oregonian’s geolocation or audiovisual information to first disclose intent and methodology, and receive express consent from that individual. Geolocation refers to data displaying the location of a digital electronic device (cellular phone, tablet, etc.) on a map or similar depiction.[104]

The proposed DATPA would require an individual’s consent prior to “the collection, use, storage, analysis, derivation, sale, lease or other transfer” of geolocation information.[105] This would require both the initial data collector and each secondary market participant to obtain consent from an individual prior to using location data to profile the individual or transferring the location data to another party.

VI. Alternatives for Regulation of Location Data

Given the rapid evolution and seemingly limitless scope of location data tracking and usage, regulating specific technologies or types of use may not be practical.

Another regulatory model would require the default for location-based apps to limit collection, use, and disclosure to that which is necessary for the provision of app services. Individual consent could be obtained for marketing by the collector. Recipients of the location data could be directly regulated to limit use to facilitation of app services and to prohibit other use or disclosure. This would prevent sale of the individual’s location to unknown third parties for unknown purposes.

The following proposals are incomplete suggestions; one or more may act as a launchpad for regulatory discussion. A combination of approaches commensurate with the sensitivity of location data and the complexity of its uses may be appropriate.

     A. At the Point of Initial Collection or Use

          1. Ensuring That an Identified Individual Has Notice and Choice

Current laws and enforcement regulate the data collector and focus on location data that is identified to a particular individual. Pending and proposed laws may also protect location data that is personally identifiable.[106]

The current regulatory approach focuses on whether the individual has the right to know how his or her location data is collected, used, and shared, and whether he or she should have the right to opt out of or decline to consent to certain types of sharing. Heightened scrutiny is generally given to disclosures of location data by the data collector for commercial purposes unrelated to (1) the purposes of the initial collection, or (2) the relationship between the collector and the individual (i.e., secondary market usage). Several proposed laws follow this approach.

Enforcement and liability in this context may depend on the following issues: whether the privacy policy of the data collector and related settings are clear; whether the user consent is effective; whether the purpose of subsequent sharing by the data collector aligns with the disclosures or consents; and whether adequate nondisclosure agreements and security measures are in place to protect the location data from both disclosure and uses exceeding the data collector’s original notice.

Similarly, the digital marketing industry group “Data Marketing & Analytics” (DMA) has guidelines for direct, mobile location-based marketing that advise marketers to comply with the Telephone Consumer Protection Act[107] and “inform Individuals how location information will be used, disclosed and protected so that the Individual may make an informed decision about whether or not to use the service or consent to the receipt of such communications. Location-Based information must not be shared with third-party marketers unless the individual has given Prior Express Consent for the disclosure.”[108]

The consent model can be problematic in this context. Google’s consent practices for location tracking have come under recent scrutiny:

  1. If a user turns off “Location History” for Google services, this action does not stop location tracking, but only halts the user’s ability to view his or her location data going forward.[109]
  2. In order to stop location tracking by Google, the user must go to a separate setting, “Web & App Activity,” and opt out of tracking.[110]

The two settings are not in proximity to one another and do not cross-reference each other.

At issue is whether the following are deceptive: (1) the text of the first setting (“Location History”); (2) the text and location of the second setting (“Web & App Activity”), which is not in proximity to or cross-referenced with the first; and (3) the fact that the default setting for both is real-time location tracking. Google argues that its disclosures are clear and that user consents to location tracking are valid.[111]

France recently fined Google $57 million for these practices, finding violations of the General Data Protection Regulation’s requirements regarding clear disclosures and valid user consent.[112] In the United States, the FTC is under pressure to investigate Google for these practices under UDAP and an existing consent order with Google.[113] State attorneys general are beginning to consider pursuing state UDAP enforcement actions against Google.[114]

The City of Los Angeles recently sued the operator of The Weather Channel (TWC) app, alleging “fraudulent and deceptive” trade practices. TWC app user location information was sold to advertisers and marketers for purposes of serving app users advertising targeted to their location.[115]

The TWC app’s location consent prompt states that location access will be used to provide personalized local weather. The consent does not mention marketing or disclose that location tracking continues even when the app is not in use:

For years, TWC has deceptively used its Weather Channel app to amass its users’ private, personal geolocation data—tracking minute details about its users’ locations throughout the day and night.[116]

The notice-and-choice model may not be workable:[117]

The free and informed consent that today’s privacy regime imagines simply cannot be achieved. Collection and processing practices are too complicated. No company can reasonably tell a consumer what is really happening to his or her data. No consumer can reasonably understand it. And if companies can continue to have their way with user data as long as they tell users first, consumers will continue to accept the unacceptable: If they want to reap the benefits of these products, this is the price they will have to pay.

There are also deficiencies in the app development models espoused by several Big Tech companies, in which app developers are required to provide notice and choice but the platforms that make the apps available to individuals exercise no actual oversight. An exhaustive study of pre-installed Android apps showed a lack of supervision by Google over the location data and other collection activities of apps that come automatically loaded on each Android device (and that often cannot be deleted by the individual).[118] Instead, Google apparently relied on privacy and security requirements and tools provided to app developers, much as Facebook apparently relied on disclosure limitations imposed on its app developers, in both cases without actually overseeing the developers’ privacy practices.

          2. Substantive Limits on Collection

One difficulty with consent in this context is that the data collector does not know what all possible uses of the location data might be. Indeed, collection of location data often seems excessive in comparison to the purpose of collection. The effect of over-collecting continuous location data has been to encourage alternative uses and monetization of that data.

As an alternative or complement to the notice-and-choice model, substantive limits could be placed on how much and what types of location data may be collected at the outset. This model would treat the collector as a gatekeeper. For example, a collector could be limited to collecting location data only as necessary to provide the service for which the data is collected, which would limit the volume of location data amassed in the first place.

     B. At the Point of Transfer

          1. Ensuring That Individual Location Data Is Anonymous

If location data is identified or identifiable to a specific individual when collected, it may be entitled to protection under current and proposed privacy law. In order to transfer such data, the individual’s consent or failure to opt out may be required. Location data that is properly anonymized would generally be excludable from the definition of personal information under applicable law and not subject to the notice-and-choice model.

If rapidly evolving tracking technologies and data analytics methodologies enable an actual location to be used to identify a unique individual, however, then arguably unique location data is personal information. The key in this context would be identifiability, which depends on the privacy and security measures taken to ensure the anonymity of the data and on the likelihood or risk that the location data will be reidentified to an individual.
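To see why identifiability is difficult to guarantee, consider the following minimal sketch. It simulates a set of “anonymized” hourly location traces and counts how many traces remain consistent with a handful of externally observed time-and-place points. The data, grid size, and variable names are invented assumptions for illustration only and are not the methodology of any cited study, although the result echoes the de-anonymization research cited in the notes finding that a few spatio-temporal points can uniquely identify most individuals.

```python
# A minimal sketch with simulated traces; all parameters are arbitrary assumptions.
import random

random.seed(7)
NUM_DEVICES = 10_000      # "anonymized" device traces in a hypothetical data set
HOURS = 24 * 7            # one week of hourly samples
GRID_CELLS = 400          # coarse 20 x 20 grid of location cells

# Each trace is one week of hourly cell visits, with no name or ID attached.
traces = [[random.randrange(GRID_CELLS) for _ in range(HOURS)]
          for _ in range(NUM_DEVICES)]

def consistent_traces(observations, traces):
    """Indices of traces that match every known (hour, cell) sighting."""
    return [i for i, trace in enumerate(traces)
            if all(trace[hour] == cell for hour, cell in observations)]

# An outside party knows where a target was at only a few moments
# (say, home overnight and office midday); here the target is trace 0.
target = traces[0]
for known in (1, 2, 3, 4):
    sightings = [(hour, target[hour]) for hour in random.sample(range(HOURS), known)]
    candidates = consistent_traces(sightings, traces)
    print(f"{known} known point(s): {len(candidates)} candidate trace(s) remain")
```

In runs of this simulation, the candidate pool typically collapses to a single trace after only a few observed points, which is why stripping names from location data does not by itself ensure anonymity.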

Location tracking use cases include the following scenarios:

  1. location data point identified to a specific individual;
  2. location data point identifiable to a specific individual;
  3. location data point not identified to the individual;
  4. continuous location tracking identified to a specific individual;
  5. continuous location tracking identifiable to a specific individual;
  6. continuous location tracking not identified to the individual;
  7. development of a profile based on location tracking identified to a specific individual;
  8. development of a profile based on location tracking that is identifiable to a specific individual;
  9. location data used to compile a profile of an unidentified individual.

As described above, the distinctions between these categories become less relevant in practice.

Is regulation of data collectors sufficient to address these privacy risks? Can de-identified location data be rendered truly anonymous?

If the pervasiveness and intrusiveness of continuous location tracking indicates that location data should be subject to heightened privacy rights (as in the Jones concurring opinion, Carpenter, and the relevant FTC actions and guidance), should location tracking data be regulated regardless of whether the data is identified to a particular individual?

The analysis is even more complicated when reidentification is accomplished by a secondary market participant. In that event, who would be liable for privacy violations? Would liability rest with the data collector that did not ensure true anonymity, or with the secondary market user that was not in privity with the individual? Would it matter if data collection predated the specific technology or methodology that facilitated later identification, whether by the data collector or another party?

           2. Restricting and Prohibiting Transfer

The transfer of location data could be prohibited other than as necessary to provide the underlying service to the individual and/or for any secondary market purpose. The transferee’s use could be similarly limited. The goal would be to limit the monetization of location data in the secondary data market. As demonstrated by the wireless carrier location data scenarios described above, however, this approach is susceptible to abuse by the parties involved.

     C. Upon Individual Profiling Based on Location Data

Creating individual profiles using location data poses unique risks to the individual. Precise tracking of an individual’s location over time can be used to discover information about the individual that may not otherwise be available (consider repeat visits to a casino, visits to the home of someone who is not the individual’s spouse, a visit to Planned Parenthood, or repeat visits to an oncologist), which, when combined with other data, can be used to develop a fairly comprehensive profile of the individual.

Arguably, if a comprehensive profile is generated by the original data collector and is identified or identifiable to a specific individual, the combined data may be protected under applicable privacy law; in the secondary market, however, the eventual data buyer is neither subject to current privacy regulation nor in privity with the individual or the original data collector. Secondary-market profiling activities include:

  1. compilation of a data profile or adding data to the profile;
  2. identifying an individual in relation to a previously anonymous profile;
  3. resale of the profile;
  4. use of the profile to make decisions impacting the individual; and
  5. use of the profile to influence the individual’s behavior.

Very few of these activities are impacted by current U.S. privacy regulation, and none of them fit the current notice-and-choice model.

Consider again the Facebook/Cambridge Analytica scandal. Although Facebook is subject to UDAP for its collection of the data, the app developer that accessed it and the profiler that purchased it were unregulated; the risks of resale and unauthorized use are not addressed by current U.S. law.

New York’s letter to life insurance companies (described above) highlights the risks to individuals posed by secondary-market usage and profiling in insurance underwriting. The same risks are present in credit marketing and underwriting, and new use cases for data profiling are likely to proliferate just as data monetization has, meaning that we cannot anticipate all possible use cases at any given point in time.

Data profiles are used daily to make decisions regarding the individual, without regard to whether the individual knows that the profile exists or that it is being used to make a decision affecting him or her; this precludes regulation solely through individual choice or consent. Profiling, decisioning, and targeting based on profiles may therefore be better regulated directly.

Regulators could prohibit profiling using location data altogether or by secondary-market participants. Like limiting collection, use, and transfer, this would have the impact of diminishing the separate value and monetization of location data.

     D. Imposing a Duty of Care on Market Participants

Collectors, users, profilers, and third parties could be regulated directly to owe the individual a duty of care in collecting, profiling, sharing, and using location data and to have other direct obligations to the individual. In this way, privity could be created between each individual and the market participants, and the risk of loss or error could be shifted from the individual to the market participant that caused the harm. This approach has precedent in the federal regulation of consumer reporting agencies and of disclosers and users of consumer reports, but it would be much more comprehensive and complicated in scope.

VII. Conclusions

To the extent that current and proposed privacy laws protect location data, such protection is limited to location data that is identified (or in some cases identifiable) to an individual. Requirements generally apply only to the initial data collector.

As the technological lines blur between identified and identifiable, and identifiable and not identified or anonymized, the distinctions between the categories may become less relevant. This complicates the regulatory analysis.

Moreover, recent media accounts and enforcement actions reveal a robust secondary market:

  1. identified location data is regularly acquired and used by third parties with whom the individual has no direct relationship;
  2. de-identified or anonymized location data is regularly re-identified; and
  3. location data is routinely combined with other types of personal data and used by third parties with whom the individual has no direct relationship to compile comprehensive profiles of the individual and make decisions about the individual or attempt to influence behavior of the individual.

These secondary-market practices are not currently addressed by United States law.

Profile development and decisioning or influencing based on data analytics, whether relying solely on location data or combining location data with other data, is a distinct business from the initial transaction between the individual and the data collector, and poses unique risks to the individual not present during the initial collection. These secondary market uses are also complex.

If an individual can be identified by location and further characteristics or acts may be attributable directly to the individual by virtue of his or her geographical movements, then discussions of privacy regulation should include location tracking. If parties removed from the initial transaction between the individual and the data collector subsequently reidentify data to an individual or develop a profile of the individual, or make decisions regarding the individual, then consideration should also be given to whether regulation of the initial transaction and limitation of the use and disclosure by the data collector and its service providers adequately address the risks posed by the secondary location data market. The current notice-and-choice model alone is inadequate to close this Pandora’s Box.

The power of place: location tracking and location data profiling are big business. Each poses distinct privacy risks to the individual and remains largely unregulated in the United States.


[1] The author thanks Dr. Peter Alan Jezewski for his editing contributions.

[2] Although potentially applicable to a wider variety of personal data, this article focuses solely on location data. In addition, “data profiling” can refer to the process of reviewing source data to ensure its accuracy and integrity. In this article, the term is used to describe the process of characterizing an individual using data related to the individual.

[3] The presentation materials for the Future of Privacy Forum’s class, “Location Data: GPS, Wi-Fi, Spatial Analytics,” gives an excellent overview of the types of systems, hardware, and software that are involved in location tracking. Future of Privacy Forum, Sources of Data: Mobile Sensors, Wi-Fi Analytics (Nov. 27, 2018).

[4] The Data Big Tech Companies Have On You (Or, At Least, What They Admit To), Security Baron, Sept. 30, 2018.

[5] These types of services are often referred to as “location-based services.” See D. Oragui, 7 Examples of Location-Based Services Apps, The Manifest, Sept. 28, 2018.

[6] Consider that you are in a store; your real-time location can be collected via your smartphone using any combination of the following systems: GPS (via satellite); cell tower proximity; Wi-Fi networks; Bluetooth or beacons; LED; and audio. The smartphone has distinct hardware and software to facilitate location tracking through this variety of external systems. In addition, the apps on the smartphone have their own software to facilitate collection. This data can include your precise latitude, longitude, and altitude, including location within a building. These systems are ubiquitous, and much of the technology is relatively inexpensive.

[7] Longitudinal data may show the individual’s movements during a specified timeframe, which would be helpful for a fitness tracker that calculates distance achieved and calories burned. Alternatively, the data tracked over time may be of a particular geographical location if a certain type of data traffic is more relevant than a single individual’s location; for example, understanding the number of measles cases within a month in a specific county allows public health planners to treat and prevent the spread of the disease.

[8] Y. Vilner, Location Analytics and Retail—Friends At Last, Hackernoon.com, Oct. 26, 2017.

[9] AlternativeData.org, Industry Stats, ALT. DATA.

[10] J. Valentino-DeVries, N. Singer, M. Keller & A. Krolik, Your Apps Know Where You Were Last Night and They’re Not Keeping It Secret, N.Y. Times, Dec. 10, 2018 [emphasis supplied].

[11] T. Costa, How Location Analytics Will Transform Retail, Harv. Bus. Rev. (Mar. 12, 2014).

[12] Id.

[13] S. Zuboff, A Digital Declaration, FAZ.net, Sept. 9, 2014.

[14] J. Naughton, “The goal is to automate us”: welcome to the age of surveillance capitalism, The Guardian, Jan. 20, 2019.

[15] The secondary data market is not limited to location data and is a topic in its own right. Discussion of that market here focuses on the risks involved when location data is sold on the secondary market, although some of these concerns may be general to other types of secondary-market data.

[16] J. Valentino-DeVries, Service Meant to Monitor Inmates’ Calls Could Track You, Too, N.Y. Times, May 10, 2018.

[17] W. Oremus, The Privacy Scandal That Should Be Bigger Than Cambridge Analytica, Slate, May 21, 2018.

[18] K. Bode, What A-GPS Data Is (and Why Wireless Carriers Most Definitely Shouldn’t Be Selling It), Motherboard, Feb. 7, 2019.

[19] J. Brodkin, Selling 911 location data is illegal—US carriers reportedly did it anyway, ARS Tech., Feb. 13, 2019.

[20] Id.

[21] B. Krebs, Tracking Firm LocationSmart Leaked Location Data for Customers of All Major U.S. Mobile Carriers Without Consent in Real Time Via Its Web Site, Krebs on Security, May 17, 2018.

[22] J. Cox, Hacker Breaches Securus, the Company That Helps Cops Track Phones Across the US, Motherboard, May 16, 2018.

[23] J. Fingas, Family tracking app leaked real-time location data for weeks. It would have let intruders spy on a child’s whereabouts., ENGADGET, Mar. 24, 2019.

[24] J. Cox, I Gave a Bounty Hunter $300. Then He Located Our Phone, Motherboard, Jan. 8, 2019.

[25] Id.

[26] J. Cox, Hundreds of Bounty Hunters Had Access to AT&T, T-Mobile, and Sprint Customer Location Data for Years, Motherboard, Feb. 6, 2019.

[27] E&C Republicans, Letters to Zumingo, Microbilt, T-Mobile, AT&T, Sprint, and Verizon on Location Sharing Practices (Jan. 16, 2019).

[28] J. Cox, AT&T to Stop Selling Location Data to Third Parties After Motherboard Investigation, Motherboard, Jan. 10, 2019.

[29] D. Fitzgerald & S. Krouse, T-Mobile, AT&T Pledge to Stop Location Sharing by End of March, Wall St. J., Jan. 11, 2019.

[30] For convenience’s sake, this data is referred to as “anonymous” and “anonymized,” although in practice there is an entire spectrum of de-identification, and the assumptions made in this article may vary depending on the level of de-identification technologies employed. See K. Finch, A Visual Guide to Practical Data De-Identification, Future of Privacy Forum, Apr. 25, 2016.

[31] R. Dezember, Your Smartphone’s Location Data is Worth Big Money to Wall Street, Wall St. J., Nov. 2, 2018.

[32] S. Ghosh, Location Data and The Growing Role in Marketing and Advertising Campaigns, Martech Series, June 8, 2018.

[33] S. Mervosh, Jerry Westrom Threw Away a Napkin Last Month. It Was Used to Charge Him in a 1993 Murder, N.Y. Times, Feb. 17, 2019.

[34] J. Valentino-DeVries, N. Singer, M. Keller & A. Krolik, Your Apps Know Where You Were Last Night and They’re Not Keeping It Secret.

[35] Id. [emphasis supplied].

[36] L. Hardesty, How hard is it to “de-anonymize” cellphone data?, MIT News, Mar. 27, 2013.

[37] D. Kondor, B. Hashemian, Y. de Montjoye & C. Ratti, Towards matching user mobility traces in large-scale datasets, IEEE Trans. on Big Data (Abstract), Sept. 24, 2018.

[38] This fact set is based on the scenario in Carpenter v. U.S., discussed below.

[39] The focus of this article is on the privacy implications of commercial location data tracking. Many of these practices are used by law enforcement as well, but addressing the need for balance with public safety and law enforcement purposes is a topic in its own right and beyond the scope of this article.

[40] L. Nelson, L.A. wants to track your scooter trips. Is it a dangerous precedent?, L.A. TIMES, Mar. 15, 2019.

[41] B. Schmarzo, Best Practices for Analytics Profiles, Infocus, July 8, 2014 [emphasis supplied].

[42] E. Mierzwinski & J. Chester, Selling Consumers Not Lists: The New World of Digital Decision-Making and the Role of the Fair Credit Reporting Act, Suffolk U. L. Rev. 46 (2013).

[43] Laursen, Who Are You?, MIT Tech. Rev. (Jan. 2015) [emphasis supplied].

[44] K. Kaye, Why the Industry Needs a Gut-Check on Location Data Use, Ad Age, Apr. 26, 2017.

[45] S. Zuboff, ‘Surveillance capitalism’ has gone rogue. We must curb its excesses., Wash. Post, Jan. 24, 2019.

[46] S. Levin, Facebook told advertisers it can identify teens feeling ‘insecure’ and ‘worthless’, The Guardian, May 1, 2017.

[47] J. McKendrick, Information Technology Enters A ‘Psychic’ Stage, Forbes, Mar. 12, 2019.

[48] NPR Interview with Shoshana Zuboff, ‘We Are No Longer The Customers’: Inside ‘The Age of Surveillance Capitalism’, WBUR, Jan. 15, 2019.

[49] A. Chang, The Facebook and Cambridge Analytica scandal, explained with a simple diagram, Vox, May 2, 2018.

[50] Id.

[51] D. Ghoshal, Mapped: The breathtaking global reach of Cambridge Analytica’s parent company, Quartz, Mar. 28, 2018.

[52] K. Kaye, Why the Industry Needs a Gut-Check on Location Data Use, Ad Age, Apr. 26, 2017.

[53] Wireless carriers must certify that they do not use assisted GPS information other than for enhanced 9-1-1 purposes. 80 FR 45897 (08/03/2015). 47 U.S.C. § 222 protects “customer proprietary network information” (CPNI), which includes location data when a wireless customer makes or receives a call; it does not currently protect location data tracked via phone when calls are not being made. See EPIC, CPNI: Mobile Location Data as CPNI.

[54] Section 5 of the Federal Trade Commission Act (the FTC Act), 15 U.S.C. § 45, prohibits “unfair or deceptive acts or practices in or affecting commerce.” The FTC regularly prosecutes enforcement actions in the privacy and cybersecurity context under the FTC Act.

[55] The Children’s Online Privacy Protection Act, 15 U.S.C. §§ 6501–6505, and implementing regulation 16 C.F.R. pt. 312 (COPPA), generally require operators of websites or online services that collect information from children under 13 years old to give parents clear notice of the collection practices and obtain “verifiable parental consent” to such collection.

[56] FTC Staff Report, Mobile Privacy Disclosures: Building Trust Through Transparency (Feb. 1, 2013).

[57] FTC Report, Protecting Consumer Privacy in an Era of Rapid Change (Mar. 26, 2012).

[58] Complaint at 3, In re Uber Tech., Inc., File No. 1523054 (FTC Feb. 2017).

[59] Id. at 2.

[60] Id. at 2–3.

[61] P. Boshell, Survey of Developments in Federal Privacy Law, 74 Bus. Law. 1, 193 (Winter 2018/2019) [emphasis supplied].

[62] S. Rodriguez, Facebook uses its apps to track users it thinks could threaten employees and offices, CNBC, Feb. 14, 2019.

[63] In re BLU Products, Inc., File No.1723025, Decision and Order (FTC Apr. 2018).

[64] United States v. InMobi Pte Ltd, Case No. 3:16-cv-3474, Stipulated Order for Permanent Injunction and Civil Penalty Judgment (N.D. Cal. June 2016).

[65] N. Sannappa & L. Cranor, A deep dive into mobile app location privacy following the InMobi settlement, FTC, Aug. 9, 2016.

[66] FTC Letter to Gator Group Co., Ltd (Apr. 26, 2018); FTC Letter to Tinitell, Inc. (Apr. 26, 2018).

[67] L. Mathews, Your Child’s GPS Watch Could Be Exposing Their Location In Real-time, Forbes, Feb. 5, 2019.

[68] InMobi, at 2.

[69] Susan Freiwald & Stephen Smith, The Carpenter Chronicle: A Near-Perfect Surveillance, 132 Harv. L. Rev. 205 (Nov. 9, 2018) (providing a thorough history of federal legislative and judicial authorities regarding modern surveillance, including GPS and cell phones).

[70] Carpenter v. United States, 585 U.S. ___ (2018).

[71] P. Boshell, Survey of Developments in Federal Privacy Law, at 198.

[72] 18 U.S.C. § 2703.

[73] Carpenter, 585 U.S. at ___.

[74] Id. at ___.

[75] Id. at ___.

[76] Id.

[77] Warshak v. United States, 631 F.3d 266 (6th Cir. 2007).

[78] Carpenter at ___.

[79] Id.

[80] 18 U.S.C. § 2703(d) (2018).

[81] U.S. v. Jones, 565 U.S. 400 (2012).

[82] Jones, 565 U.S. at 413 (Alito, J., concurring).

[83] Carpenter at ___.

[84] Id. at ___.

[85] Id. at ___ (emphasis supplied).

[86] S. ____, American Data Dissemination Act, 116th Cong. (2019).

[87] S. 2639, Customer Online Notification for Stopping Edge-provider Network Transgressions Act, 115th Cong. (2018).

[88] S. ____, The Consumer Data Protection Act (2018).

[89] D. Abril, This Is What Tech Companies Want in Any Federal Data Privacy Legislation, Fortune, Feb. 21, 2019.

[90] A. Carson, At hearing, US Senate wants answers on location tracking, opt-in consent, The Priv. Advisor, Mar. 13, 2019.

[91] ACLU, Cell Phone Location Tracking Laws by State.

[92] National Consumer Law Center, Unfair & Deceptive Acts & Practices.

[93] Cal. Civ. Code § 1798.140(o)(1)(G) (2018).

[94] Cal. Civ. Code §§ 1798.100(a), (b); 1798.105(b); 1798.110; 1798.115; 1798.120(b); 1798.130; and 1798.135.

[95] 9 V.S.A. § 2430 (2019).

[96] Id.

[97] 9 V.S.A. § 2447 (2019).

[98] NY Dept. Fin’l Servs. 2019 Circular Letter No. 1.

[99] Washington Privacy Act, S. 5376, 66th Leg., Reg. Sess. (Wa. 2019).

[100] J. Cedarbaum, D. Freeman & L. Lichlyter, States Consider Privacy Legislation in the Wake of California’s Consumer Privacy Act, Wilmer Hale, Feb. 20, 2019.

[101] Id.

[102] An Act concerning certain mobile device applications and global positioning system data, S. 4974, 218th Leg. (NJ 2019).

[103] Proposed H.B. 2866, Data Transparency and Privacy Protection Act, 80th Leg., Reg. Sess. (Or. 2019).

[104] S. Pastrick, CUB Protects Oregonians’ Digital Privacy By Way Of HB 2866, Or. Cub, Feb. 5, 2019.

[105] Proposed H.B. 2866 at § 1(2)(a).

[106] U.S. Senator Ron Wyden’s draft Consumer Data Protection Act includes a do-not-track provision that would allow the individual to opt out of “personal information” sharing. Wyden has said that the bill would allow individuals to know what location data is being collected and to opt out of collection. Senator calls for regulation that would force tech companies to offer “do not track” option, CBS News, Jan. 10, 2019.

[107] Data Marketing & Analytics, Mobile Marketing Guidelines for Ethical Business Practice.

[108] Id.

[109] R. Nakashima, Google clarifies location-tracking policy, AP News, Aug. 16, 2018.

[110] E. Dreyfuss, Google Tracks You Even if Location History’s Off. Here’s How to Stop It, Wired, Aug. 13, 2018.

[111] A. Griffin, Google stores location data “even when users have told it not to”, Independent, Aug. 14, 2018.

[112] M. Rosemain, France fines Google $57 million for European privacy rule breach, Reuters, Jan. 21, 2019.

[113] E. Birnbaum, Consumer groups urge FTC to investigate Google over location tracking, The Hill, Nov. 27, 2018.

[114] T. Romm, Google’s location privacy practices are under investigation in Arizona, Wash. Post, Sept. 11, 2018.

[115] F. Navarro, Popular weather app may be selling your location data, Komando, Jan. 7, 2019.

[116] M. Locklear, LA sues Weather Channel app owner over “fraudulent” data use, Engadget, Jan. 4, 2019.

[117] Editorial Board, Our privacy regime is broken. Congress needs to create new norms for a digital age, Wash. Post, Jan. 5, 2019.

[118] P. Day & P. Dave, Study shows limited control over privacy breaches by pre-installed Android apps, Reuters, Mar. 25, 2019.

Revised Section 13(3) of the Federal Reserve Act

On December 23, 2018, after significant market turmoil, Treasury Secretary Steven Mnuchin issued a statement that he had completed calls with the nation’s six largest banks and that those banks had reported that “markets continue to function properly,” and that they had sufficient liquidity to fund themselves.[1] This unexpected statement generated considerable commentary as to why the conversations had occurred in the first place[2] and marked the first time since the dramatic events of the 2008–09 financial crisis that the government had focused on particular institutions and their liquidity. It thus seems appropriate to consider an important “weapon” in the U.S. government’s arsenal when responding to significant liquidity events in the financial sector, and what happened to that weapon in the Dodd-Frank Act.

The weapon is section 13(3) of the Federal Reserve Act, which has permitted emergency lending to bank and nonbank companies by the Board of Governors of the Federal Reserve System (the Federal Reserve).[3] This article begins with consideration of section 13(3)’s enactment during the Great Depression and its history since, and then turns to some of its principal uses during the financial crisis. It concludes with a discussion of how the Dodd-Frank Act amended this provision and how, as amended, section 13(3) fits into the post-crisis regulatory scheme.

As an initial matter, section 13(3) was not part of a “banking bill” per se, and it was controversial even in 1932. It came about as part of the Hoover administration’s response to the Great Depression, which was focused primarily on a new government enterprise, the Reconstruction Finance Corporation (RFC), created in early 1932.[4] The RFC responded to a series of bank failures in December 1931 and, among other things, was a source of credit to banking institutions that were not members of the Federal Reserve, as well as to commercial enterprises like railroads.[5] Economic conditions continued to decline notwithstanding the RFC, and Congress continued to legislate, enacting the first Glass-Steagall Act: the Banking Act of 1932. This statute, inter alia, permitted the Federal Reserve, “in exceptional and exigent circumstances,” to authorize emergency “advances” to member banks at a penalty rate of interest when satisfactorily secured.[6] Even this authority, however, was considered temporary, originally expiring after one year.[7]

As 1932 continued, President Hoover wished to expand financing by the RFC. House Speaker John Nance Garner of Texas went further, introducing legislation in May 1932 to expand the RFC’s lending authority “to any person.”[8] Despite President Hoover’s objections that this legislation authorized loans “on any conceivable security and for every purpose,” a bill containing expanded RFC authority passed the House and Senate on July 9, 1932.[9] The expanded authority of the RFC concerned not only President Hoover, but also Federal Reserve Board member Charles Hamlin and Senator Carter Glass, the father of the Federal Reserve System, likely because it introduced competition with the Federal Reserve’s lending powers.[10] Two days later, President Hoover vetoed the Garner bill, and Senator Glass introduced what became section 13(3) as an amendment to an appropriations bill that would ultimately become the Emergency Relief and Construction Act of 1932.[11]

The amendment expanded Federal Reserve authority at the expense of the RFC. It drew heavily from the expanded Federal Reserve authority to make advances to member banks in “exceptional and exigent circumstances” contained in the first Glass-Steagall Act, but was still somewhat limited in scope:

In unusual and exigent circumstances the Federal Reserve Board, by the affirmative vote of five members, may authorize any Federal Reserve Bank . . . to discount for any individual, partnership or corporation, notes, drafts, and bills of exchange of the kinds and maturities made eligible for discount under other provisions of this Act when such notes, drafts and bills of exchange are indorsed and otherwise secured to the satisfaction of the Federal Reserve Bank.[12]

In other words, the amendment permitted lending by discount only on such paper that was eligible for discount under other sections of the Federal Reserve Act—at the time, short-term paper like commercial paper.[13] The amendment was satisfactory to President Hoover and became law as Federal Reserve Act section 13(3) on July 21, 1932.[14]

Section 13(3) immediately met with a narrow Federal Reserve interpretation; the Federal Reserve initially took the position that the term “corporation” in the statute did not include nonmember banks and trust companies.[15] Moreover, section 13(3) was essentially an orphan provision: Federal Reserve Banks extended very few section 13(3) loans during the Depression itself, and it was only with the Federal Deposit Insurance Corporation Improvement Act of 1991 (FDICIA), enacted after Congress had considered the stock market crash of 1987 and the need for broker-dealer liquidity, that Congress removed the limitation in section 13(3) that paper to be discounted had to be of a type otherwise eligible for discount at a Federal Reserve Bank.[16]

As is well known, section 13(3) saw great use during the financial crisis as a means of providing much-needed liquidity. Section 13(3) was the source of authority for Federal Reserve lending in connection with the JPMorgan-Bear Stearns acquisition and the support of American International Group (AIG), and it was the source of authority for such broader programs as the Term Securities Lending Facility (TSLF), Term Asset-Backed Securities Loan Facility (TALF), and Commercial Paper Funding Facility (CPFF).[17] The Federal Reserve’s use of section 13(3) is now widely regarded as extremely successful in maintaining financial stability.

Congress responded to the Federal Reserve’s use of section 13(3) by narrowing that authority in the Dodd-Frank Act. Such lending must now be made in connection with a “program or facility with broad-based eligibility,” cannot “aid a failing financial company” or “borrowers that are insolvent,” and cannot have “a purpose of assisting a single and specific company avoid bankruptcy” or similar resolution.[18] In addition, the Federal Reserve cannot establish a section 13(3) program without the prior approval of the secretary of the Treasury.[19]

Revised section 13(3) could be used to create facilities like the alphabet facilities of the financial crisis mentioned above, but the intent of the revisions was to preclude loans like those to JPMorgan/Bear Stearns and AIG. The Dodd-Frank Act instead takes a prophylactic approach to problems at a single financial company or at a number of financial companies contemporaneously affected—the enhanced capital and liquidity prudential standards of section 165, resolution planning, limitations on the exercise of default rights in qualified financial contracts, and prepositioning of capital and liquidity where necessary to advance going-concern value at the operating subsidiary level. These changes, however, significantly limit the ability of the Federal Reserve—or any government agency, for that matter—to inject emergency liquidity in a time of crisis: for example, liquidity available in the first instance under the Orderly Liquidation Fund authority in Title II of the Dodd-Frank Act is limited to 10 percent of the total consolidated assets of a failing financial company.[20]

The result, then, although differing significantly in the details, is not different in character from the way that section 13(3) was originally conceived—emergency lending power must have restraints. Certainly, just as in 1932, a much less-constrained approach to Federal Reserve lending does raise policy concerns—among them the ability of the government to pick and choose among failing firms, damage to the credibility of the Federal Reserve should an emergency loan fail, and what open-ended emergency lending means for the taxpayer. It may be time, however, to take a fresh look at revised section 13(3) and determine how to balance those issues against the historical reality that financial panics and crises usually come unexpectedly, and often for reasons not considered threatening at the time. The prophylactic approach that the Dodd-Frank Act takes in guarding against and preparing for large-firm failure may be only as successful as the ability of future policymakers to anticipate a significant crisis. Nor has the issue of financial firm interconnectedness disappeared in the years after 2008. As in 1932–1933, Congress will not necessarily act with dispatch in a crisis. It was fortunate, after all, that before coming to Washington, former Chairman Bernanke had focused on the Great Depression in his academic life.


[1] Damian Paletta & Josh Dawsey, Treasury Secretary Startles Wall Street with Unusual pre-Christmas Calls to Top Bank CEOs, Washington Post, Dec. 23, 2018.

[2] Id.

[3] 12 U.S.C. § 343(3).

[4] See Parinitha Sastry, The Political Origins of Section 13(3) of the Federal Reserve Act, FRBNY Economic Policy Rev., Sept. 2018, at 15.

[5] See id. at 16–17.

[6] This provision was contained in new section 10B of the Federal Reserve Act. See id. at 18.

[7] See id. at 18 n.144.

[8] See id. at 19.

[9] See id. at 20.

[10] See id.

[11] See id. at 20–21.

[12] See id. at 23.

[13] See id.

[14] See id.

[15] See id. at 25.

[16] Pub. L. 102-242, Title IV, § 473, 105 Stat. 2386.

[17] See Sastry, supra note 4, at 3.

[18] 12 U.S.C. § 343(3).

[19] Id.

[20] 12 U.S.C. § 5390(n)(6).

Examining Technology Bias: Do Algorithms Introduce Ethical & Legal Challenges?

An important feature of a learning machine is that its teacher will often be very largely ignorant of quite what is going on inside, although he may still be able to some extent to predict his pupil’s behavior. 

A.M. Turing (1950) Computing Machinery and Intelligence. Mind 59: 433-460.

Computer scientists have been experimenting with artificial intelligence for decades. In 1950, Professor Alan Turing predicted that by the year 2000, a computer would be able to play his Imitation Game well enough that an average human interrogator would have no more than a 70 percent chance of correctly identifying the machine after five minutes of questioning (in other words, well enough to sound like a human to another human much of the time). At that early date, Professor Turing knew that the main roadblocks to AI were storage space and speed. Nearly 70 years later, we have seen significant increases in both, along with the emergence of very large data sets that permit a broad range of experimentation. And, as he predicted, we have indeed constructed machines that can play and win the Imitation Game—at least, until they trip up and sound like the bots they are.

While there are many definitions of artificial intelligence, a distinguishing feature is the type of instructions humans provide to the machine. When we type numbers and functions into a calculator, we are providing step-by-step instructions and we know precisely what was done to obtain each output. We can even double-check the calculator ourselves. When a machine “learns,” it takes actions with data that go beyond merely calculating or following explicit instructions. For example, one important task that computers perform is grouping or clustering words, numbers, documents, images or other objects in very large data sets. This clustering effort is somewhat similar to playing numerous simultaneous games of Sesame Street’s “One of These Things Is Not Like the Other.” Once these data points are clustered, we can then make much more powerful inferences about the data that would not be possible if we had to examine or chart or graph individual data points. This technology allows us to say that certain contract provisions are like other contract provisions, for example, by looking at similarities in words. It allows us to teach a computer about previously diagnosed CT scans in order to use those inferences to detect illnesses in new CT scans.
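A minimal sketch of this kind of clustering, assuming the widely available scikit-learn Python library and a few invented contract snippets (the snippets, cluster count, and variable names are illustrative assumptions, not any particular vendor’s product), might look like this:

```python
# A minimal sketch, assuming scikit-learn is installed; the snippets and the
# number of clusters are invented for illustration only.
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

clauses = [
    "Either party may terminate this agreement upon thirty days written notice.",
    "This agreement may be terminated by either party with 30 days' notice.",
    "The receiving party shall keep all confidential information strictly confidential.",
    "Confidential information shall not be disclosed to any third party.",
    "Governing law shall be the laws of the State of New York.",
    "This agreement is governed by New York law.",
]

# Turn each clause into a vector of weighted word frequencies (TF-IDF),
# then group similar vectors into three clusters with k-means.
vectors = TfidfVectorizer(stop_words="english").fit_transform(clauses)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(vectors)

for label, clause in sorted(zip(labels, clauses)):
    print(label, "|", clause)
# Clauses on the same subject (termination, confidentiality, governing law)
# tend to fall into the same cluster without topic-specific rules being written.
```

The point is not the particular library but the division of labor: the human supplies examples and a notion of similarity, and the machine decides how to group them.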

Machine learning, neural networks and other types of artificial intelligence undertake such complex computational tasks that we often must in turn undertake substantial work to evaluate the results. This is one of the many potential problems Professor Turing anticipated in 1950. So once we have given up on understanding “quite what is going on inside,” how can we evaluate whether the computer did what we wanted? This is the new problem presented by the burgeoning use of advanced technology both in the practice of law and in the products and services produced by clients of legal service providers: how do we examine advanced technology for compliance with legal rules? What standards do lawyers have to meet when using or advising on advanced technology?

Ethical Framework for Lawyer Use of Machine Learning Technology

A lawyer has a duty under Rule 1.1 of the ABA Model Rules to provide “competent representation to a client,” which means that the lawyer must demonstrate the requisite knowledge, skill, thoroughness, and preparation reasonably necessary for the representation. The ABA and many states have recognized that a lawyer’s duty of competence extends to the lawyer’s substantive knowledge of the areas of law pertinent to the representation and the tools used to provide legal services to the client. The lawyer has a duty of technological competence to the extent that technology is used to represent the client. The lawyer can fulfill this duty if the lawyer possesses the requisite technological knowledge personally, acquires the knowledge, or associates with one or more persons who possess the technological knowledge. See New York County Ethics Op. 749 (2017); see also ABA Commission on Ethics 20/20 Report (“in order to keep abreast of changes in law practice in a digital age, lawyers necessarily need to understand basic features of relevant technology”).

In addition, lawyers must understand the benefits and risks associated with technology. ABA Model Rule 1.1, Cmt. 8. Lawyers have an affirmative duty (1) to be proficient in the technology they use in the representation of a client; and (2) to consider technology that may improve the professional services the lawyer provides to his or her clients. With respect to the first duty, lawyers must have sufficient proficiency with the technology they use in their practice to ensure that they are using the technology effectively to serve their clients’ interests, and they must supervise any nonlawyers who assist them in the use of this technology to ensure that they are acting in a manner consistent with the lawyer’s professional obligations. Id.; see also ABA Model Rule 5.3; see, e.g., In re Seroquel Products Liability Litig., 244 F.R.D. 648 (M.D. Fla. 2007) (“Ultimate responsibility for ensuring the preservation, collection, processing, and production of electronically stored information rests with the party and its counsel, not with the nonparty consultant or vendor.”). With respect to the second duty, lawyers have an ethical responsibility to consider whether the client may be better served if assisted by emerging technology, including tools that rely on machine learning.

Lawyers should be aware of machine learning bias in their AI tools as part of their exercise of technological competence. AI tools based on machine learning rely on the assumptions that drive the algorithm’s decision-making. Incomplete inputs, inadequate training, and incorrect programming – in addition to the machine’s own elaborations of the initial inputs – can create biases that render the tool inaccurate and ineffective for the client’s purposes. In turn, the lawyer’s use of an inaccurate and ineffective tool could cause the lawyer to fail to fulfill his or her duty of competence. Indeed, where the AI tool produces results that are materially inaccurate or discriminatory, the lawyer risks not only violating the duty of competence under Rule 1.1, but may also unwittingly engage in conduct that violates Rule 8.4(d) (engaging in conduct that is prejudicial to the administration of justice) or Rule 8.4(g) (unlawfully discriminating in the practice of law).

Examples of Algorithmic Bias

With artificial intelligence, we are no longer programming algorithms ourselves. Instead, we are asking a machine to make inferences and draw conclusions for us. Generally, these processes require large data sets to “train” the computer. What happens when we use a data set that contains biases? What happens when we use a data set for a new purpose? What happens when we identify correlations that reinforce existing societal norms that we are actually trying to change? In these instances, we may inadvertently teach the computer to replicate existing deficiencies — or we may introduce new biases into the system. From this point of view, system design and testing need to uncover problems that may be introduced with the use of new technology.
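One simple form of such testing compares a model’s favorable-outcome rates across groups. The sketch below uses invented records and, purely as an illustrative benchmark, the “four-fifths” rule of thumb drawn from U.S. employment-discrimination guidance; it is a starting point for review, not a legal test of discrimination:

```python
# A minimal sketch with invented records; the 0.8 threshold is the "four-fifths"
# rule of thumb, used here only as an illustrative review trigger.
from collections import defaultdict

# (group, favorable_outcome) pairs, e.g., outputs of a hypothetical scoring model.
results = [
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

favorable = defaultdict(int)
total = defaultdict(int)
for group, outcome in results:
    total[group] += 1
    favorable[group] += outcome          # True counts as 1, False as 0

rates = {group: favorable[group] / total[group] for group in total}
benchmark = max(rates.values())          # highest group rate as the reference

for group, rate in sorted(rates.items()):
    ratio = rate / benchmark
    flag = "flag for review" if ratio < 0.8 else "ok"
    print(f"{group}: rate {rate:.0%}, ratio to highest group {ratio:.2f} -> {flag}")
```

A flagged disparity does not by itself establish bias or unlawful discrimination, but it tells the design team where to look.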

We have seen instances of algorithmic bias arise in many places, including racially disparate risk classification in software used by criminal judges to evaluate recidivism risk, ads that are presented differently to different racial and gender groups, and so-called “differential” pricing that sometimes offers better prices to certain people. Even when we do not see potential evidence of discrimination based upon protected categories, we are jarred by events such as the recent revelation that a “glitch” in the software supporting Wells Fargo’s mortgage modification efforts improperly denied relief to hundreds of families and cost more than 400 of them their homes.

Moving Toward Algorithmic Rules and Standards

As a result of the growing awareness of the possibility of bias in algorithms guiding AI, we are now seeing efforts to provide guidance to deal with the problem. The most important of these comes from the EU’s General Data Protection Regulation (GDPR) in Article 22, which allows data subjects the right to object to the results of automated decision-making, to opt out of such systems, and to demand an explanation as to how the algorithms work. In fact, despite the opposition of privacy experts, the influential European Data Protection Board (formerly known as the Article 29 Working Party) has interpreted Article 22 as barring any automated decision-making that lacks a human review element. New York City, in an ordinance passed last year, has established a task force to examine the issue.

Meanwhile, industry groups, government entities, and international organizations have articulated standards that may generate some consensus around audit standards and further legislation. The Fairness, Accountability, and Transparency in Machine Learning (FAT/ML) group’s principles are an excellent and brief example of the developments in this area. Its five principles – responsibility, explainability, accuracy, auditability, and fairness – and the related social impact statement provide a responsible structure for designing algorithmic systems.

Special Purpose National Banks: The New Frontier of Banking

Numerous fintech companies (i.e., those entities that (1) have nontraditional or limited business models, (2) do not take deposits, (3) are not insured by the Federal Deposit Insurance Corporation (FDIC), and (4) rely on funding sources different from those relied on by insured banks) are currently debating whether they want to go through the rigmarole of becoming a special purpose national bank (SPNB). Becoming an SPNB is an extraordinary undertaking, and on July 31, 2018, the Office of the Comptroller of the Currency (OCC) released its Supplement to the Comptroller’s Licensing Manual (the Supplement), which describes the key factors the OCC will consider in evaluating charter applications to become an SPNB.

Set forth below is a punch list of key points of which any fintech company should be cognizant prior to launching into the SPNB application process.

The OCC strongly encourages potential applicants to engage with the OCC well in advance of filing a charter application, and such potential applicants should contact the Office of Innovation. In fact, the OCC Licensing Department will determine whether an entity should even submit a draft application before filing a formal application.

In determining whether to file a formal application to become an SPNB, perhaps the most important factor for any fintech company to bear in mind is that, as a national bank, it will be subject to the laws, rules, regulations, and federal supervision that apply to all national banks. As such, the filing procedures for an SPNB will be substantially the same as those for any other national bank, including that the application will be made available for public comment.[1] Moreover, companies seeking a charter as an SPNB will be expected to (1) make a commitment to financial inclusion, and (2) develop and adhere to a contingency plan that includes options to sell, wind down, or merge with a nonbank affiliate, if necessary.

If a fintech company files a charter application, the OCC will consider the applicant’s (1) business model, (2) governance structure, and (3) risk profile.[2] In addition, a potential applicant should be prepared to discuss (1) its proposed activities, (2) the market analysis supporting its business plan, (3) its capital and liquidity needs, (4) its contingency plan for periods of financial stress, and (5) how it will demonstrate a commitment to financial inclusion.

Once an application is filed, the OCC seeks to make a decision on a complete and accurate application within 120 days of receipt. The OCC grants approval of a charter application in two steps: (1) preliminary conditional approval and (2) final approval. The OCC will issue final approval once it determines that all key phases of organizing the bank have been completed, all requirements and conditions for final approval have been met, and the organizers have received any other necessary regulatory approvals. The OCC will also impose assessments on an SPNB as a condition of approval. After final approval, the OCC will supervise the SPNB as it does all other national banks, and, like all de novo institutions, newly chartered SPNBs will be subject to rigorous ongoing oversight to ensure that the bank’s management and board of directors are properly executing the business strategy and that the bank is meeting its performance goals.[3]

Although becoming an SPNB is indeed an extraordinary undertaking, for many companies it may be worth the effort, and there are law firms that can assist at every step of the way.


[1] For details on filing an application, see 12 C.F.R. § 5 and the “Charters” booklet of the Comptroller’s Licensing Manual.

[2] For details on the review of applications, see 12 C.F.R. § 5 and the “Charters” booklet of the Comptroller’s Licensing Manual.

[3] Key supervisory considerations for SPNBs are highlighted in Appendix A to the Supplement.

The Parties Are Different, but the Song Remains the Same: Yet Another Attack on Real-Time Bidding

On January 28, 2019, the Panoptykon Foundation filed a complaint with the Polish Data Protection Authority against IAB Europe on behalf of an individual, alleging that OpenRTB, the widely used real-time bidding (“RTB”) protocol promulgated by IAB Tech Lab,[1] violates numerous provisions of the General Data Protection Regulation (the “GDPR”). The complaint recycles many of the same arguments made to the Irish Data Protection Commission and the UK Information Commissioner’s Office in 2018; we analyzed those arguments in a previous article.

The complaint argues that IAB should be considered a data “controller” under the GDPR for all processing activities undertaken through OpenRTB.  As discussed below, such a contention would dramatically (and improperly) expand the definition of “controller” and should be rejected by regulatory authorities.

Controller Framework

The GDPR regulates “processing activities” (i.e., discrete operations performed on personal data). An entity is either a data “controller” or a data “processor” with respect to such processing activities. In other words, whether an entity is a controller or a processor must be analyzed in relation to the specific processing activities in question.

All processing activities have at least one controller. A controller determines the “purposes” and “means” of such processing activities, either alone or jointly with other controllers.  The “purpose” is why a processing activity is being carried out and the “means” are how that processing activity will be carried out.

Although the definition of “controller” has been interpreted broadly, guidance and case law require that an entity, alone or with others, have a certain level of factual “influence” on the purpose and means of processing to be considered to have “control.” Indeed, the Article 29 Working Party (the “WP”) provides that “[b]eing a controller is primarily the consequence of the factual circumstance that an entity has chosen to process personal data for its own purposes.” (Emphasis added). 

The WP further explains that determination of the means “would imply control when the determination concerns the essential elements of the means.” (Emphasis added).

Determination of the “means” therefore includes both technical and organizational questions where the decision can be well delegated to processors (as e.g. “which hardware or software shall be used?”) and essential elements which are traditionally and inherently reserved to the determination of the controller, such as “which data shall be processed?”, “for how long shall they be processed?”, “who shall have access to them?”, and so on. (Emphasis added).

Thus, although determining the purpose of a processing activity automatically renders an entity a “controller,” an entity determining the means of processing is considered a controller only when such determination concerns the essential means.

The Jehovah’s Witnesses Case

The complaint relies primarily on one case from the European Court of Justice (“ECJ”) to support its claim that IAB is a controller with respect to all processing activities undertaken through OpenRTB: Case C-25/17 (the “Jehovah’s Witnesses case”). The complaint also cites to Case C-210/16, which we discuss and distinguish in our previous article linked above.

In the Jehovah’s Witnesses case, Jehovah’s Witnesses members (i.e., preachers) went door-to-door to convert others to their faith.  The members wrote notes on their visits, such as the names of the people they visited, their addresses, and summaries of their conversations.  The Data Protection Supervisor claimed that the Jehovah’s Witnesses religious community (the “JWC”) was a controller in relation to the notes taken by their members during this door-to-door preaching.

The JWC contended that it was not a data controller because it did not determine the purposes and means of processing, alleging (1) the JWC did not formally require the collection of notes by its members and (2) the JWC did not have access to the members’ notes. 

The ECJ, along with the Advocate General, analyzed the JWC’s potential role as a data controller in relation to the specific processing activity in question: members’ note taking when door-to-door preaching. The ECJ held that the JWC “organized and coordinated” the preaching to such a level that it defined the purposes and means of processing in the context of that preaching jointly with its members.  The Advocate General emphasized that the JWC: (1) “gave very specific instructions for taking notes;” (2) allocated areas among the members to better organize the preaching and increase the chances of converting individuals; (3) kept records on how many publications the members disseminated and the amount of time they spent preaching; and (4) kept a register of individuals who did not want to be visited.

Analysis of the Complaint

First, the complaint alleges that “IAB is one of the two leading actors…in the market of behavioural advertising which organises, coordinates and develops the market by creating specifications of an API…that is utilised by companies that participate in [OpenRTB] auctions in ad markets…Those specifications are accompanied by the rules of their application [in the IAB Transparency and Consent Framework].”[2]

The complaint does not claim that IAB is the controller of any specific processing activities. Instead, it claims that IAB is the controller of all processing activities undertaken through OpenRTB that relate to “behavioural advertising” because it is a “leading actor” that “organizes, develops and coordinates the market.” In other words, the complaint asserts that IAB’s control over the market is similar to the JWC’s control over its preachers, in that IAB co-determines what personal data shall be processed, and why, for all participants within OpenRTB and, by extension, the multi-billion dollar behavioral advertising industry.

The complaint’s reliance on the “organized and coordinated” language within the Jehovah’s Witnesses case ignores all context in which it was used. “Organization and coordination” was only relevant because it resulted in determination of the purpose and means of processing within that particular fact pattern. In other words, the JWC’s “organization and coordination” of the members’ note-taking activity amounted to such tight control that it was viewed as instructing them to process personal data on its behalf for a discrete purpose and, thus, determining the “purpose and means of processing” as a joint controller.

However, the level of control in the Jehovah’s Witnesses case is readily distinguishable. The JWC instructed its members (i.e., its preachers) to go door-to-door for the discrete purpose of expanding the JWC’s membership (i.e., converting others to its faith). To ensure the effectiveness of the activity, the JWC “organized and coordinated” the activity by assigning members to different areas, keeping track of how long they preached and how many publications they distributed, and providing detailed instructions on what notes to take (i.e., what personal data to process).

Conversely, IAB’s promulgation of the OpenRTB standard does not result in IAB instructing or directing another business as to what personal data it shall in fact process or transfer through the protocol, or for which purposes. Unlike preachers answering to a centralized authority, entities do not engage in processing activities over OpenRTB at the behest of, or for a shared purpose with, IAB.

A standard’s primary objective is to define technical requirements for interoperability among various systems. As such, it allows entities to carry out activities more efficiently through a common “language.” However, this development of a communication medium by which processing activities may be carried out must be differentiated from the processing activities themselves. The purpose for initiating processing and each subsequent operation relating thereto (e.g., collection, transmission to downstream partners) is determined at the business level. IAB’s defining of the protocol’s structure provides the technological capability for entities to carry out such activities, but it is not, in itself, a processing activity or “purpose.” The alternative would create a virtually unlimited definition of “controller.” Likewise, IAB’s theoretical ability to lessen the processing capability of the protocol – such as eliminating the “device identifier” field – also cannot be a criterion by which control is decided in this context. If such ability were acted upon, it would only limit the range of processing activities possible within that technology (activities that organizations need not carry out in the first place). This attenuated “control by exclusion” leads to bizarre results and is not what the GDPR intended. For example, any provider that releases technology capable of a range of processing activities would become a controller of all such activities if it ever puts limits on that capability.

Second, the complaint also alleges that “IAB has full control of how the behavioural advertising market within…[OpenRTB] is designed and operates, so it decides at its own discretion how the processing of personal data is to be carried out, e.g., by determining the elements that must be included in the so-called bid request, i.e., a request for submission of bids in an ad market.”  In other words, the complaint argues that IAB, through OpenRTB, decides the essential means of all processing activities carried out under the technical protocol.  

When analyzed against the aforementioned elements highlighted by the WP, IAB does not determine the “essential means” of the processing activities carried out over OpenRTB:

  • Which data shall be processed?
    • IAB does not decide what data will be processed when entities carry out processing activities via OpenRTB. The complaint conflates the capability to allow entities to process personal data with deciding what data shall be processed for any given processing activity.
    • The Panoptykon Foundation and related parties argue that because almost all bid requests sent via OpenRTB contain at least a device identifier, and OpenRTB documentation recommends device identifiers be included in bid requests, IAB decides that entities shall process device identifiers. Such an argument is unavailing. Technology providers routinely recommend that personal data be processed for added business value (e.g., SaaS platforms). No one has seriously argued that these providers automatically become joint controllers by virtue of doing so. The decision to process such data is left to the autonomy of the user.
  • For how long shall the data be processed/when should the data be deleted?
    • IAB does not mandate retention periods. Such a decision is solely within the business’s own legal judgment for what satisfies the storage limitation principle for its particular processing activities.
  • Who shall have access to the data?
    • This decision is, again, controlled entirely by the entities using OpenRTB and differs dramatically depending on the context in which advertising may be transacted (e.g., open auction, private marketplace, programmatic, or direct).

Third, according to the complaint, IAB is a controller because it has general knowledge that processing is happening through OpenRTB:

[T]he JWC, which collects information on its members (as opposed to information on persons visited by its members), becomes, by creating guidelines and maps, a joint controller of personal data of persons visited by its members despite the fact that it does not establish any direct interaction with those persons and has no access to such data. This instance can be directly compared with IAB, as its role is also to provide guidelines and specifications to companies that participate in auctions in ad markets. Like the community analysed by the CJEU, IAB has a general knowledge of the fact that processing is carried out and knows its purposes (matching ads in RTB model), as well as it organizes and coordinates activities of its members by way of management of… [OpenRTB].

Although it is correct that an entity does not need direct access to data to be a controller, it still needs a level of control sufficient to determine the purposes and means of processing.  Much of the above quote is a restatement of the complaint’s previous arguments examined herein.

However, there is one slightly different argument presented: “IAB has a general knowledge of the fact that processing is carried out and knows its purposes…” and thus is a controller. This “general knowledge” standard proposed by the complaint further ignores all context and nuance from the Jehovah’s Witnesses case. Clearly, a company cannot be a joint controller merely because it has general knowledge that processing is being carried out through its technology and is aware of the purpose for which it is carried out; otherwise, virtually every technology company (e.g., standards bodies, SaaS platforms, and software companies) would be a joint controller.

The CJEU’s mention that the JWC was aware of its members’ processing of personal data (i.e., note taking) was to demonstrate that excessive formalism (i.e., an express written statement telling individuals to process data) should not be required when an entity has such a level of control that it, practically speaking, instructs others to carry out processing of personal data for a defined purpose. As mentioned above, providing guidelines and technical specifications to companies, or simply knowing that personal data is being processed through OpenRTB, does not amount to the level of control contemplated by the GDPR or prior case law to become a joint controller.  

Finally, the complaint contends: “The argument that [OpenRTB] created by IAB is only a technical protocol which does not obligate particular companies to process data is ill-advised and not true. [OpenRTB] enables and facilitates the processing and dissemination of data as the protocols that are connected with [Open RTB] include certain fields that are so designed that they trigger transfers of data, including sensitive data….”

Notwithstanding such allegations, OpenRTB does not obligate companies to process personal data. None of the required fields to send a bid request per OpenRTB documentation contain personal data. Fields at issue in the complaint, such as the description of a URL’s content, geolocation, device identifier, and user agent string, are optional. This optionality makes sense because OpenRTB is used for activities outside of “behavioral advertising,” such as digital out-of-home and contextual advertising, and in such cases personal data is often irrelevant or not processed.
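To make the required/optional distinction concrete, here is a minimal sketch in Python of an OpenRTB-style bid request. It is based on the public OpenRTB 2.x documentation as we read it, but the field names and structure are shown for illustration only and are not a restatement of the specification.

```python
import json

# Minimal OpenRTB-style bid request (illustrative sketch; simplified from
# public OpenRTB 2.x documentation, not an authoritative rendering).
# Neither of the required top-level attributes carries personal data.
minimal_bid_request = {
    "id": "req-123",        # unique request identifier (required)
    "imp": [{"id": "1"}],   # at least one impression object (required)
}

# The fields at issue in the complaint sit in optional objects that a sender
# may simply omit, e.g., for contextual or digital out-of-home campaigns.
# The values below are placeholders, not real data.
request_with_optional_objects = {
    **minimal_bid_request,
    "site": {"content": {"title": "<page content description>"}},
    "device": {
        "ifa": "<device identifier>",     # advertising identifier (optional)
        "ua": "<user agent string>",      # user agent (optional)
        "geo": {"lat": 0.0, "lon": 0.0},  # geolocation (optional)
    },
    "user": {"id": "<exchange-specific user id>"},
}

print(json.dumps(minimal_bid_request, indent=2))
print(json.dumps(request_with_optional_objects, indent=2))
```

In the minimal form, nothing in the request identifies a person; the device, geolocation, and user objects are additions that participating businesses choose whether to populate.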

As mentioned above, the complaint conflates providing entities the capability to process personal data, or recommendations to process personal data (e.g., bid request examples and recommended fields within OpenRTB documentation) for added business value, with having the requisite control to decide what data shall be processed by a particular entity.  

Conclusion  

It seems evident that behavioral advertising critics want IAB to amend OpenRTB so that no personal data is capable of being transmitted at all. In their singular focus on bringing about this result, these parties advocate expanding the definition of controller so broadly as to render almost every company in every industry a controller.

If they succeed, although they may accomplish their own goals, they may also diminish (or eliminate) the willingness of technology providers and standards bodies to engage in the European market. Here, we should heed the Advocate General’s warning in the Fashion ID opinion that excessively expanding the scope of what constitutes a controller will create such a lack of clarity that it “…crosses into the realm of actual impossibility for a potential joint controller to comply with valid legislation.”


[1] The Complaint is filed against IAB Europe; however, IAB Tech Lab, rather than IAB Europe, promulgates the OpenRTB standard. For convenience, we use the term “IAB” throughout this article.

[2] All quotes taken from the complaint have been translated from their original Polish.

Avoiding Monetary Penalties After OCC Enforcement Orders

In recent years, the OCC has aggressively used its cease and desist authority to address a variety of supervisory problems, including unfair or deceptive acts or practices, Bank Secrecy Act/anti-money laundering (BSA/AML) compliance, and safety and soundness. As a result, a sizeable number of consent cease and desist orders are in place against OCC-supervised institutions. Although the OCC has issued consent orders against banks of all sizes, the largest institutions have been disproportionately affected, and many of them remain under longstanding consent orders.

Unfortunately, the issuance of a consent order does not necessarily resolve a bank’s supervisory issues with the OCC. In a series of high-profile cases last year, the OCC assessed civil money penalties against banks that were already subject to consent orders of various durations. Civil money penalties (CMPs) levied against five major banks in 2018 approached $800 million. All of these actions were for compliance breakdowns, and most involved deficiencies in BSA/AML compliance, which remains a perennial issue. The size of these penalties alone demonstrates that the OCC will act forcefully to ensure that timely corrective action is taken and that compliance with existing consent orders is vigorously enforced.

The assessment of a CMP following the issuance of a consent order is not unusual. In fact, it has long been a common practice for the OCC, especially in BSA/AML cases, to bifurcate its decision with respect to a penalty assessment from the remedial consent order. There can be various reasons for this, but two common reasons are to allow the OCC’s CMP process to be informed by the bank’s record of compliance with the consent order and the results of any “lookback” reviews, and to coordinate the OCC’s penalty action with any other agencies that are taking a concurrent action.

Action Plans

Consent orders typically require banks to develop a number of action plans to remedy the violation or unsafe or unsound practice that gave rise to the order. Although prepared by the bank, the action plans must be submitted to the agency for a determination of supervisory nonobjection. The plans should contain detailed action items that serve as a roadmap for bringing the bank back into compliance with the applicable legal and regulatory requirements. The OCC expects such plans to include specific deadlines for completion of each action item, and those deadlines effectively become the applicable deadlines under the consent order. Thus, a failure to achieve timely compliance with the action items in the plan will cause the bank to be deemed in noncompliance with the consent order itself. This is significant because not only is a violation of a consent order a basis for a CMP assessment in and of itself, but it starts the clock running for determining the number of applicable violation days for purposes of calculating the maximum penalty the agency can assess.
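To illustrate how quickly penalty exposure can grow once the violation-day clock starts, the following is a rough arithmetic sketch in Python. The per-day figure is a purely hypothetical placeholder; the actual ceilings are tiered under 12 U.S.C. § 1818(i) and adjusted periodically for inflation, and the OCC retains discretion in what it actually assesses.

```python
from datetime import date

# Hypothetical per-day ceiling for illustration only; actual CMP maxima are
# tiered by statute (12 U.S.C. § 1818(i)) and adjusted for inflation.
HYPOTHETICAL_PER_DAY_MAX = 10_000  # dollars

def max_cmp_exposure(missed_deadline: date, as_of: date,
                     per_day_max: int = HYPOTHETICAL_PER_DAY_MAX) -> int:
    """Rough ceiling on exposure: each day past an action-plan deadline
    without compliance counts as an additional day of violation."""
    violation_days = max((as_of - missed_deadline).days, 0)
    return violation_days * per_day_max

# Example: an action item due March 1, 2018 that remains open on September 1,
# 2018 accrues 184 violation days, or $1,840,000 at the hypothetical rate.
print(max_cmp_exposure(date(2018, 3, 1), date(2018, 9, 1)))
```

The point is not the particular numbers but that the maximum assessable penalty scales linearly with every day a bank remains out of compliance with its action-plan deadlines.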

Consequently, banks must carefully consider the elements of their action plans and set realistic deadlines for completion of each action item. Banks that are subject to consent orders must have not only a project team in place that can execute the action plan, but also governance over the entire process to ensure that it stays on track. Although a bank can request the extension of a deadline, such requests must be well supported and are not freely granted.

Although action plans can be amended if new problems are identified, this can result in the consent order remaining in place for an extensive period of time, especially if new violations or deficiencies are cited at subsequent examinations. In general, the longer a consent order is in place without the bank achieving compliance with all of its articles, the greater the chances that the agency will assess a CMP, in addition to the bank being subject to the restrictions and consequences of the consent order for a longer period of time.

Conclusion

The OCC’s 2018 CMP cases underscore the possibility that the agency may assess a CMP even after the bank stipulates to issuance of a consent order, and the likelihood of a CMP assessment is greater if the bank is deemed in noncompliance with the order. In order to mitigate the likelihood of a CMP, it is imperative that banks subject to a consent order make development of a remedial action plan a top priority, establish effective governance mechanisms to oversee implementation, and dedicate staff and resources to execute the plan and achieve compliance in a timely fashion.

Clash of the Titans: Federal Versus State Interests in Bank Partnerships

There is slow-moving, high drama happening in Colorado between the Federal Deposit Insurance Corporation (FDIC) and the administrator of the Colorado Uniform Consumer Credit Code (UCCC). This refers, of course, to the litigation filed by the Colorado UCCC administrator against Avant and related parties, and Marlette Funding and related parties (the Partners) (Zavislan v. Avant of Colorado LLC, 2017cv30377 (District Court City & County of Denver, Mar. 9, 2017); Meade v. Marlette Funding LLC d/b/a Best Egg, 2017cv30376 (District Court City & County of Denver, Mar. 3, 2017)). This drama intensified last fall when the regulator amended its complaint, originally filed in March 2017, to add national bank defendants. In these cases, the national bank defendants—Wilmington Trust, N.A. and Wilmington Savings Fund Society, FSB—act as the trustee for trusts established to hold bank-originated loans or receivables that are sold by the banks after origination.

At the heart of the litigation is who the “true lender” is on the loans made by the banks. The Partners assert that the banks involved in the partnerships are, in fact, the lenders. Conversely, the Colorado administrator asserts that the alleged partnerships are a mere sham—a way for nonbanks to avoid state laws by taking advantage of the power of banks to export interest rates and fees from their home states or states where they have a branch. This power, or “rate exportation,” is extended to banks because banks are given special treatment under federal and state law. It takes more to become a bank than simply to be a licensed lender. Banks are subject to rigorous oversight not only by their state regulator, if the bank is state-chartered, but also by a federal regulator, such as the FDIC, which, it turns out, has spent a great deal of time thinking about how its member banks work with the Partners. Additionally, state regulators, some more than others, despise that rate exportation exists because states would prefer to exercise control over all depository entities, sometimes asserting consumer protection as the ostensible basis for their desire to control the banks.

Although the FDIC is not named as a defendant in the Colorado litigation, the Colorado UCCC administrator is taking direct aim at the banking agency’s guidance to its member banks that engage in the partnership space. The FDIC has discussed third-party involvement with its member banks in numerous publications, including the Winter 2015 issue of Supervisory Insights, in which it offers guidance to participants in the bank partnership space. The FDIC notes that some marketplace lending companies operate through a cooperative arrangement with a partner bank. The FDIC describes the arrangement as follows:

In these cases, the bank-affiliated marketplace company collects borrower applications, assigns the credit grade, and solicits investor interest. However, from that point the bank-affiliated marketplace company refers the completed loan applications to the partner bank that makes the loan to the borrower. The partner bank typically holds the loan on its books for 2–3 days before selling it to the bank-affiliated marketplace company.

In July 2016, the FDIC went beyond the discussion in Supervisory Insights and issued proposed guidance for its member banks that work with the Partners to originate loans, including vendors involved in bank partnerships, supplementing the many financial institution letters the FDIC has issued on this topic. The FDIC requested comments on its proposed guidance that outlines the risks that may be associated with third-party lending, as well as the expectations for a risk-management program, supervisory considerations, and examination procedures related to third-party lending. The proposed guidance, which has never been finalized, describes third-party lending as an arrangement in which a bank relies on an outside source to perform a significant aspect of the lending process, such as originating loans for third parties, originating loans through third parties or jointly with third parties, and originating loans using platforms developed by third parties. The draft guidance supplements and expands on previously issued guidance and would apply to all FDIC-supervised institutions that engage in third-party lending programs.

In its publications on this topic, the FDIC has walked through factors a bank should examine before entering into a partnership with a nonbank entity. It directs its member banks to consider the partner’s compliance with applicable federal law, consumer protection requirements, anti-money laundering rules, and fair-credit obligations, as well as applicable state laws such as licensing or registrations necessary to engage in the partnership. The FDIC also asks its member banks to consider whether the partnership will meet the FDIC’s safety and soundness requirements. Specifically, member banks should consider the following questions:

  • What duties does the bank rely on the marketplace lending company to perform?
  • What are the direct and indirect costs associated with the program?
  • Is the bank exposed to possible loss, and are there any protections provided to the bank by the marketplace lending company?
  • What are the bank’s rights to deny credit or limit loan sales to the marketplace lending company?
  • How long will the bank hold the loan before sale?
  • Who bears primary responsibility for consumer compliance requirements, and how are efforts coordinated?
  • Is all appropriate and required product-related information effectively and accurately communicated to consumers?
  • What procedures are in place to prevent identity theft and satisfy other customer identification requirements?
  • What other risks is the bank exposed to through the marketplace arrangement?

In its complaints against Avant and Marlette Funding, the administrator trots out several facts as allegedly indicating that the bank is not the true lender in the partnership. Some questions fintech companies and their partner banks may ask themselves in light of the Colorado litigation include the following:

  • Did the partner pay an implementation fee? What was the amount of the implementation fee?
  • Does the partner pay the bank’s legal fees and expenses related to the partnership? Does the partner pay the expenses and legal fees that the bank incurs to negotiate the partnership? Does the partner pay the bank’s legal fees when the bank is sued over the partnership?
  • Does the partner bear all the expenses incurred in marketing the loans?
  • Does the partner pay all the costs of determining which loan applicants will obtain loans, including paying employees to evaluate loan applications, purchasing credit reports, and paying wire transfer and ACH costs for money transfers in connection with the loans?
  • Does the partner decide which applicants get loans, applying lending criteria agreed to by the partner and the bank?
  • Did the partner or the bank develop and implement the processes used by the partner to identify qualifying loan applicants?
  • Is the partner responsible for ensuring that the partnership complies with all applicable federal and state laws?
  • Who developed and implemented policies for the partnership, which were used to ensure compliance with laws such as the Bank Secrecy Act, the Truth in Lending Act, and others?
  • Who is responsible for all communications with loan applicants and borrowers, including providing adverse action notices or loan agreements?
  • Who is responsible for all servicing and administration of the loans, even before the bank sells the loans to the partner?
  • Who has the right to consumer information? If an applicant is denied credit, can the partner solicit the consumer for other credit products? Is the bank permitted to use the information from applicants or borrowers? If so, how?
  • Who bears the risk of loss of principal if a borrower defaults?
  • Is there a collateral account? Does it secure the purchase of the loans by the partner? Where is the collateral account held? How much money must be in the account?
  • What does the purchase price for the loans include? Does it only include the amount advanced to the borrower, or does it include other amounts?
  • How are the loans sold? With or without recourse?
  • Does the partner indemnify the bank from and against claims arising from the partnership?
  • How are the loans funded? Does the partner fund the loans? Does it raise money from institutional investors to fund the loans?
  • Who shares in the profit of a paid-off loan? What is the percentage of the distribution of profit?

The “rent-a-bank” or “true lender” theory advanced by the Colorado administrator does not derive from a statute or regulation. Rather, it derives from case law and brute disdain that rate exportation exists. The lawsuit represents a most serious affront to the power of banks both to export interest rates and to hire partners to help them do it. There is, in fact, no statute or regulation in Colorado that prohibits a bank from engaging the services of a partner to aid it in its lending activities.

Indeed, the FDIC noted in its Supervisory Insights that banks can manage the risks posed by potential partnerships through proper risk identification, appropriate risk-management practices, and effective oversight of the nonbank partner. Virtually all documents that memorialize partnerships will contemplate and address these questions and, importantly, the FDIC does not dictate what the answers should be; rather, the FDIC is concerned about the risks vendor relationships pose to its member banks that engage in third-party relationships as an exercise of their bank power.

As of this writing, no litigation has squarely addressed the tension between the FDIC’s directives to its member banks and the concept of the “true lender,” although the Colorado litigation certainly raises the question. The FDIC and its member banks should not shy away from this discussion, which raises significant public-policy issues over who gets to claim the mantle of consumer protection and what consumer protection should look like. Many consumers in Colorado no doubt want loans originated by banks through partnerships because they are often both faster and less expensive than available alternatives. In addition, an argument can be made that a bank acts as a true lender when it exercises the authority granted to it by the FDIC, its primary federal regulator. In other words, a bank does not magically stop being a bank by exercising its power to hire a partner. As pressure continues to tighten on bank partnerships through litigation such as the Colorado lawsuits and through legislative efforts to curtail bank powers, it is likely that banks involved in such partnerships, and their partners, will respond by asserting their power to engage partners consistent with the FDIC’s guidance.

Canada Supreme Court Rules That Privacy is Not An “All-or-Nothing Concept”

While considering the specific criminal charge of voyeurism, the Supreme Court of Canada recently confirmed that privacy is not an “all-or-nothing concept,” and that being in a public or semi-public space does not automatically negate all expectations of privacy with respect to observation or recording.

This case involved Mr. Ryan Jarvis, an English teacher at a high school who used a camera concealed inside a pen to make surreptitious video recordings of female students (particularly focusing on their faces, upper bodies and breasts) while they were engaged in ordinary school-related activities in common areas of the school. The students were unaware they were being recorded, and a school board policy in effect at the relevant time expressly prohibited the type of conduct engaged in by the accused. Mr. Jarvis was charged with voyeurism under s. 162(1)(c) of the Canadian Criminal Code (where a person surreptitiously observes or makes a visual recording of another person who is in circumstances that give rise to a reasonable expectation of privacy, if the observation or recording is done for a sexual purpose).

At trial, Mr. Jarvis admitted he had surreptitiously made the video recordings but the trial judge acquitted him because he was not satisfied the recordings were made for a sexual purpose. The Court of Appeal concluded that the trial judge had erred in law in failing to find that the accused made the recordings for a sexual purpose, but upheld the accused’s acquittal on the basis that the students were not in circumstances that give rise to a reasonable expectation of privacy since they were recorded in a “public” space (public areas of their high school which had security cameras recording them). 

The Supreme Court allowed the appeal (and entered the conviction) with the majority confirming that the students recorded by the accused were in circumstances that give rise to a reasonable expectation of privacy for the purposes of s. 162(1) of the Criminal Code. The Court found that “circumstances that give rise to a reasonable expectation of privacy” are circumstances in which a person would reasonably expect not to be the subject of the type of recording that had occurred, while considering the entire context in which the observation or recording took place. Significantly, the Court found that privacy is not an “all-or-nothing concept,” and whether an observation or recording would generally be regarded as an invasion of privacy depends on a variety of factors. The Court set out a non-exhaustive list of considerations, which include: (1) the location the person was in when she was observed or recorded, (2) the nature of the impugned conduct (whether it consisted of observation or recording), (3) awareness of or consent to potential observation or recording, (4) the manner in which the observation or recording was done, (5) the subject matter or content of the observation or recording, (6) any rules, regulations or policies that governed the observation or recording in question, (7) the relationship between the person who was observed or recorded and the person who did the observing or recording, (8) the purpose for which the observation or recording was done, and (9) the personal attributes of the person who was observed or recorded. 

Considering the overall context, the Court found that there can be no doubt that the students’ circumstances give rise to a reasonable expectation that they would not be recorded as they had been, especially considering that they were (i) teenage students at a high school; (ii) recorded by their teacher in breach of the relationship of trust; and (iii) in contravention of a formal school board policy that prohibited such recording. The Court confirmed that individuals “going about their day-to-day activities – whether attending school, going to work, taking public transit or engaging in leisure pursuits…reasonably expect not to be the subject of targeted recording focused on their intimate body parts (whether clothed or unclothed) without their consent.” In recording these videos, the accused acted contrary to the reasonable expectations of privacy that would be held by persons in the circumstances of the students when they were recorded.


Lisa R. Lifshitz

What To Do Next With Biometric Information in Illinois?

With the Illinois Supreme Court’s recent decision in Rosenbach v. Six Flags Entertainment Corp., the floodgates have opened for class actions in Illinois against businesses that collect biometric information from employees or customers. In Rosenbach, the Illinois Supreme Court ruled that alleged procedural violations of Illinois’s Biometric Information Privacy Act (“BIPA”) are enough, without alleging actual injury to an individual, to bring an action under the law. Although the details of that decision can be relevant to specific situations, you need to know what to do now in light of this new ruling, particularly if your company is currently collecting biometric information from customers or employees, or is considering doing so in the near future.

If your company has been collecting biometric data:

  • Initiate a rapid internal audit to determine how your company, or any agent or contractor you hire, is using biometric data for any reason (e.g., security for facilities or devices, time clock or other employment verification, or marketing to consumers).
  • Once you understand the scope of biometric data collection, implement BIPA’s requirements, which include: (1) informing an individual that his or her biometric information is being collected or stored; (2) informing the individual of the purpose of the collection, storage and/or its use, along with how long such information will be collected, stored, or used; and (3) receiving a written release from the individual to collect the information.

Since the Rosenbach ruling, we have seen a quick and significant increase in the number of BIPA class action lawsuits filed. If your company is currently facing a lawsuit over an alleged BIPA violation, consider taking the following steps: 

  • Remove the case to federal court, if possible. Based on Supreme Court precedent and a recent decision from an Illinois federal court, defendants facing these class actions may be able to challenge a plaintiff’s standing to bring suit based solely on a procedural violation of the statute where no actual harm has occurred.
  • Identify sources of either express or implied consent for the collection of biometric information. For example, employees may have received notice from an employee handbook about collection of their biometric data.  
  • Assert class action defenses related to typicality and commonality. Typicality is meant to ensure that the named plaintiff’s claims have the same essential characteristics as the claims of the entire class. If proof of the named plaintiff’s claims would not necessarily prove all of the proposed class members’ claims, the plaintiff fails the typicality requirement. Commonality requires plaintiffs to demonstrate that the class members have suffered the same injury, meaning that they were affected by the same violation of the same statute. Emphasizing dissimilarities among the proposed class members can demonstrate that class-wide commonality is lacking.

Finally, companies considering biometric data collection in Illinois should:

  • Prepare explicit disclosures and documents for written consent as required by BIPA.
  • Determine whether the collection of biometric data is truly necessary for the business, given the strict requirements of BIPA and increase in the number of lawsuits. If this data is necessary, collect as little as possible and consider whether it can be captured and not retained.
  • Avoid collection of biometric data in Illinois. Some companies have begun altering their behavior in Illinois to adhere to the law. For example, Nest, a maker of smart thermostats and doorbells, sells a doorbell with a camera that can recognize visitors by their faces. However, Nest does not offer that feature in Illinois because of BIPA.
  • Keep an eye on legislative developments. Many other states have considered biometric privacy legislation over the years, but only Texas (in 2009) and Washington (in 2017) have passed such laws. But that may change soon. In the first few weeks of 2019 alone, legislators have already introduced new bills in Arizona, Connecticut, New Hampshire, New Mexico, New York, Oregon, and Washington. These initiatives have the potential to introduce a conflicting national patchwork of regulations.
  • In Illinois, a bill (SB3053) is currently pending before the legislature to amend BIPA. The bill proposes to exempt private entities from BIPA’s requirements under a number of circumstances, including (1) if the biometric information is used “exclusively for employment, human resources, fraud prevention, or security purposes,” (2) if the company “does not sell, lease, trade or similarly profit” from the biometric information, or (3) if the company protects biometric information at least as securely as it secures other sensitive information.

Why the China-U.S. Tariff War Will Fizzle Out

This article is adapted from the recent Asia Ascending: Insider Strategies for Competing with the Global Colossus, published by the American Bar Association.


President Trump surprised many observers in August 2017 when he abruptly ordered the Office of the United States Trade Representative (USTR) to initiate an in-depth investigation into China and the need to protect intellectual property (IP). The USTR confirmed that 40 million American jobs were at risk because they are, either directly or indirectly, attributable to IP industries. The president used section 301 of the Trade Act of 1974 as the tool to target what he viewed as China’s unjustified actions harming certain U.S. industries. Interestingly, section 301 had been used in much the same way two decades earlier to target Japanese trade practices.

Six months later, after it was announced that the 2017 U.S. trade deficit with China had exceeded $375 billion, the issue came to a very public boil. The trade deficit with China now dwarfs the trade deficit with Japan during the 1990s. The Trump administration’s reaction was to impose high tariffs on some Chinese goods, with the threat of higher tariffs on a broader range of goods if significant changes were not implemented. As we all know, China retaliated.

Although it is unlikely to resolve quickly, I predict that this high-profile clash—“trade war” may be too strong a label—will fizzle out over time. Two key factors are driving the current trade dispute between the United States and China: economics and politics.

First, consider the economic factors. Although neither China nor the United States is willing to admit it publicly, the two countries are so closely intertwined economically that neither can afford an all-out trade war without triggering severe fiscal consequences for itself and for the overall global economy. Examining the major components of the $375 billion U.S. trade deficit with China, roughly $77 billion comes from sales of Chinese-made computers and another $70 billion from sales of cell phones assembled in China. Although these electronic products are sourced from China, many are sold throughout the United States under product names that U.S. consumers do not necessarily recognize as originating in China. Add in over $50 billion in shoes and clothing coming from China, and these three categories alone account for roughly half of the current U.S. trade deficit with China. It is difficult to imagine U.S. consumers willingly sacrificing their computers, mobile phones, and inexpensive clothing in exchange for a lower trade deficit with China. Most U.S. consumers do not care about the trade deficit; it simply is not important to them in their daily lives. However, prices for these goods will skyrocket, and their availability will become a real challenge, if a trade war continues throughout 2019. From China’s perspective, there is no question that losing the U.S. market for just these three categories of products would leave a vast hole in China’s export capacity and harm Chinese manufacturers.

Now look at it from the standpoint of U.S. exporters. Of the $130 billion of U.S. exports sold to China during the same period, $16 billion was for U.S.-built aircraft, $10 billion was for American automobiles, and another $13 billion was for soybeans. Although the Chinese obviously consider all of these U.S.-made products important and desirable, none is irreplaceable. Airbus SE, Europe’s aerospace consortium, would be delighted to dethrone Boeing as an aircraft supplier to China’s growing aviation market. Automobiles from the United States could easily be replaced by Japanese and European models. In addition, China blocked imports of U.S. soybeans in retaliation as the trade dispute escalated (China has since backed off somewhat and promised to buy U.S. soybeans, but this can still change). In short, although the Chinese would definitely be hurt during the first six to twelve months of an escalating trade war, in the end China can and will find alternative purchasers for its exported products, albeit at lower prices. Without question, U.S. companies now exporting to China and those with active Chinese operations will continue to feel pressure from Chinese authorities.

Next, think about the implications of the politics of a trade war between China and the United States. Much of the ongoing trade confrontation depends on the political wills of Donald Trump and Xi Jinping. As I describe in my recent book Asia Ascending, Xi Jinping today is the single most powerful political leader in the world. This is due to the unprecedented power and influence Xi has amassed over China during the last half decade. The best way to understand this is to picture China as a three-legged stool: the first leg is the Communist Party, which makes all policy and governmental decisions; the second leg is the centrally controlled Chinese economy; and the third leg is the immensely influential Chinese military. Few Americans understand or appreciate that Xi Jinping is China’s first leader since Mao to have complete mastery of all three legs of the stool. During the 19th Party Congress, Xi extended his existing term for at least another five years. Even more important, Xi has no identified successor and is thus likely to rule over China for a very long time. So although China would definitely suffer during an extended trade war with the United States, Xi Jinping has the ability to sit back and play a long waiting game until a settlement is negotiated. On the other hand, if President Trump runs for re-election in 2020, he can expect to encounter significant criticism and political opposition if the current trade war extends beyond the next six months. The president is under tremendous pressure to arrive at an acceptable accommodation with China as he attempts to solidify his political base for re-election.

The bottom line is that because China is such a centrally controlled economy, it is in a better position than the United States (with its current highly politicized atmosphere) to hold out during a trade war. In addition, as the holder of almost 20 percent of foreign-held U.S. public debt, China can play the “no longer underwriting” card and begin to sell off its current U.S. debt holdings, which would inevitably drive up interest rates and accelerate a downturn in the U.S. economy.

Without question, China presents many serious challenges to the United States that must be proactively addressed. These challenges include China’s efforts to gain or purchase Western intellectual property by any means possible and China’s ongoing manipulation of its currency to its own benefit. These are problems that must be resolved between China and the United States over time. The United States can and should continue to use sections 301 and 201 of the Trade Act of 1974 as one approach to make China more open; however, an all-out trade war is not the right way—at least for now.