Showing posts with label privacy. Show all posts

Monday, April 15, 2024

Hospital's use of Meta's Pixel, despite promise to keep data private, plausibly deceptive

Mekhail v. North Memorial Health Care, --- F.Supp.3d ---- , 2024 WL 1332260, No. 23-CV-00440 (KMM/TNL) (D. Minn. Mar. 28, 2024)

Mekhail alleged that North’s use of a piece of hidden software on its websites (a pixel developed by Meta) surreptitiously tracked, collected, and monetized various aspects of her online activity, including sensitive medical information protected by law. Although she alleged violations of the federal and Minnesota wiretap statutes and the Minnesota health records statute (which all survived the motion to dismiss), I’ll focus on claims under the Minnesota consumer fraud statute, the Minnesota deceptive trade practices statute, and common law claims of invasion of privacy and unjust enrichment.

Mekhail alleged that North’s public-facing website, which publicly offers information about medical issues and the health care resources provided by North, and its password-protected “patient portal,” which contains personal medical information, including patient records, appointment booking, and test results, both used the pixel to surreptitiously track, collect, and transmit her online activity, including page views, clicks, search terms, and so forth. This information was then allegedly collated by Meta and eventually used to craft targeted advertising to Mekhail related to her web activity.

Minnesota Consumer Fraud Act: The MCFA prohibits the “act, use, or employment by any person of any fraud, false pretense, false promise, misrepresentation, misleading statement or deceptive practice, with the intent that others rely thereon in connection with the sale of any merchandise.” Mekhail failed to allege a misrepresentation in connection with merchandise, as required by the statute. The alleged misrepresentation was North’s statement that it “protect[s] health and medical information as required by federal and state privacy law.” At oral argument, counsel offered the theory that the “exchange of data” between Mekhail and North represented an intangible good or commodity, but the complaint only referred to North’s medical services. And Mekhail didn’t allege that there was a misrepresentation made by North in connection with its provision of any medical services. She alleged a misrepresentation related to data privacy, “but North is not in the business of providing data privacy services.”

The Minnesota Unfair and Deceptive Trade Practices Act prohibits the use of “deceptive trade practices” in the course of business, vocation, or occupation, which include “caus[ing] likelihood of confusion or of misunderstanding as to ... certification of goods or services,” “engag[ing] in (i) unfair methods of competition, or (ii) unfair or unconscionable acts or practices,” and “engag[ing] in any other conduct which similarly creates a likelihood of confusion or misunderstanding.”

North allegedly made numerous statements that it protected patients’ medical privacy and health data. North disputed that anything shared with Meta was protected health data and also argued that some of the allegedly deceptive statements are linked to the Privacy Policy, which (allegedly) states that North “may disclose information to third parties who act for us or on our behalf.” But that wasn’t enough at the pleading stage to overcome the allegations of the complaint.

Article III standing: MUDTPA’s only remedy was injunctive relief for a “person likely to be damaged by a deceptive trade practice.” This requires a showing of likely future harm that is seemingly “indistinguishable from Article III’s threat-of-future-harm requirement for injunctive relief.”

Mekhail alleged that there were two likely future harms: where new data is taken from her by the Pixel, and where the data already taken by the Pixel is used in newly harmful ways. This first scenario was “in obvious tension” with the fact that she was, by her own allegation, a “former patient” of North. However, she argued that she could become a patient again, especially in an emergency situation. This was somewhat tenuous, but nonetheless,

there are real and undeniable scenarios in which Ms. Mekhail, despite her best efforts, becomes a patient again of North. And it is not clear to the Court that Ms. Mekhail could ever truly quantify the likelihood of such a scenario. After all, a medical emergency, like that contemplated in the pleadings, can arise as real and immediately as tomorrow or, with any luck, may never occur. It is simply not within Ms. Mekhail’s capacity to plead the kind of concrete likelihood typically required by our standing cases.

In addition, because she was once a patient, North allegedly has records of past treatment and appointments. Thus, she may have to use the patient portal even if she does not return as a patient. “If she needs to obtain or review her own medical records from North using the portal (surely the quickest and least burdensome way) she would once again be exposed to harm from the allegedly deceptive practices.”

But the second theory was stronger: “her data, already collected by the Pixel, remains beyond her control and may be used in harmful ways.” The complaint sufficiently pled a likelihood of future harm, if not a likelihood of future deception. To find no standing would deprive federal plaintiffs of the remedy the statute set out. Nonetheless, she would need to do more to actually obtain injunctive relief.

Invasion of privacy based on publication of private facts and intrusion upon seclusion: There wasn’t a sufficiently public dissemination of her health data to support the first theory. And an intrusion by North couldn’t be plausibly alleged because Mekhail conceded that it was Meta (or Meta’s Pixel), rather than North, that made the interception.

Unjust enrichment claims survived.

Thursday, June 29, 2023

Transatlantic Dialogue Workshop, Institute for Information Law (IViR), Amsterdam Law School Part 2: Data Access

Impulse Statement: Christophe Geiger: Relevance to © exceptions and limitations—access to © protected work is important for this work. Research organizations have exception in © Directive and also are vital to DSA, so we must look at both. Only digital coordinator-approved researchers are allowed access, with some limited exceptions similar to fallback provisions in DSM Directive art. 4.

Impulse Statement: Sean Flynn: Data protection can be seen as protecting right to privacy but can interfere with right to research. Need balancing/narrow tailoring. Duty to protect: duty to regulate third parties—protecting both privacy rights and researchers in data held by third parties. Duty to promote right of society to benefit from research—similar to duty to create libraries—use the idea to check if we’re balancing rights correctly, regulating appropriate third parties, creating institutions to implement rights.

Europeans were less generous in concepts of “educational”/ “scientific” research than his US perspective—formal research organizations may be required. Journalists in some key categories: are they involved in scientific research? Consumer organizations?

Senftleben: Subordinated to goals of the DSA—research has to be about systemic risk (or mechanisms used by platforms to control systemic risk), which interferes with the freedom of research. If we want researchers to understand what is going on, you have to open up the data silos anyway. Thus there would have been more than enough reason to include a provision opening up data for research in general—trust the research community to formulate the questions. Not reflected in provision. Para. 12 opens up a bit b/c it goes outside the vetted researcher dynamic, but systemic risk defines what can be done with the data.

Keller: the provision formally sets out a really dumb procedure: the researcher formulates the data request without any contact w/platform, gets approval from authority, then goes to platform, which has to respond in 2 weeks. Unlikely to be a format/type of query that is immediately possible to collect, and the platform can only object on 2 enumerated grounds. So the workaround is to create a more dynamic feedback process so researchers can ask for what platforms can actually give. Hopefully an entity set up to deal w/GDPR issues can also check whether the researcher is asking for the right data/what the parameters should be. Hangs on reference to “independent advisory mechanisms” to prevent the process actually described in the DSA from happening.

Elkin-Koren: Example of studying society, not just digital platforms: studying health related factors not caused by platforms but for which platforms have tons of data. Basic/exploratory research where you don’t know the specifics of data you want or specifics of research question but would benefit from exploring what’s there. The key innovation of the DSA is turning research from a private ordering Q into one of public ordering.

Quintais: you have to be careful about who you invite into your research—if the researcher is from outside the jurisdiction they may have to be excluded from the data.

Leistner: one strategy is to interpret research as broadly as possible; another is to ask whether the exception is exclusive. NetzDG used to have a broader scope; if a member state chooses to keep/provide a new exception for research purposes, maybe it is at liberty to do so—there’s no harmonization for general access to data for research purposes. Maybe that is necessary, and it would have to transcend the various IP rights, including © and trade secrets.

Keller: note that having platforms store data in structures amenable to researchers also makes them more attractive to law enforcement. Plus, researchers are likely to find things that they think are evidence of crimes. National security claims: NATO actually indicated that it wanted to be considered a covered research organization. In the US there’s a very real 1A issue about access, but the Texas/Florida social media cases include a question about transparency mandates—not researcher access like this but not unrelated. Also 4A issues.

Comment: No explicit consideration of IP in grounds for rejection but third-party data leads to the same place.

Van Hoboken: Bringing different parts of civil society together for platform accountability for VLOPs; data access is the way to bring in researchers on these risks/mitigation measures. If this provision didn’t exist, you’d have VLOPs doing risk audits/mitigation measures but no way to compare. Some requests will be refused if the platforms say “this isn’t really a risk.” Platforms may also have incentives to deny that something is a mitigation measure to avoid research access. Mid-term value—won’t work fast and maybe will ultimately be defeated.

Goldman: What are Internet Observatory’s experiences w/benefits & threats by House Republicans?

Keller: serious internet researchers among the many academic researchers in the US targeted by various far right people including members of Congress and journalists with good relations w/Elon Musk, targeted as Democratic elite censorship apparatus: allegedly by identifying specific tweets as disinformation, they contributed to suppression of speech in some kind of collusion w/gov’t actors. About 20 lawsuits; [Goldman: subpoenas—information in researchers’ hands is being weaponized—consider this as a warning for people here. His take: they’re trying to harm the research process.] Yes, they’re trying to deter such research and punish the people who already did it, including students’ information when students have already had their parents’ homes targeted. Politicians are threatening academic speech b/c, they say, they’re worried about gov’t suppressing speech.

Goldman: consider the next steps; if you have this information, who will want it from you and what will they do with it? A threat vector for everyone doing the work.

Keller: relates to IP too—today’s academic researcher is tomorrow’s employee of your competitor or of the gov’t; researchers are not pure and nonoverlapping w/other categories.

Elkin-Koren: is the data more secure when held by the platform, though? Can subpoena the platform as well as the university.

Goldman: but you would take this risk into account in your research design, though.

Van Hoboken: At the point this is happening, you have bigger democratic problems; in Europe we are trying to avoid getting there and promote research that has a broader impact. But it’s true there are real safety and politicization issues around what questions you ask.

Goldman: bad faith interpretation of research: the made up debate over

RT: Question spurred by a paper I just read: is the putative “value gap” in © licensing on UGC platforms a systemic risk? Is Content ID a mitigation measure?

[various] Yes and no answers. One: © infringement is illegal content, so you could fit it in somewhere, but to create a problem, it would have to go beyond the legal obligations of Art. 17 b/c there’s already a specific legal obligation.

Keller: don’t you need to do the research to figure out if there’s a problem?

Yes, to study effects of content moderation you need access; can get data with appropriate questions. Could argue it’s discriminatory against independent creators, or that there is overfiltering, which there isn’t supposed to be. But that’s not regulated by Art. 17.

Catch-22—you might have to first establish that Content ID is noncompliant before you can get access.

Frosio: you might need the data to test whether there is overblocking. [Which is interesting—what about big © owners who say that it’s not good enough & there’s too much underblocking? Seems like they’d have the same argument in reverse.]

Would need a very tailored argument.

Quintais Follow-up: had conversations with Meta—asked for data to assess whether there was overblocking and their response was “it’s a trade secret.”

Samuelson: Art. 40 process assumes a certain procedure for getting access. One question is can you talk to the platforms first despite the enumerated process. Some people will probably seek access w/o knowing if the data exists. There’s an obligation to at least talk to the approved researchers. But the happy story isn’t the only story: platforms could facilitate good-for-them research.

A: the requirements, if taken seriously, can guard against that—have to be a real academic in some way to be a vetted researcher; reveal funding; not have a commercial interest; underlying concept: the funder can’t have preferred access to the results. Platforms can already fund research if they want to.

Flynn: Ideological think tanks?

A: probably won’t qualify under DSA rules.

Samuelson: but the overseers of this access won’t be able to assess whether the research is well-designed, will they?

A: that’s why there’s an in-between body that can make recommendations. They propose to provide expertise/advice.

Leistner: Art. 40 comes with a price: concentration of power in Commission, that is the executive and not even the legislature. Issues might arise where we are as scared of the Commission as US folks are of Congress at the moment. That doesn’t mean Art. 40 is bad, but there are no transparency duties on the Commission about what they have done! How the Commission fulfills this powerful role, and what checks and balances might be needed on it, needs to be addressed.

Paddy Leerssen: Comment period: US was the #1 country of response b/c US universities are very interested in access. Scraping issues: access to publicly accessible data/noninterference obligations. How far that goes (overriding contracts, © claims, TPMs) is unclear. Also unclear: who will enforce it.

Conflict with open science/reproducibility/access to data underlying research. Apparent compromise: people who want to replicate will also have to go through the data request process.

Leistner: but best journals require access to data, and giving qualified critics access to that underlying data—your agreement with Nature will say so.

Tuesday, January 25, 2022

does disparaging a company cast its principal in a false light?

Chaverri v. Platinum LED Lights LLC, 2022 WL 204414, No. CV-21-01700-PHX-SPL (D. Ariz. Jan. 24, 2022)

Plaintiffs (Mito Red) sell red-light therapy products online, in competition with Platinum (which uses the Volkin defendants’ marketing services). Platinum allegedly hired the Volkin defendants to “engage in a strategic defamation campaign online designed to ruin Plaintiffs’ professional reputation and to divert Plaintiffs’ customers away from their products and to Platinum’s competitive products.”

Among other things, Mito Red alleged that blog posts/video such as “Mito Red Light Therapy Scam: What Are They Lying About?” misrepresented their status as neutral reviews or critiques when in fact they were not, and that Platinum told customers that Mito Red “fabricates statistics, uses different LEDs than claimed, and that the lights are cheap and/or low quality knockoffs of Platinum’s lights.”

The statement that “Leaders come first and then all the followers. Mito Red here is the follower” was puffery. Likewise, Mito Red didn’t sufficiently plead falsity as to a blog post that said that Mito Red claims to have up to a three-year warranty even though other parts of its website “say[ ] otherwise,” and that as a result, customers “might just get scammed” out of redeeming their warranties based on “loopholes” on Mito Red’s website. Though the complaint alleged that Mito Red’s warranty terms are clearly stated on its website, that didn’t address the arguably falsifiable part of the statement—that parts of the Mito Red website cut back on the three-year warranty—and the rest was puffery because uncertain terms like “might” and subjective terms like “scam” and “loophole” were generic and vague.

Statements that “Mito Red literally ripped off [Platinum’s] design” and that “[Mito Red] literally took the framing construction of the Platinum LED lights and just changed the logo on the side. Other than that, it’s the exact same as far as a construction standpoint” were, however, sufficiently alleged to be falsifiable given the use of the word “literally” and the reference to specific product characteristics. “Hopefully, from a legal perspective [Mito Red] will get caught,” required more analysis: it came after “a section of the video in which the narrator alleges that Mito Red’s products use three-watt bulbs, which are less powerful than the five-watt bulbs Mito Red says it uses.” Relying on an earlier case with similar “hope” language, the court found it plausible that the statement could be understood as a statement of fact that Mito Red was acting criminally, making it actionable.

But this was “imprecise, generic, and vague” and thus puffery: “The design of the Mito Red Lights devices is not unique either, they mostly take the designs of their competitors’ devices and then use that in their own devices. And they are not providing the customers with anything new with an act like that.”

For other statements about the wattage/irradiance of Mito Red’s products, it was not conclusory to allege that Platinum’s statements were false because the products were truthfully advertised as five watts: that alleged falsity even if there could be a factual dispute over measurement.

The same results followed for the defamation claims.

Interestingly—and it seems to me wrongly—the court likewise refused to dismiss false light invasion of privacy claims brought by Chaverri, even though he was never named, because “statements made about Mr. Chaverri’s business certainly concern him and are about business matters for which he was directly responsible—a fact reasonably discerned from his role at Mito Red. It is plausible, from the facts alleged in the SAC, that the statements created a false implication about Mr. Chaverri even though he was not expressly mentioned.” A false light claim requires “a major misrepresentation of the plaintiff’s character, history, activities, or beliefs, not merely minor or unimportant inaccuracies.” “[A]llegations of negative reviews by a competitor suffice to plausibly state a claim for false light in this case.”

 


Wednesday, January 19, 2022

False advertising-based antitrust claims against Facebook survive motion to dismiss

Klein v. Facebook, Inc., 2022 WL 141561, No. 20-CV-08570-LHK (N.D. Cal. Jan. 14, 2022)

Once in a blue moon, a false advertising-based antitrust claim survives a motion to dismiss in a circuit that imposes a list of excessive requirements on such claims.  That time has come for Facebook. Consumers and advertisers adequately alleged that Facebook has monopoly power in social network/social media (consumers) and social advertising markets. Though I’ll detail the advertising-based claims below, I will also note that the court did dismiss claims based on Facebook’s “Copy, Acquire, Kill” strategy as untimely. Advertiser claims based on Facebook’s Network and Bidding Agreement with Google also survived, while the court dismissed consumers’ unjust enrichment claims with leave to amend.

Plaintiffs successfully alleged that “Facebook acquired and maintained monopoly power by making false representations to users about Facebook’s data privacy practices.” The complaint pled a lot of specifics about how much consumers cared about privacy; how much Facebook advertised its privacy practices as better than they were; and how bad they actually were.

False advertising can violate the Sherman Act if a monopolist’s representations about its own products or its rivals’ products “were [1] clearly false, [2] clearly material, [3] clearly likely to induce reasonable reliance, [4] made to buyers without knowledge of the subject matter, [5] continued for prolonged periods, and [6] not readily susceptible of neutralization or other offset by rivals.”

Falsity: There’s a lot of detail I’m skipping, but in essence, Facebook knew that users wanted privacy and advertisers wanted users not to have privacy, so it concealed the extent of its data use, allowing it to “beat out companies that were truthful about their user data practices or did not collect and sell user data.” “Indeed, Facebook’s initial success in the Social Network and Social Media Markets arose directly from competitors’ failure to keep users’ data private,” particularly Myspace’s. Representative Zuckerberg quote (of which there are many): “I founded Facebook on the idea that people want to share and connect with people in their lives, but to do this everyone needs complete control over who they share with at all times.” Meanwhile, it was collecting and selling user data to third parties in ways that did not match its public representations. E.g., it used Beacon to track users who clicked “No, Thanks” to purportedly opt out; provided user data—and the data of users’ friends—to third party developers despite claiming in multiple fora that “Facebook does not give advertisers access to people’s personal information”; etc. etc. Even after the 2011 FTC settlement, it deceptively tracked users and gave data to third party developers.

The complaint also alleged in detail how these deceptions helped FB obtain and maintain monopoly power. For example, it defeated Google+ in part because of privacy concerns, along with network effects. In fact, “Facebook realized that it could not allow users to find out about Facebook’s privacy practices while Google+ was a viable alternative,” e.g. an executive stating that “it would be unwise to remove privacy protections because ‘IF ever there was a time to AVOID controversy, it would be when the world is comparing our offerings to G+.’” The executive stated that FB could remove those protections after “the direct competitive comparisons begin to die down.”

This “clear[]” falsity was alleged with sufficient particularity. Analogizing to securities fraud, the court required clear falsity to be a material misrepresentation/omission that was capable of objective verification, as opposed to puffery. “Indeed, the Ninth Circuit’s statement that misrepresentations are anticompetitive only if they are ‘clearly false’ and ‘clearly material’ mirror the basic requirements of a securities fraud claim.” Likewise, Rule 9(b) pleading requirements provided a structure for identifying the requisite clarity. Although several of the representations identified were puffery (“[k]eeping the global community safe is an important part of our mission – and an important part of how we’ll measure our progress going forward”), many were not, specifically representations that FB wasn’t sharing private information with third parties; statements about the Beacon tool; and statements that FB didn’t use cookies to collect users’ data for commercial purposes.

Were the claims timely? Non-original observation: If techniques are used to obtain monopoly power, that seems inherently in tension with requiring claims to be brought very quickly. Anyway, the claims weren’t time-barred on the face of the complaint. The limitations period is four years, but the “period of limitations for antitrust litigation runs from the most recent injury caused by the defendants’ activities rather than from the violation’s inception.” To qualify as an “overt act,” the act that restarts the limitations period must satisfy “two criteria: 1) It must be a new and independent act that is not merely a reaffirmation of a previous act; and 2) it must inflict new and accumulating injury on the plaintiff.” (The argument that each misrepresentation about privacy is a mere reaffirmation seems inherently in tension with the big claim of big tech that competition is "only a click away," since continued belief in the representations is necessary to avoid that click.)

The relevant date here was December 3, 2016, and the consumer plaintiffs adequately alleged at least two false representations after then. First, on February 2, 2017, Facebook stated in an SEC filing that Facebook provides only “limited information to [third party application developers] based on the scope of services provided to us.” Second, in March 2018, Zuckerberg called the Cambridge Analytica incident a “mistake,” pledged to take action against “rogue apps,” and stated that “[w]e have a responsibility to protect your data, and if we can’t then we don’t deserve to serve you.” These were adequately alleged to be clearly false, since the 2017 statement “would have given reasonable users the impression that Facebook was not providing third party applications with private information,” whereas Facebook had in fact provided users’ private information to numerous third party applications, including applications for which users were not registered. “For example, although Cambridge Analytica had only 270,000 users, Cambridge Analytica ‘was able to access the personal data of up to 87 million Facebook users.’” Zuckerberg’s statement likewise would have given reasonable users the impression that Cambridge Analytica was a “rogue app” and that Facebook had not been systematically providing users’ private information to third party application developers, but at least 10,000 applications had been able to access similar data for the entire period since the FTC settlement.

FB argued that its false statements after 2016 were mere reaffirmations of a previous strategy, not new and independent acts.  But an act is not a reaffirmation “simply because the defendant has previously committed the same type of act as part of a unified anticompetitive strategy.” The Ninth Circuit has clearly held that “if a defendant commits the same anticompetitive act multiple times, each new act restarts the statute of limitations for all the acts.”

The complaint also sufficiently alleged that the new false representations allowed Facebook to maintain a “critical mass of users” “by convincing users that Facebook was protecting their data.” After all, “improperly prolonging a monopoly is as much an offense against the Sherman Act as is wrongfully acquiring market power in the first place.”

Further, the consumers adequately alleged that the false statements were “ ‘not readily susceptible of neutralization or other offset by rivals.’ ”  From the existing cases, the court derived a perfectly understandable principle that technical product aspects that are difficult for customers to confirm are “not readily susceptible of neutralization.” When “any customer who tried to obtain the defendant’s services could discover that this representation was false,” by contrast, the falsity was readily capable of neutralization. Plaintiffs successfully alleged that the deceptive privacy practices could not have been revealed “by anybody without significant technical expertise.” Indeed, plaintiffs pled that “even sophisticated third parties, such as developers and search engines, cannot access user data without Facebook’s permission, let alone determine what Facebook is doing with user data.”

While FB argued that other firms “could have improved their own policies, or called attention to Facebook’s supposed misstatements,” it didn’t explain how rival firms could have known that Facebook’s statements were false when Facebook made them. “[T]here was no publicly available information that Facebook’s rival could have consulted to determine whether Facebook’s representations about its data privacy practices were true.”

Clearly material: FB argued that the consumers didn’t explain how “Facebook’s alleged misrepresentations prevented other well-resourced firms—like Google or Snapchat—from competing effectively.” Plus, there were other “competing theories for Facebook’s success,” “including Facebook’s ‘realness,’ which is alleged to be Facebook’s ‘distinguishing feature.’ ” But the Ninth Circuit has set out a comprehensive test for whether false advertising can violate the Sherman Act, see above, and alternative explanations aren’t part of the test where materiality is present. Both securities fraud and Lanham Act cases extensively address materiality, and the court used them as guidance: materiality means likelihood of influencing consumer decisions, so “clearly material” requires plaintiffs to show that “customers would consider the representation important in deciding whether to use the defendant’s product or that the representation was likely to influence customers to use the defendant’s product.”

Plaintiffs did that. For example, consumer surveys showed the importance of privacy, and FB’s own statements repeatedly recognized that users would not use Facebook unless Facebook promised privacy protections. E.g., Zuckerberg explained that the reason “Facebook became the world’s biggest community online” was that Facebook “made it easy for people to feel comfortable sharing things about their real lives.” Under these circumstances, it was “more than plausible” that users would have considered these representations important in determining whether to use Facebook.  

There was no requirement that the falsity be the “but-for” or sole cause of consumer behavior, as FB argued. And indeed, FB’s argument ignored that privacy was the foundation of its purported alternative causes—the consumers alleged that FB’s representations about its data privacy practices were essential to creating Facebook’s “realness,” starting with its initial limitation to people who could verify that they were part of college communities.

Causal antitrust injury: The consumers alleged that Facebook’s monopolization of the Social Network and Social Media Markets harmed users because, without competition, Facebook can extract additional “personal information and attention” from users. A cognizable antitrust injury includes harm to a plaintiff’s “business or property.” Consumers adequately alleged that their “information and attention” had sufficient material value to constitute harm to “property,” given that those things have material value to advertisers. “In other words, users provide significant value to Facebook by giving Facebook their information—which allows Facebook to create targeted advertisements—and by spending time on Facebook—which allows Facebook to show users those targeted advertisements.” Indeed, FB’s revenue per user in the US in 2019 was over $41, making the material value of consumers’ information and attention undeniable. Even without FB’s own estimates, the consumers identified other companies willing to pay users for information and attention.

And consumers adequately alleged causation: Had FB not eliminated competition in the social markets, they would have been able to “select a social network or social media application which offers consumers services that more closely align the consumers’ preferences, such as with respect to the content displayed, quantity and quality of advertising, and options regarding data collection and usage practices.” In more competitive markets, some companies pay users for their data. For example, “[w]hen consumers agree to use Microsoft’s ‘Bing’ search engine and allow Microsoft to collect their data, Microsoft ... compensates consumers with items of monetary value.” Plus, with more competition, FB itself would plausibly have collected less data as part of the bargain: The fact that FB acted more hesitantly when G+ was around was indicative of that.

Relatedly, consumers’ request for injunctive relief was not barred by laches, given the timeliness of the claim. FB argued laches because its 2011 FTC settlement was public. But when claims are timely, “the strong presumption is that laches is inapplicable.” Moreover, FB failed to explain why consumers would know, because of the 2011 settlement, that FB continued to deceive them thereafter.

 

Wednesday, September 22, 2021

Netflix prevails over claims by lawyers that they were misportrayed in money laundering film

Mossack Fonseca & Co., S.A. v. Netflix Inc., 2020 WL 8509658, No. CV 19-9330-CBM-AS(x) (C.D. Cal. Dec. 23, 2020)

MFSA brought trademark dilution and false advertising claims against Netflix for its portrayal in the film “The Laundromat.” (It’s about money laundering.) No. (Libel/false light claims aren’t addressed in this decision; see below.)

Rogers governed the false advertising claim. There was artistic relevance because the film is about MFSA and the Panama Papers, so the use of the mark was relevant to the film. And using a mark without the owner’s authorization does not explicitly mislead consumers about the source or content of the film. Gordon v. Drape Creative, Inc., 909 F.3d 257 (9th Cir. 2018), is not to the contrary, because Netflix used the mark in a different context, as opposed to using it exactly the same way the plaintiffs do. “Plaintiffs use their mark in the offshore shell company finance industry, whereas Defendant used Plaintiffs’ mark in a film.” Plus, and also distinguishable from Gordon, the mark appears in several scenes of the film, “and is therefore only one component of Defendant’s larger expressive work.” This was not explicitly misleading.

MFSA also argued that the trailer made false statements, because it “portrays the Plaintiffs as criminals and/or in the false light of criminality in the provision of their services as overseas lawyers.” But they failed to identify any false statement in the trailer for the Film. And use of MFSA’s logo in the trailer was also protected by Rogers.

Trademark dilution/tarnishment. Among the problems, Netflix’s use of the MFSA logo was noncommercial because it had some artistic relevance to the film. (Not precisely the full reason, but really I can’t blame the court for cutting some corners on a claim this terrible.)

Mossack Fonseca & Co., S.A. v. Netflix Inc., 2020 WL 8510342, No. CV 19-9330-CBM-AS(x) (C.D. Cal. Dec. 23, 2020)

Special motion to strike the state-law claims of libel/false light invasion of privacy. “The Laundromat” is allegedly “based on” investigative journalist Jake Bernstein’s book entitled Secrecy World: Inside the Panama Papers Investigation of Illicit Money Networks and the Global Elite. It “tells the story of the documents known as the Panama Papers ... leaked in 2015,” which “revealed how Panamanian law firm Mossack Fonseca illegally funneled money for the wealthy in Panama and worldwide.”

Plaintiffs initially failed to authenticate internet stories reviewing the film, e.g., the description: “When a widow gets swindled out of insurance money, her search for answers leads to two cunning lawyers in Panama, who hide cash for the super rich.”

The film was disseminated in a public forum, and it covered a public issue/an issue of public interest. The burden shifted to MFSA to show a probability of success on their claims.

They didn’t.

The Court finds no reasonable viewer of the Film would interpret the Film as conveying “assertions of objective fact,” particularly given the statement at the beginning of the Film “BASED ON ACTUAL SECRETS” which sets the stage and the disclaimer at the end of the Film that states the Film is fictionalized for dramatization and is not intended to reflect any actual person or history.

Even assuming a reasonable viewer would view the Film as statements of actual fact, the Film does not portray Plaintiffs as directly involved in the murders, drug cartels, and other criminal activity committed by their clients as referenced in the Complaint.

And the complaint admitted that some of the offshore entities created by Plaintiffs “appears to have been utilized by some [end users] for criminal activity including, but not limited to, money laundering, tax evasion, bribery and/or fraud.” So the film’s portrayal of persons for whom MFSA created shell companies as engaging in criminal activity was not false. Fonseca and Mossack were also criminally charged, so depicting them as being arrested and jailed wasn’t false. There was no reason to allow them discovery.

 

Tuesday, August 17, 2021

data breaches can lead to a potpourri of claims

In re Blackbaud, Inc., Customer Data Breach Litig., No. 3:20-mn-02972-JMC, MDL No. 2972, 2021 WL 3568394 (D.S.C. Aug. 12, 2021)

Query whether this kind of case will come out differently as TransUnion v. Ramirez gets further assimilated into the law.

Blackbaud (good name!) “provides data collection and maintenance software solutions for administration, fundraising, marketing, and analytics to social good entities such as non-profit organizations, foundations, educational institutions, faith communities, and healthcare organizations.”

It stores both PII and Protected Health Information from its customers’ donors, patients, students, and congregants. Plaintiffs “represent a putative class of individuals whose data was provided to Blackbaud’s customers and managed by Blackbaud,” thus they weren’t direct customers of Blackbaud.

In early 2020, “cybercriminals orchestrated a two-part ransomware attack on Blackbaud’s systems,” copying plaintiffs’ data and holding it for ransom. The cybercriminals then attempted but failed to block Blackbaud from accessing its own systems. “Blackbaud ultimately paid the ransom in an undisclosed amount of Bitcoin in exchange for a commitment that any data previously accessed by the cybercriminals was permanently destroyed.” [Um. That commitment seems … hard to believe?]

Plaintiffs alleged that the attack resulted from Blackbaud’s “deficient security program” and failure to comply with industry and regulatory standards. Its forensic report found that “names, addresses, phone numbers, email addresses, dates of birth, and/or SSNs” were disclosed in the breach but allegedly improperly concluded that there was no credit card data taken. Plaintiffs also alleged that Blackbaud failed to provide them with timely and adequate notice of the attack and the extent of the resulting data breach. In its July 2020 disclosures, Blackbaud asserted that the cybercriminals did not access credit card information, bank account information, or SSNs. But its September 2020 Form 8-K with the Securities and Exchange Commission said that SSNs, bank account information, usernames, and passwords might have been taken. This litigation followed.

This opinion addresses certain statutory claims, highlighting variation around the country in both specific data breach and general consumer protection claims.

California Consumer Privacy Act:

The CCPA

provides a private right of action for actual or statutory damages to “[a]ny consumer whose nonencrypted and nonredacted personal information ... is subject to an unauthorized access and exfiltration, theft, or disclosure as a result of the business’s violation of the duty to implement and maintain reasonable security procedures and practices appropriate to the nature of the information to protect the personal information[.]”

Blackbaud argued that it was not a “business” regulated by the Act. Short answer: it was adequately alleged to be one.

California Confidentiality of Medical Information Act: One plaintiff plausibly alleged that her “medical information” was disclosed during the attack, and that Blackbaud plausibly qualified as a “medical provider” under the CMIA despite its lack of direct contact with her.

Florida Deceptive and Unfair Trade Practice Act: Monetary recovery requires “(1) a deceptive act or unfair practice; (2) causation; and (3) actual damages.” Blackbaud’s alleged bad practices were failing to adopt reasonable security measures and adequately notify customers and Plaintiffs of the data breach; misrepresenting that certain sensitive PII was not exposed during the breach, that it would protect Plaintiffs’ PII, and that it would adopt reasonable security measures; and concealing that it did not adopt reasonable security measures. However, the Florida plaintiffs failed to sufficiently allege actual damages, which under FDUTPA are “economic damages related solely to a product or service purchased in a consumer transaction infected with unfair or deceptive trade practices or acts.” A plaintiff may not recover for “damage to property other than the property that is the subject of the consumer transaction.” Here, Blackbaud’s data management software was “the property that is the subject of the consumer transaction,” not the data itself. And these plaintiffs didn’t allege damage to that property, only to their own bank accounts, emotional well-being, and data.

However, the Florida plaintiffs did state a claim for injunctive relief, since FDUTPA makes “declaratory and injunctive relief available to a broader class of plaintiffs than could recover damages,” as long as a plaintiff is “a person ‘aggrieved’ by the deceptive act or practice.” Plaintiffs alleged that Blackbaud’s misrepresentations and omissions about its security efforts and the scope of the Ransomware Attack “prompted them to take mitigation efforts out of fear that they were at an increased risk for fraud or identity theft.”

New Jersey Consumer Fraud Act: Blackbaud argued that its services weren’t within the scope of the NJCFA because it sells services to sophisticated businesses and entities, not the general public. The NJCFA prohibits a person from using an “unconscionable commercial practice, deception, fraud,” or the like “in connection with the sale or advertisement of any merchandise or real estate.” Merchandise is defined as “any objects, wares, goods, commodities, services or anything offered, directly or indirectly to the public for sale.” New Jersey courts have said that the law’s applicability “is limited to consumer transactions which are defined both by the status of the parties and the nature of the transaction itself.” Although the NJCFA does not define “consumer,” New Jersey courts have interpreted the term to mean “one who uses economic goods and so diminishes or destroys their utilities.” A plaintiff does not qualify as a “consumer” if they do not purchase a product for consumption. Thus, the NJ plaintiffs weren’t “consumers” entitled to the protection of the NJCFA. Nor were donations to the entities that transacted with Blackbaud enough. Donors are not “consumers” under the NJCFA because they are “not being approached in their commonly accepted capacity as consumers” and a donation “involves neither commercial goods nor commercial services.” Plaintiffs didn’t allege that they purchased or used Blackbaud’s services, knew Blackbaud existed, or perceived that Blackbaud managed their data.

New York General Business Law § 349: This requires a consumer-oriented practice, which occurs if it has “a broader impact on consumers at large,” or “something more than a single-shot consumer transaction or a contract dispute unique to the parties.” However, GBL § 349 does “not impose a requirement that consumer-oriented conduct be directed to all members of the public[.]” Unsurprisingly, the allegations here adequately established consumer-oriented conduct.

Privity isn’t required under GBL § 349, so it was irrelevant that the NY plaintiffs weren’t direct consumers of Blackbaud. Section 349(h) specifically empowers “[a]ny person who has been injured by reason of any violation of this section” to bring an action. GBL § 349(h). “The critical question, then, is whether the matter affects the public interest in New York, not whether the suit is brought by a consumer or a competitor.”

Pennsylvania Unfair Trade Practices and Consumer Protection Law: The UTPCPL provides a private cause of action to “[a]ny person who purchases or leases goods or services primarily for personal, family or household purposes and thereby suffers any ascertainable loss of money or property, real or personal, as a result of the use or employment by any person of a method, act or practice declared unlawful” by the Act. “It is the plaintiff’s burden to prove justifiable reliance in the complaint.” Again unsurprisingly, the Pennsylvania plaintiff failed to sufficiently allege reliance on Blackbaud’s misrepresentations and omissions. She instead alleged that she was “required to provide her PHI to her healthcare provider as a predicate to receiving healthcare services[,]” and didn’t allege that she knew that Blackbaud maintained her data or that she was exposed to representations Blackbaud made to her or her healthcare provider. Her allegation that she “would not have entrusted her Private Information to one or more Social Good Entities had she known that one of the entity’s primary cloud computing vendors entrusted with her Private Information failed to maintain adequate data security” was merely conclusory. Courts sometimes presume reliance, but only in cases involving life-threatening defects.

South Carolina Data Breach Security Act: The provision plaintiffs sued under covered only entities that “own[] or licens[e] computerized data or other data that includes personal identifying information,” requiring them to notify South Carolina residents in the event of a data breach; Blackbaud didn’t own or license the data; its possession was insufficient. True, a separate provision of the law required someone “maintaining computerized data or other data that includes personal identifying information that the person does not own” to notify the owner or licensee after a data breach, but plaintiffs didn’t assert claims under that provision.  

Thursday, August 05, 2021

IPSC Panel 12 – Identity, Data, and Privacy

Dustin Marlan, The Dystopian Right of Publicity

Privacy problems (surveillance) are often analogized to the dystopia of 1984; ROP problems stemming from infinite transferability can be analogized to Brave New World (1932). A state of unfreedom that is apparently chosen and pleasurable (though enforced by drugs and conditioning). This is relevant to the extent that everyone has a ROP. ROP is also criticized when applied to use of celebrity personae in expressive works. Is that a preference for amusement over discourse? There are only about 18 celebrity personality cases/year. That’s not nothing and litigated cases aren’t everything, but wants to focus on publicity interests of average citizens: the pleasurable servitude problem. Risk of identity loss means that “everyone belongs to everyone else,” as the slogan used in Huxley’s book goes. Class action ROP lawsuits against social media: result was broader consents in TOS. Voluntary relinquishment of identity control in return for the benefits of social media. Commodification of identity as a prerequisite for social media access. Social, political problem; social networks get monopolies over human capital.

Proposal: clickthrough policies designed to educate the public, maybe choices. 1A shouldn’t be a barrier to regulation b/c the use for endorsement is commercial speech.

Rothman: Does not agree that ROP is the coined opposite of the right of privacy, nor that it should have a purely economic and commercial focus. See her book. Also in her count there are 100s of ROP cases/year—order of magnitude more.

RT: Suggestion: Read Ashley Mears, Very Important People, on pleasurable exploitation and its relation to commodification and anti-commodification norms. Doesn’t have policy discussion itself, but has implications for solutions where individual relations seem pleasurable. Discussion seems indifferent to hidden data use; endorsement is almost literally the tip of the iceberg of individual data use. Proposal seems pretty weak tea; disclosure won’t work if they can still condition access on agreement.

A: On disclosure: Wants to be realistic about what could happen.

Wu: seems more unwitting [without thinking about it one way or another] and unavoidable transfer than pleasurable transfer. But in context of social media, the pleasure is inextricable from the agreement [and it’s not surprising that the agreement would then be experienced as, at least, not a problem].

Bita Amani, Authoring Identity: Copyright, Privacy, and Commodity Dissonance in the Digital Age

Emerging threats to capacity for self-authorship seem greater than in the past—here, algorithmic errors may generate disruptions in identity construction. Personal experience with multiple Bita Amanis with related interests. This has led to problems with health care records (since corrected), misattribution of credit (interviews), and database connections treating them all as the same author.

Why should we care? Misappropriation; interference w/connection b/t author and text. Moral rights as a solution? Not clear. Peter Doig, well-known artist: denied creating a particular work; the owner claimed it was Doig’s work, and had to defend his identity to assert it wasn’t his work. The plaintiff pointed to style indicia of it being his. (Doig won; it was another guy named Peter Doige.) Privacy may have untapped potential for dealing with these misattributions, especially false light. Even true facts can be actionable; defamation is not required for intrusion on privacy/public disclosure of embarrassing facts. (Comes out of case involving nasty divorce where one party posted videos involving the kids on YT.)

Victoria Schwartz, Joint Privacy

[picking up kid; interesting project using ideas of joint authorship as a lens on issues of privacy that arise when people create information (or even just have information, as w/DNA) together and so sharing one’s own information or life story necessarily implicates others.]

Uri Y. Hacohen, User-Generated Data Network Effects

Network effects are key to current tech companies, whether via reviews, userbase, or otherwise. AI increases the power of network effects—making Google’s predictive results better. Many problems, including price discrimination, manipulation. Possible changes: changing liability regime so that they are more liable depending on what they know (e.g. that the user is a child); simple payment requirements to pay for harm [Pigouvian tax, I think]; management rights for users. Does not want to break up (at least as first solution) b/c that just means more entities with the data creating privacy and security problems, but we may not have a choice.

Felix Wu: Amazon, Google, and FB actually had different core businesses and if they’re all gulping this data then we have oligopoly, not monopolies. Even in a world of perfect competition, wouldn’t they be competing for who can manipulate best?

A: for FB, more data = more problems; right now they aren’t sharing as much as they might.

Wu: Some would say that innovation from the scale isn’t worth it; give up the marginal benefits and limit the size.

Wednesday, February 24, 2021

Global Advertising Lawyers Alliance (GALA) Webinar – “Hot Topics in Advertising Law in North America”

I always enjoy these and recommend the free GALA webinars to those interested in advertising law; I joined in progress due to some technical difficulties on my end.

Joseph Lewczak: FTC v. Teami ($15 million settlement, all but $1 million suspended), where there were other bad things like fighting cancer claims and also nondisclosure by influencers like Cardi B. FTC does not want disclosure below the “more” expansion link, if any; it has to be above so anyone will see it even if they don’t seek out more info.

Kelly Harris: In Canada, Competition Bureau brought enforcement action against FB for misleading privacy representations even though it’s a free service. New bill: regulating online programmers like Netflix, though UGC will be excluded (but might be included if commissioned for or developed by the service). Regulator will impose “conditions of service,” though not quite traditional broadcaster licensing.

Jose Antonio Arochi: Mexico doesn’t have specific regulations. Twitter reviews for Sephora where consumers were demanding money for allegedly expired products and saying they couldn’t get refunds from Sephora. Apparently the consumer protection agency called Sephora to clarify the situation—there was no litigation.

Melissa Steinman: Shop Safe Act introduced trying to stop fakes in ecommerce; didn’t go through (attempt to create contributory liability for platforms) but will be reintroduced, so keep an eye out. Theme for this year: platform liability.

Reviews: Vitamins Online v. Heartwise: Manipulation of reviews actionable under Lanham Act, including manipulating “helpful” votes and giving people free stuff for positive reviews.

Maryland: First ever digital advertising tax, on gross receipts. Vetoed by governor but overridden; lawsuit brought by platforms like FB and Google—wait and see. NY, DC, WA are considering similar taxes so it’s a trend to watch.

Harris: In Canada, the provinces regulate consumer agreements online. Certain procedural requirements: must be able to see & save a copy of the disclosures/contract w/in 15 days, via email receipt for example. Certain practices are limited: unilateral changes of material elements like price. Failure to comply: right to rescind; damages, including on class basis and class actions in Canada are rising, especially Quebec and B.C. Competition Bureau is very interested in digital economy. Drip pricing (adding fees after initial disclosure) is an area of significant interest: StubHub, TicketMaster, car rental companies that charge “environmental” fees. Substantiation of “regular” price claims is also a big issue.

Arochi: Again, Mexico has nothing specific to online shopping, just consumer protection and COFEPRIS (Mexican FDA), which does regulate advertising. During the pandemic, it suspended 34,000 webpages of people trying to publicize health-related products or products making health claims. Permits for certain products are required in advance: health related, supplements, food/beverage, pesticides, alcohol/tobacco. Also new disclosures for high-fat etc. foods with big labels on the front of the package.

Jeff Greenbaum: Don’t assume that online disclosures are clear and conspicuous, even if “everyone is using them.”

Harris: Canada: disclosures can clarify but can’t correct a misleading main claim or contradict the main claim. One click away is likely low risk of regulatory enforcement, but ensure disclosures travel across platforms and ensure consistency in disclosures in multiple places and/or media: that was at issue in recent self-regulatory competitor challenges. This is an issue of coordinating teams that might be in charge of different media.

Arochi: Mexico enforcement is more likely to target different products that become a problem. There aren’t as many cases day by day and that lack of emphasis from the authorities affects behavior.

Lewczak: consider that disclosures need to be fit to medium and consumer’s consumption thereof: disclosure in YT video description may not be enough. Not a lot of US action on sweepstakes. Covid concerns: don’t be tone deaf; giving away cruises, event tickets, and other in person prizes can be risky and generate bad PR. Don’t require physical presence for entry or award of prizes. Do your rules have a force majeure type limit that allows covid-related flexibility? Avoid unintended sweepstakes with attempted charitable giveaways to doctors, restaurant workers, etc.; may require disclosures and charitable registration: Draper James teachers giveaway. Loot boxes are on the horizon.

Harris: Winner of contest must complete test of skill; cases vary on what’s enough, but a 4-part, multi-function math question with a time limit is typical. You can do it on entry or just for the winner; depends on structure of promotion. Also: no forcing purchase to enter, but can say, “submit an original essay.” Quebec: registration requirements (doesn’t apply below a certain monetary threshold, and to non-advertising promotions like a contest for employees) + French language availability. A minimum disclosure is required in all advertising, adequate and fair disclosure: number and value of prizes and other material facts—entry dates, eligibility requirements, geog. distribution of prizes if any. Can be difficult depending on how contest structured.

Arochi: Interior Ministry and Consumer Protection Agency require permits for some sweepstakes/contests. TV contest for example requires a specific agency permit. Chance-based contests may not need a permit. Division of authority may not be clear so may have to ask both agencies and then pick one to apply to.

Steinman: Lots of US action on country of origin. NPRM, July 2020 on Made in USA claims, codifying current enforcement policy and adding ability to seek civil penalties: need all or virtually all of manufacture, or component parts/ingredients, to make Made in USA and related claims. This can include use of flags, eagles. But can use qualifiers like “made in USA of domestic and foreign components.” “Designed in US” can also work. California has a 5% foreign content requirement. FTC also challenged “Danish cookies” that weren’t made in Denmark.

FTC v. Williams-Sonoma: $1 million penalty and prohibition on unqualified US origin claims without being able to substantiate them. FTC v. Chemence, Feb. 2021: $1.2 million for violation of existing order, highest monetary judgment ever for Made in USA case. Made in US: final assembly/processing and all significant processing in the US, and all or virtually all ingredients/components are made/sourced in the US. Assembled in US: product is last substantially transformed in the US, its principal assembly takes place in the US, and US assembly operations are substantial.

Harris: Made in Canada standards are similar: last substantial transformation in Canada; at least 51% of total direct costs of producing/manufacturing occurred in Canada, and accompanied with appropriate qualifying statement (e.g. made in Canada with imported parts). Moose Knuckles parka, 2016, lacked qualifying statement (made with Canadian and imported components); settled for $750,000 donation. Product of Canada: like made in Canada, but all or virtually all of the total direct costs (98%) must be Canadian.

Arochi: Mexico has one of the highest numbers of Appellations of Origin; more than 8 processes for obtaining certification for GIs. Hecho en Mexico is a certification; must be (majority) produced in Mexico, not precisely corresponding to AOs or GIs, but permit coming from Mexican government.

Greenbaum: Environmental marketing: Little FTC enforcement but some states have enacted more stringent requirements or made Green Guides into enforceable rules. Mattero v. Costco: class action over Costco’s “environmentally responsible” claims for detergent: were claims sufficiently qualified/were other benefits communicated: court denied motion to dismiss. New administration and revision of Green Guides may be an opportunity for FTC to change its approach.

Harris: Canada is similar; no specific green marketing laws, just Competition Act/provincial statutes. Federal guidance on green claims like recyclable exists, and self-regulatory code/guidance specific to environmental claims. Ongoing consumer class actions regarding pesticide in supposedly “organic” medical cannabis. All 2020 self-regulatory consumer complaints were upheld, including against a joke about benefits of saving water, because water scarcity is a serious issue and implication that product could help was found misleading—humor, puffery defenses rejected. Also home fragrance claimed to have “natural” ingredients—some ingredients were natural, but no evidence that all scent components were. Exaggeration of environmental benefits also were challenged. Grain Farmers of Ontario: depicted farms and farmers under stress, food supply shortages, empty grocery stores: condemned as inappropriate fearmongering.

Arochi: Also enforced by consumer protection agency (PROFECO) and COFEPRIS. CONAR is the self-regulatory body.

Taste and cultural concerns:

Lewczak: BLM and #MeToo—but not clear that any regulator or self-regulator will do anything. Major TV networks have their own guidelines against violence, antisocial behavior, oversexualization, stereotyping. Third party organizations also complain: PETA for animals, MADD for alcohol, other rights groups. Frida Mom’s ads showing reality of postpartum recovery rejected from 2020 Oscars for being too graphic—at least get some PR benefit from that.

Harris: significant Canadian regional differences. Claims likely understood more literally by regulators. Supreme Court of Canada uses the “credulous, hurried and inexperienced” standard. Can’t demean, denigrate, disparage: one complaint can bring you before Ad Standards. Canadianisms to watch out for: mostly metric except for height and weight of people; Celsius for weather. French exists outside Quebec. Spelling is different: colour, behaviour, honour, centre, etc.

Arochi: Spanish is the official language. Regional differences are significant; a federation with 31 states and Mexico City. 10th most populated country in world, most Spanish speakers. Measurements are always metric/Celsius for weather. Can start claims before consumer protection agency without disclosing identity, which allows competitors to bring claims strategically.

Covid enforcement

Steinman: FTC recorded more than 130,000 complaints in first half of 2020; issued more than 300 warning letters with 95% compliance rate; has brought some cases against covid treatments. Even Purell received a warning letter. Also price gouging cases. Quality King raised prices for Clorox etc. several times and was forced to disgorge profits + penalty; 3M has also been active against mask resellers (or counterfeiters). Privacy is also a hot topic: CCPA in California is now effective [or as Eric Goldman might say, it’s in effect]. First class action under this has been filed, against Ring (plaintiffs include people who discovered they’d been hacked when someone spoke to their daughter through the device).

Harris: Canada is seeing new rights, Consumer Privacy Protection Act—against automated decisionmaking, deidentified data; data portability/erasure; Quebec is also updating its regime.

Arochi: New food labeling law in Mexico, against use of cartoons on foods with excess fat etc. Black stamps on products that qualify; also new guidelines on medical marijuana.

Wednesday, February 17, 2021

IP writing competition for law students

 From NYIPLA: The New York Intellectual Property Law Association (NYIPLA) is currently accepting submissions for the Hon. William C. Conner Intellectual Property Law Writing Competition. We would highly appreciate being able to work collaboratively with Harvard to offer this opportunity to your law students.  The award information is listed below and further information can be found on our website: https://www.nyipla.org/nyipla/ConnerWritingAwards.asp.

Award Name

Hon. William C. Conner Writing Competition

Award Provided by

New York Intellectual Property Law Association (NYIPLA)

Deadline

Sunday, February 28, 2021

Number of Awards

One (1) first place award in the amount of $1,500 and one (1) runner-up award in the amount of $1,000.

Provider Website URL

www.nyipla.org

About NYIPLA

The New York Intellectual Property Law Association serves as a vehicle to promote the development and administration of intellectual property interests. NYIPLA strives to educate the public and members of the bar in this particular field and continually works with foreign associations to harmonize the substance and interpretation of international conventions for the protection of intellectual property. Today, NYIPLA's membership exceeds 1,500 intellectual property attorneys practicing throughout the United States and abroad. The Association has a combined total of twenty-four active Committees and Delegates, whose scope covers all aspects of intellectual property law and practice and related topics, including alternative dispute resolution, legislative oversight and amicus briefs, meetings and forums, and continuing legal education.

About the Award

The Hon. William C. Conner Writing Competition was established to recognize exceptionally well-written papers submitted by law students; the awards are presented each year at the Annual Meeting and Awards Dinner. The competition is open to students enrolled in a J.D. or LL.M. program (day or evening). The subject matter must be directed to one of the traditional subject areas of intellectual property, i.e., patents, trademarks, copyrights, trade secrets, unfair trade practices, antitrust, and data security/privacy issues. Entries must be submitted electronically by Sunday, February 28, 2021, to Richard Brown, [email protected].

For Eligibility and Submission Requirements Visit

https://www.nyipla.org/nyipla/ConnerWritingAwards.asp

Contact

Lea Tejada

E-mail Address

[email protected]

Contact Phone Number

(201) 461-6603

Fax Number

(201) 461-6635

Mailing Address

2125 Center Avenue, Suite 406, Fort Lee, New Jersey 07024