Friday, June 30, 2023

Transatlantic Dialogue Workshop, Institute for Information Law (IViR), Amsterdam Law School Part 5: Beyond the DSA

Chair: João Quintais

Samuelson: Joel Reidenberg's Lex Informatica is a foundational text worth revisiting. It riffs on the concept of the law merchant: people engaged in inter-area commerce made up sales law through their practices, and those informal rules became law. He was thinking of Lex Mercatoria as a metaphor for Lex Informatica, where we similarly need to think about new tools. The Commission has tried to invent these new tools.

Proposed AI Act disclosure of data—if you don't want us to cough up every URL on the internet, what do you want? "We used Common Crawl"? What is the purpose for which disclosure is sought? Whether you want a map the size of the territory depends on the goal—is it collective licensing? [Would that even help get the money to the right people? I guess that's rarely the big concern of people demanding collective licensing.]

Eleonora Rosati: EU Parliament wants to get AI Act to finish line in 2023. Goal: framework for trustworthy AI. Continues on lines of transparency/disclosure. But also can’t exist w/o thinking of other frameworks; shows how fragmentary EU law is. Consider: deepfakes and training data. Original EC proposal provided training material disclosure, but didn’t clarify what permission was needed (if any). Now refers to “without prejudice to applicable © rules.” No mention of whether permission is required for deepfakes.

Justin Hughes: you can have deepfakes about floods and tornadoes, not just about people. In an effort to address free expression they've also added unnecessary bells and whistles. Current proposal: deepfakes are defined as things that falsely appear to be authentic or truthful, which requires disclosure, except if they're evidently satirical, artistic, or fictional creations (which seems like it wouldn't falsely appear authentic or truthful). A "sufficiently detailed" summary of the use of training data protected by © is required, but as/more interesting is the requirement that generative AI have adequate safeguards against generation of content in breach of EU law (which means ©). [I assume they also mean CSAM and other things to be named later.] Art. 27 of the DSA is recommender system transparency; are recommender systems high-risk AI systems w/in the meaning of the AI Act? Yes in Parliament's version. That means direct overlap in rules. His view: some recommender systems should be prohibited AI, if social media use is addictive.

Sebastian Schwemer: Understand where it comes from—new legislative framework for product regulation. Talk to those who followed the broader process.

Sean O’Connor: training and outputs may need different safeguards. Each has different relationships to ©.

Eric Goldman: dictating how content is published is the fundamental framework of the Act—we're going for the idea that gov't will dictate that, which he strongly dislikes.

Quintais: they realized that they hadn’t clearly covered generative AI and panicked and started introducing new rules.

Daphne Keller: Such a mistake to add generative AI—the policy questions around AI for criminal sentencing, whether you get a loan, etc. are so important and deserve attention—would be better to deal with content generation/speech separately. Use in content moderation—deciding what to take down—v. using in recommendation—do you have to guard against addiction in recommendation?

Quintais: Drafters didn’t talk to the people doing the DSA or the overlaps. Depending on what happens next, there might be real overlap.

Matthias Leistner: if you take measures to avoid substantial similarity in the models, you might stave off fundamental challenges to © principles that show up only in case law—no protection for ideas or style, though protection for characters. Taking measures to limit the models might be a good strategy to deal with the long-term danger of loss of those principles. Use of existing works to train is a separate issue.

Quintais: for the first time, have heard © lawyers say there’s a need to protect style—not a good development.

Hughes: doesn’t think that AI output is speech.

Goldman: does. Collect information, organize it, disseminate it. AI does those things, and those are what make a publication.

Hughes: expression is by humans.

Goldman: makes a different choice.

Keller: readers have a right to read what they’re interested in.

Niva Elkin-Koren: when I prompt ChatGPT and interact w/it, that is speech.

Hughes: if an algorithm suggests content written by human, there’s still human participation in the underlying creation. Recommendation automation itself shouldn’t be speech b/c it’s not human.

Elkin-Koren: ranking search results should be considered speech b/c it reflects an opinion about how to rank information implemented by code.

Samuelson: explainability as a different factor—if it’s not possible to explain this stuff, generative AI may not have much of a future in Europe. [Of course “the sorting principle is stuff I like” is not really explainable either, even if there is in fact a deterministic physical source in my brain. But scale may make a difference.] “As explainable as possible” might work.

Keep in mind that standard-setting also favors power: who can afford to go to all the meetings and participate throughout? Delegating public authority to private entities. Different regulatory structures for different entities—when telcos became broadband providers, regulators had to decide where they would be regulated, which is similar to the Qs raised by definitions of covered AI—regulatory arbitrage.

Senftleben: use of collecting societies/levies can be a better regulatory answer than a cascade of opt-out and then a transparency rule to control whether opt-out is honored and then litigation on whether it’s sufficiently explained. If we’re afraid we might lose freedom of style/concepts, telling © owners to accept a right of remuneration is an option.

Matthias Leistner: don’t give in too soon—remuneration already frames this as something requiring compensation if not control, but that’s not obvious. Note that Japan just enacted a very strong right for use for machine learning, and the anime/comics industries didn’t object to it apparently.

Van Hoboken: May need new speech doctrines for, e.g., incorporating generative AI into political speech.

Schwemer: we might want special access to data for purposes of debiasing AI as a uniquely good justification for, e.g., copying for training.

Bernt Hugenholtz: these companies want to move forward, and if they can get certainty by paying off rightsholders they will do so; probably not collective licensing although the societies would like that; they don’t have mandates. Instead firms will get rid of uncertainty through cutting big private deals.

Senftleben: we can give a collective licensing mandate if we choose—the only way to get money to individuals.

Hugenholtz: but levy systems take forever to introduce too. We’ve never had a levy regulation.

Elkin-Koren: Google already has an enormous advantage over newcomers; making everyone who enters pay a levy would kill competition forever. [I also wonder about what a levy would mean for all the individual projects that use additional datasets with existing models to refine them.]

Senftleben: his idea is to put a levy on the output.

Samuelson: but they’re demanding control of the input in the US, not the output (unless it is infringing in the conventional sense).

Frosio: In US it is obvious that training the machine is fair use; not the case in Europe. What do we do? [Some discussion of how obvious this was; consensus is that’s the way to bet although the output will still be subject to © scrutiny for substantial similarity.]

Some discussion of German case holding that, where full copies of books were in US, Germany only had authority over snippets shown in search, and those were de minimis. Frosio: French decision held Google Books violated copyright/quotation right didn’t apply. At some point some countries are going to find this infringing, and there will be a divide in the capacity to develop the tech.

Keller: Realpolitik: if platforms can be compelled to carry disinformation and hate speech, the platforms' main defense is that they have First Amendment rights to set editorial policy through content moderation and through ranking—this was relatively uncontroversial (though it no longer is!). Eugene Volokh thinks that ranking algorithms are more speechy than content moderation b/c the former are written by engineers and bake in value judgments; she thinks the opposite. There's caselaw for both, but Volokh's version has been embraced by conservatives.

Leistner: why a levy on the output if it's distant enough not to infringe a protected work? And if you have a levy on the input, why? Results don't reflect inputs/the model itself doesn't contain the inputs, so people will just train the models outside Europe. So you'd need to attach levies to output, but that's just disconnected from ©: an entirely new basis.

Dusollier: If the issue is market harm from competition w/an author's style, a levy is not compensation for that—it harms specific people, and if it is actionable it should be banned, not subjected to a levy.

Elkin-Koren: if generative models destroy the market for human creativity, does that mean we pay a levy for a few years and then © ceases to exist? What is the vision here?

Frosio: another question is who is liable: if we focus on output, liability should be on end users—end users are the ones who instruct model to come up w/something substantially similar and publish the output.

Samuelson: a global levy is not feasible; also, most of the works on which the models have been trained are not from the big © owners or even from small commercial entities—it’s from bloggers/people on Reddit/etc—how would you even get money to them? [I mean, I’m easy to find 😊]

Transatlantic Dialogue Workshop, Institute for Information Law (IViR), Amsterdam Law School Part 4: Industry Impact and Industry Relationships

Chair: Daphne Keller: EU heavy compliance obligations + a bunch of other laws coming into effect right as platforms are laying off people who know how to do that—a bumpy road.

Impulse Statement: Rachel Griffin: Technocratic approach to regulation; we associate auditing with clear success metrics (did you make a lot of $ or not) versus these very political issues with no yes/no answers—what does it mean to be successful? Rhetoric of auditing, but even in finance auditing is not that objective; the problems multiply when applied to political speech. "Rituals of verification" substitute for results in lending legitimacy. What goal is the rhetoric and framework of auditing actually serving? "Others doing the Commission's job for it"? Maybe it should be to provide the minimal value of accurate information—not just making up numbers. If so, it would be more helpful to have transparency reports audited rather than risk assessments.

Are we focusing too much on auditing and too little on platforms’ internal risk assessments, which are a precondition to the audits? Realistically, any audit report will take what companies have been doing as their point of departure and give them feedback on improving.

Risk of regulatory capture by corporations. Wants to push back against civil society involvement as a solution—up to a point, but that’s not automatic or easy and has its own limitations. Civil society doesn’t represent everyone equally; it’s prone to corporate capture too.

Impulse Statement: Eric Goldman: Hypotheses about what he thinks will happen, supposed to be provocative but also sincere: Homogenization of services’ practices: companies will watch each other and figure out what satisfies the necessary audiences. Ossification of content moderation processes once blessed: it won’t change further. Cut as many corners as they can: much depends on how regulators push back on that—seen that with GDPR and will see that here given the scope for judgment calls. In the US we would know that doing the minimum would suffice, but the expectation here is that would “prompt a dialogue,” though what happens then is unclear. Many of these provisions will be outdated soon or w/in years—fighting the last war. Will see the weaponization of options—everything we’re doing is put through a partisan filter, and the longer we’re in denial about that the worse things will ultimately get. Raise the costs of the industry, rewarding big and punishing small, so we’ll see a shrinking number of players who offer UGC as a matter of economics. Switch away from UGC to professionally produced content, w/significant distributional effects.

Frosio: We already saw a number of major newspapers eliminating comment sites after liability to monitor comments sections was imposed on them.

Senftleben: A system blessed by audit will continue: is that ok? If European legislator wanted to open spaces for startups, the best thing you can do is make established broad services as boring as they can be to make space for niche services. [That assumes that the system will continue to function as content evolves, which does not track past experience.]

Comment: platform w/UGC component and walled garden component could easily make the conscious decision to grow only the latter—that’s why Spotify is so cautious with podcasts.

Discussion about what it means for content to be UGC—at the request of the recipient of the service. Monetization can still be UGC but some forms of monetization may take it out of UGC when it’s done at the request of the service itself.

Platforms likely to define risk assessment by looking at the minimum they need to do under the audit, so there are feedback loops.

Elkin-Koren: there will also be pressure to move to sites that are currently unregulated: WhatsApp viral distribution has been used in many countries, and it’s under the radar of the DSA. We should also keep an eye out for that. Generative AI may also change this as people don’t access the UGC directly. New paths of access and consumption require new thinking.

Schwemer: platforms/hosting services host content and users provide it. Netflix isn’t covered by the DSA at all—licensed content provided by the producer. Podcasts=interesting case.

[If you need an invitation to provide content, how do you count that? Radio over the internet where they select specific shows to stream, Bluesky? Is the answer how much prescreening goes into the invitation?] Answer: may need to be litigated. Key definition: Whether hosting is at the request of the user or of the service. May depend on targeting of users as well. [I can see how my pitch of content to Netflix doesn’t depend on me having my own Netflix account/being a Netflix “user,” but I wonder how that generalizes.]

Cable started out as super-open infrastructure—you could put your own content into Amsterdam cable from your own rooftop. Then the economics of consolidation took over. The same is happening on YouTube—the line between UGC and "professional" content is very blurry. Are they asking YT or is YT requesting them to provide content? And requiring licensing from YT providers, including individual users, blurs this further.

Keller: advertisers will also say they don’t want their content next to spam, porn, etc. That has influence over policies, usually restrictively. YT agreed not to consider fair use in content takedowns from a major movie studio—a concession that affected other users.

Samuelson: We have a more direct interest in researcher access than we have to industry reactions: in public they will say “we are doing all we can to comply,” so you have to read the public performance. The private face looks quite different—a lot of hypocrisy, understandably, because you don’t want to appear contemptuous of something even though it’s not well thought through and you don’t think you can really comply.

Keller: then other countries look and say “oh, we can impose this too because they can comply.”

Samuelson: don’t take at face value statements by the big companies. That cynicism is itself a concern for regulators. Another thing under the hood: how are the platforms redesigning their technologies and services to minimize compliance obligations? The easy one to see is eliminating comment sections. We won’t see the contracts b/t platforms and other entities, which is an issue, bypassing regulatory control.

Dusollier: sanitization rhetoric is very different from © licensing. Don't invest too much copyright thinking into this space.

Matthias Leistner: there is at least one element w/a clear © nexus: data related issues. Inconceivable to subsume fundamental issues like creative freedom behind copyright; this is a systemic risk. If the duties also related to the practice of dealing with data from consumers, couldn’t you at least control for systemic risks in licensing data, e.g., homogenization of content? Or would that carry the idea of systemic risk too far? Journalism that tells people only what they want to hear is a known risk; so are there uses of data which you must not make?

RT: Casey Newton just wrote about Meta’s new system cards (Meta’s own post on this here):

Written to be accessible to most readers, the cards explain how Meta sources photos and videos to show you, names some of the signals it uses to make predictions, and describes how it ranks posts in the feed from there.

… The idea is to give individual users the sense that they are the ones shaping their experiences on these apps, creating their feeds indirectly by what they like, share, and comment on. If it works, it might reduce the anxiety people have about Meta's role in shaping their feeds.

… Reading the card for Instagram’s feed, for example, the signals Meta takes into account when deciding what to show you include “How likely you are to spend more than 15 seconds in this session,” “How long you are predicted to spend viewing the next two posts that appear after the one you are currently viewing,” and “How long you are predicted to spend viewing content in your feed below what is displayed in the top position.”

Note what’s not here: demographics. How did it assess your likelihood of spending more than 15 seconds/watching the next two posts, etc? And did it assess others’ likelihoods differently depending on categories that humans think are relevant, like race and political orientation? By contrast, one source Meta cited in support of these “model cards” was an article that explicitly called for model cards about demographics. (My favorite bit from this Meta page: “the system applies additional rules to ensure your feed contains a wide variety of posts, and one type of content does not dominate. For instance, we’ve created a rule to show no more than three posts in a row from the same account. These rules are tested to make sure that they positively impact our users by providing diverse content that aligns with their interests.” Diversity as completely empty shell!) This is a really clear example of how they’re trying to get ahead of the regulators and shape what needs to be disclosed etc. in ways that are not actually that helpful.
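The one concrete mechanism Meta's page does disclose is the re-ranking rule quoted above ("no more than three posts in a row from the same account"). A minimal sketch of how engagement-prediction signals plus that rule could produce a feed—the signal weights and field names here are invented for illustration; Meta's actual model and weights are not public:

```python
# Hypothetical sketch: score posts by predicted-engagement signals
# (names paraphrase the system card; weights are made up), then enforce
# the quoted "no more than three posts in a row from the same account" rule.

def rank_feed(posts, max_run=3):
    """posts: list of dicts with 'account' and per-signal predicted probabilities."""
    def score(p):
        # Invented linear combination of the three signals quoted in the card.
        return (0.5 * p["p_view_15s"]          # P(spend >15s in this session)
                + 0.3 * p["p_view_next_two"]   # P(view the next two posts)
                + 0.2 * p["p_view_below"])     # P(view content below top position)

    remaining = sorted(posts, key=score, reverse=True)
    feed = []
    while remaining:
        for i, post in enumerate(remaining):
            run = feed[-max_run:]
            # Skip a post that would make max_run + 1 in a row from one account.
            if len(run) == max_run and all(q["account"] == post["account"] for q in run):
                continue
            feed.append(remaining.pop(i))
            break
        else:
            # Only same-account posts remain; append rather than drop them.
            feed.append(remaining.pop(0))
    return feed
```

What the sketch makes visible is RT's point above: every disclosed signal is an individual engagement prediction, and nothing in this kind of card shows whether the predictions differ systematically across demographic groups.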

Dusollier: how do we deal with two trusted flaggers, one of which is a conservative Catholic group and one an LGBTQ+ rights organization? You can trust them to represent their own positions, but what does that mean?

Keller: they have to be gov’t vetted and they can be kicked out if they submit too many invalid claims—they’re supposed to be flagging genuine violations of the platform’s TOS. But different gov’ts might approve different entities, which will create conflicts. They don’t have to honor flags. But when it goes to litigation, national courts will interpret TOS in light of fundamental rights, which will lead to potential divergence.

Senftleben: We also don’t have trusted flaggers to support content as permissible.

Keller: risk profiles don’t match statuses in system: Wikimedia is a VLOP but not 4chan or 8chan.

Griffin: who’s going to be doing this trusted flagging? It’s not something that scales very well. Assumes that civil society will be sitting there all day. What is the funding model? The answer is obvious in ©, but not elsewhere.

It’s worse than that, since in © you don’t need to be a trusted flagger b/c the © agreements are broader.

Schwemer: risks of rubber-stamping flaggers' flags. But might be able to get more insight from transparency. National differences in Europe could be very powerful in who is designated as a trusted flagger; potential cross-border effects.

Dusollier: entitled flaggers v. trusted flaggers—© owners are entitled to flag their content claims; is that the same as trusted?

DSA was thinking about security agencies/police forces as trusted flaggers—clearly the plan.

Hughes: will law enforcement agencies want to have to publish what they did and what happened, as contemplated for trusted flaggers? Would rather have side agreement w/Meta. Both pro- and anti-gay forces might be able to fundraise to participate in flagging, so maybe it’s a successful mechanism to generate flags. And putting out a report every year is a positive for them to show what funders’ money is funding.

Leistner: concerned about this—modeled on existence of active civil society w/funding—not in many member states, where there is no culture of funding proto-public functions with private $ (US has many nonprofits because it has low taxes and low public provision of goods). These may be pretty strange groups that have active members. Worst-case scenario: Orban finances a trusted flagger that floods the European market with flags that are required to be prioritized, and flaggers can flag across nations.

Hughes: does have to be illegal content.

Griffin: good point that especially many smaller EU states don’t have that kind of civil society: France and Germany are very different from Malta.

Keller: nobody knows how often flaggers accurately identify hate speech, but every current transparency report indicates that complying with more notices = improvement. We don’t know how many accurate notices v. inaccurate there are.

Quintais: It’s worse b/c of broad definition of illegal content. The definition of trusted flagger is about competence and expertise—you can have competence and expertise without sharing values. If LGBTQ+ content is illegal in one country, not clear how to prevent a trusted flagger from receiving priority throughout EU.

Schwemer: There can also be orders to remove, though they have to be territorially limited to what’s necessary to achieve the objective. Those are not voluntary.

Griffin: Using Poland/Hungary as examples is not fully explanatory. France has a lot of Islamophobic rules and isn’t getting the same pushback.

Transatlantic Dialogue Workshop, Institute for Information Law (IViR), Amsterdam Law School Part 3: Algorithms, Liability and Transparency

Chair: Martin Senftleben

Impulse Statement: Sebastian Felix Schwemer

Recommendation systems; transparency is the approach to recommender systems, which intersects with privacy/data protection. How much can we throw recommendation of information and moderation of information into the same bowl? Algorithmic recommendation/moderation: we're interested in platforms, but there's a world beyond platforms where automation is an issue, such as the DNS. Keeping in mind that regulatory focus is platforms and VLOPs in terms of algorithmic moderation.

Transparency is a wicked question: who for and how. Not only rules in Art. 14(4) but also a requirement about terms & conditions: balancing of fundamental rights when they use algorithmic content moderation. Affects decisionmaking. Obligation to report on use of automated means for content moderation—for all intermediary service providers, not just VLOPs, including accuracy and possible errors. Relates to Q of benchmarking/how do we evaluate the quality of this decisionmaking? DSA doesn't have answers to this. Decision quality might look very different across fields: © might have a yes/no answer, but misinformation might be very tricky. Very little information on what kind of human competence is needed.

Impulse Statement: Rebecca Tushnet

Benchmarking: spam detection—interesting that until a political party got interested there was no inquiry into reliability, and still no standard for spam detection quality other than “don’t screen out political fundraising.” Related to the abject quality of real content moderation: it is literally beneath our notice and we have contempt for the people who carry out abject functions.

VLOP definition versus "sites that actually have the problems against which the DSA is directed"—not only fashion sites; nonprofits as a special issue—Wikipedia, Internet Archive, AO3 (which does not recommend anything); compare to DMCA Classic and DMCA Plus, where some large entities have repeated © issues and lots of valid notices and others simply don't—the DMCA is a reasonable system for most of them: Etsy has problems, but not ones that make sense to frame in any way as Instagram's or Parler's.

DSA's separation of overarching patterns from individual decisions is good if maintainable, but doesn't fit easily into the US framework—the Texas and Florida laws are both indications of what politicized targeting looks like and relatively unsurprising in reliance on private claims (though outsized damage awards show the targeting).

Problems with scale: inherent inconsistency. This is usually shorthanded as “we make 100 million decisions a day, so even a tiny error rate means a large absolute number of errors.” But it is more than that: inconsistency and conflicting decisions. We have to accept that—indeed, it will mostly go undetected—but we also have to accept that the existence of conflicting decisions does not mean that either one is wrong. Compare: TM applications—in the US system at least, it is an explicit principle of law that one cannot dispute a registration decision by pointing to others that seem factually similar (or even identical) but went the other way; see also: grading by school teachers.
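The "tiny error rate at scale" shorthand above can be made concrete with back-of-the-envelope arithmetic (both numbers here are illustrative, not measured figures):

```python
# Illustrative arithmetic for the scale point: even a very low error rate
# produces a huge absolute number of wrong moderation decisions.
decisions_per_day = 100_000_000   # "100 million decisions a day" (the shorthand)
error_rate = 0.001                # hypothetical 0.1% error rate
errors_per_day = int(decisions_per_day * error_rate)
print(errors_per_day)             # 100,000 erroneous decisions every single day
```

And, per the point in the text, this count captures only outright errors; it says nothing about the separate problem of mutually inconsistent decisions, neither of which need be "wrong."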

This is related to the DSA mandatory appeals system, which does look like Texas and Florida. One size fits all for YouTube comments and entire accounts; not a great model—the same degree of due process for everything instead of allowing services to focus only on serious disputes like when someone loses an account. Significant concerns: disproportion in the demographics of who appeals moderation, already well known as an issue—men, English speakers. But inconsistency even w/in categories will also be necessary to live with.

Senftleben: Overarching issues—relationship of different things addressed in DSA—moderation and recommendation: are they comparable/pose similar problems? Are there already new problems unaddressed such as generative AI? Then getting public interest/human rights balancing into the system—who is responsible for checking this is done properly at the platform level. Consistency of decisions across countries, cultural backgrounds, appeals. [To be clear: I don’t think it’s just about cultural backgrounds or demographics: two people w/ the same background will also make different decisions on the same facts and that’s not necessarily a wrong. (Also: ought implies can and what I’m arguing is that consistency cannot be achieved at this scale.)]

Goldman: disparate impact can come from many sources, often impossible to tell what they are. Lots of evidence of disparate impact in content moderation whose causation will be disputed. Humans v. Machines: there’s a cost to having humans in the loop: worker wellness. Regulators just don’t value it and that is a problem in balancing costs and benefits.

Daphne Keller: DSA prohibits inconsistency: you have a right of appeal to resolve hard judgment calls and get to consistency. [But: ought implies can; I would think a civil law system is ok with calling each judgment a tub on its own bottom.]

Leistner: in context of European court system, a small PI injunction will stay locally; there are divergent results already b/c it only goes to European courts in rare circumstances. Thus you have inconsistencies based on the same standards.

Hughes: inconsistency at first decision is not the same thing as inconsistency at the appeal level. The TTAB and PTO try to be consistent. [We disagree about this. They certainly try to have rules, but they also don’t hold that they’re required to treat the same facts the same way—there might be undisclosed differences in the facts or a different record and they don’t try to find those differences, just presume they’re there. This is aided by the fact that different TM applications will, by virtue of being different TM applications, have slightly different features from previous applications—which is also true of stuff that gets moderated. The school discipline cases also show that even when you get to the second level the variety of circumstances possible make “consistency” a hopeless ideal—the “impersonate a teacher Finsta” will play out differently at different schools so the facts will always be differentiable.]

Schwemer: "Nondiscriminatory and nonarbitrary," which is the DSA standard, doesn't necessarily require consistency in that strict sense.

Keller: suppose you have a rule: I will remove everything the machine learning model says is nudity, knowing it has a 10% error rate.

Van Hoboken: No—there’s a due process reconsideration requirement.

Keller: but the model will give the same result on reconsideration.

Van Hoboken: still not ok. [I take it because that’s not a fair rule to have?]

Keller: so that is a requirement for hiring people.

Rachel Griffin: Analyzing meaning in context is getting better—so will people start saying it’s ok to leave it to the machine?

Niva Elkin-Koren: We assume that a good decision by a court/oversight body on particular facts will immediately translate into the system, but that’s not true. One reason is that there is a huge gap between the algorithmic system running on everything and the decision of a panel on one removal. Translation gap: so we have to ask whether there is bias as well as compliance/error rates. Agree there’s no evidence that human in the loop improves process, but we could encourage regulators/implementors to enhance the opportunities for making humans more efficient—researchers can help with this. Avoiding humans who participate just for rubber-stamping the system itself.

Séverine Dusollier: Inconsistencies: we know as professors that we aren’t completely consistent in grading; we know that morning differs from afternoon—we fight it but it is human nature. There is something that you can’t completely analogize with machine inconsistency—we might have some randomness in both, but we are also informed by bias/political perspective/etc. The machine will also be inconsistent but perhaps in better ways, but what we have to do is rely on social science studies about how they actually show bias entrenched in machine and human decisions. The consequences are not the same: a decision on online platforms has different impacts. [I agree that we don’t know what machine inconsistency will look like; of course the inputs to the machine come from humans!]

Wikipedia doesn’t make recommendations. Sometimes the answer you get from the community is so sexist and misogynistic that it shows it’s still a system that needs intervention. [Agreed, but my point was that it doesn’t have the “algorithms” that people accuse of hurting democracy/spurring anorexia/etc. because it’s not optimized for ad-display engagement. So the mechanisms for addressing the problems will necessarily be different as will definition of the problems.]

Samuelson: Audits are really important to counteract human tendency towards inaction (and perhaps bias in who appeals), but there are no standards! Financial audits work because we have standards for the kind of activities we should and shouldn’t be doing in accounting. Right now there is an emerging new profession for computing audits; but right now the promise exceeds the capability. We need standards for what the audits should look like and what the profession that does the audits should look like.

Van Hoboken: it’s not very clear how to judge moderation especially at the level at which Art. 14 is drafted. Surface specificity, but application to many different types of services means there’s lots to do. Inconsistency is a problem of focus precisely b/c we want to see diversity in approaches to content moderation. We might want “free speech friendly” services in the mix and “safe for kids” ones. Both are good things. There are also different ways to achieve those results. Media pluralism in DSA can be broadened to say there’s value in pluralism generally: DSA doesn’t tell you how to do things. When we add in the human cost of moderation, we should accept that social media companies can’t do it all.

Senftleben: inconsistency is a pathological perspective; pluralism is a democratic one. [Although I was thinking more about whether a breastfeeding photo shows too much nipple, etc.]

Comment: just because a human is in the loop doesn’t mean they can make an easy decision. Legality of © use in the EU, under 27 different legal regimes, with lots of grey zones in parody/creative uses, is not simple for a human. You’d probably need a comparative © law professor. What kind of human are we talking about and what are the standards they are to apply?

Senftleben: isn’t the human value the intuition we bring? Failures and all?

Schwemer: in online framework, differentiate b/t different stages in decision process. There is no human involvement/judgment required in initial decisions. Ex post only, once there is an appeal. 2018 Commission recommendation was the blueprint for the DSA, but that talked about “oversight” rather than human review. Oversight relates to design/development/operations but “review” is ex post, which is an important difference. Desirability of human involvement in operations of first moderation, not just ex post redress. There’s a huge cost to the humans involved, which DSA overlooks. AI Act actually mentions something about training and competences of humans, but that relates to oversight of design/development, not operations.

Keller: FB conversation in which appeals resulted in reversals less often than random sampling of removal decisions for review. Transparency report: There’s about 50% success for appeals under FB’s nudity/terrorism policies and a lot lower for harassment/bullying. So our knowledge is mixed: seems unlikely that FB is wrong 50% of the time about nudity.

Big audit firms seem reluctant to sign up for content moderation auditing b/c they’re accustomed to working from standards, which don’t exist. They’re legally accountable for “reasonable” results—they’d have to vouch for way more stuff than they’re comfortable vouching for given the lack of existing standards. This is why civil society groups are more invested in being at the table: they need to be there as standards are developed, not just a conversation b/t Deloitte and Meta.

Elkin-Koren: we use the term audit but the free speech tradeoffs here are different than the tradeoffs involved in financial audits. The purpose is not to show compliance with a standard but to provide us with info we need to decide whether the system is biased against particular values or to decide what values it does and should reflect. It has to be understood as a work in progress [in a different way than financial audits].

Keller: It would be awesome if the DSA allows descriptive audits, but it’s not clear to her there’s room for that, or that there’s room for civil society to participate deeply in analyzing the information.

Samuelson: financial audits work through transparency about what GAAP are—both a procedure and a set of substantive rules. Then people in the auditing business know what they have to comply with. If Meta and Deloitte do this, they’ll do it in house and not publicize it. So another issue the Commission will have to grapple with is oversight of how standards are set.

Comment: DSA really does require an audit to assess compliance w/ due diligence obligations including risk mitigation/fundamental rights. The risk mitigation obligations are so fluffy/vague that this might define in practice what mitigation means. There is going to be a huge flavor of “compliance” here. What are you going to do when 10 audits say that Meta complied w/its obligations and there are appeals that show this might not be the case?

Van Hoboken: audit requirement was specifically linked to risk mitigation approach. In some sense: “We want other people to do our job.” Changes the character of the audit. It’s an institution-building move, European standard for algorithmic transparency—developing capacity to do that. Government-funded research can also help.

Frosio: what about lawful but awful content? Center stage in UK debates. Seems that DSA doesn’t want lawful but awful content filtered out automatically—UK Online Safety Bill seems to have retreated from obligation to remove lawful but awful content, substituted by more protections for children/new criminal offenses/more control for users over what they see in social media.

Art. 14: entry point for terms and conditions, which can restrict lawful but awful content; applies not just to illegal content but content prohibited by terms of service—services have to protect fundamental rights. But if it’s lawful but awful, what is the fundamental right at issue and how is there to be balancing with terms & conditions?

Griffin: fundamental rights are not only freedom of expression. EU regulators are concerned w/child safety, so automated moderation of nudity/pornography/self-harm may be necessary to do that effectively. Unlikely that courts/regulators will disapprove of that. Regulatory priorities are going in the opposite direction.

Frosio: Some of that will be legal/illegal. My question is more general: what should we do with lawful but harmful content? Should we think it’s ok to block harmful content although it is lawful? What does that mean about what balancing fundamental rights means? Who’s going to decide what harmful content is? At one point, pro-democratic/revolutionary content was “harmful.” [LGBTQ+ content, and anti-LGBTQ+ content, is a vital example. What does “protecting children” mean?]

Schwemer: if an intermediary service provider does screen lawful but awful content, it is restricted in terms and conditions both substantively and procedurally. What about spam? Spam is not illegal in Europe. That would be a case of moderating awful but not illegal content.

Thursday, June 29, 2023

Transatlantic Dialogue Workshop, Institute for Information Law (IViR), Amsterdam Law School Part 2: Data Access

Impulse Statement: Christophe Geiger: Relevance to © exceptions and limitations—access to © protected work is important for this work. Research organizations have exception in © Directive and also are vital to DSA, so we must look at both. Only Digital Services Coordinator-approved researchers are allowed access, with some limited exceptions similar to fallback provisions in DSM Directive art. 4.

Impulse Statement: Sean Flynn: Data protection can be seen as protecting right to privacy but can interfere with right to research. Need balancing/narrow tailoring. Duty to protect: duty to regulate third parties—protecting both privacy rights and researchers in data held by third parties. Duty to promote right of society to benefit from research—similar to duty to create libraries—use the idea to check if we’re balancing rights correctly, regulating appropriate third parties, creating institutions to implement rights.

Europeans were less generous in concepts of “educational”/ “scientific” research than his US perspective—formal research organizations may be required. Journalists in some key categories: are they involved in scientific research? Consumer organizations?

Senftleben: Subordinated to goals of the DSA—research has to be about systemic risk (or mechanisms used by platforms to control systemic risk), which interferes with the freedom of research. If we want researchers to understand what is going on, you have to open up the data silos anyway. Thus there would have been more than enough reason to include a provision opening up data for research in general—trust the research community to formulate the questions. Not reflected in provision. Para. 12 opens up a bit b/c it goes outside the vetted researcher dynamic, but systemic risk defines what can be done with the data.

Keller: the provision formally sets out a really dumb procedure: the researcher formulates the data request without any contact w/platform, gets approval from authority, then goes to platform, which has to respond in 2 weeks. Unlikely to be a format/type of query that is immediately possible to collect, and the platform can only object on 2 enumerated grounds. So the workaround is to create a more dynamic feedback process so researchers can ask for what platforms can actually give. Hopefully an entity set up to deal w/GDPR issues can also check whether the researcher is asking for the right data/what the parameters should be. Hangs on reference to “independent advisory mechanisms” to prevent the process actually described in the DSA from happening.

Elkin-Koren: Example of studying society, not just digital platforms: studying health related factors not caused by platforms but for which platforms have tons of data. Basic/exploratory research where you don’t know the specifics of data you want or specifics of research question but would benefit from exploring what’s there. The key innovation of the DSA is turning research from a private ordering Q into one of public ordering.

Quintais: you have to be careful about who you invite into your research—if the researcher is from outside the jurisdiction they may have to be excluded from the data.

Leistner: one strategy is to interpret research as broadly as possible; another is to ask whether the exception is exclusive. NetzDG used to have a broader scope; if a member state chooses to keep/provide a new exception for research purposes, maybe it is at liberty to do so—there’s no harmonization for general access to data for research purposes. Maybe that is necessary, and it would have to transcend the various IP rights, including © and trade secrets.

Keller: note that having platforms store data in structures amenable to researchers also makes them more attractive to law enforcement. Plus, researchers are likely to find things that they think are evidence of crimes. National security claims: NATO actually indicated that it wanted to be considered a covered research organization. In the US there’s a very real 1A issue about access, but the Texas/Florida social media cases include a question about transparency mandates—not researcher access like this but not unrelated. Also 4A issues.

Comment: No explicit consideration of IP in grounds for rejection but third-party data leads to the same place.

Van Hoboken: Bringing different parts of civil society together for platform accountability for VLOPs; data access is the way to bring in researchers on these risks/mitigation measures. If this provision didn’t exist, you’d have VLOPs doing risk audits/mitigation measures but no way to compare. Some requests will be refused if the platforms say “this isn’t really a risk.” Platforms may also have incentives to deny that something is a mitigation measure to avoid research access. Mid-term value—won’t work fast and maybe will ultimately be defeated.

Goldman: What are the Internet Observatory’s experiences w/benefits & threats from House Republicans?

Keller: serious internet researchers are among the many academic researchers in the US targeted by various far-right people, including members of Congress and journalists with good relations w/Elon Musk, as a Democratic elite censorship apparatus: allegedly, by identifying specific tweets as disinformation, they contributed to suppression of speech in some kind of collusion w/gov’t actors. About 20 lawsuits; [Goldman: subpoenas—information in researchers’ hands is being weaponized—consider this as a warning for people here. His take: they’re trying to harm the research process.] Yes, they’re trying to deter such research and punish the people who already did it, including students’ information, when students have already had their parents’ homes targeted. Politicians are threatening academic speech b/c, they say, they’re worried about gov’t suppressing speech.

Goldman: consider the next steps; if you have this information, who will want it from you and what will they do with it? A threat vector for everyone doing the work.

Keller: relates to IP too—today’s academic researcher is tomorrow’s employee of your competitor or of the gov’t; researchers are not pure and nonoverlapping w/other categories.

Elkin-Koren: is the data more secure when held by the platform, though? Can subpoena the platform as well as the university.

Goldman: you would take this risk into account in your research design, though.

Van Hoboken: At the point this is happening, you have bigger democratic problems; in Europe we are trying to avoid getting there and promote research that has a broader impact. But it’s true there are real safety and politicization issues around what questions you ask.

Goldman: bad faith interpretation of research: the made up debate over

RT: Question spurred by a paper I just read: is the putative “value gap” in © licensing on UGC platforms a systemic risk? Is Content ID a mitigation measure?

[various] Yes and no answers. One: © infringement is illegal content, so you could fit it in somewhere, but to create a problem, it would have to go beyond the legal obligations of Art. 17 b/c there’s already a specific legal obligation.

Keller: don’t you need to do the research to figure out if there’s a problem?

Yes, to study effects of content moderation you need access; can get data with appropriate questions. Could argue it’s discriminatory against independent creators, or that there is overfiltering which there isn’t supposed to be. But that’s not regulated by Art. 17.

Catch-22—you might have to first establish that Content ID is noncompliant before you can get access.

Frosio: you might need the data to test whether there is overblocking. [Which is interesting—what about big © owners who say that it’s not good enough & there’s too much underblocking? Seems like they’d have the same argument in reverse.]

Would need a very tailored argument.

Quintais Follow-up: had conversations with Meta—asked for data to assess whether there was overblocking and their response was “it’s a trade secret.”

Samuelson: Art. 40 process assumes a certain procedure for getting access. One question is can you talk to the platforms first despite the enumerated process. Some people will probably seek access w/o knowing if the data exists. There’s an obligation to at least talk to the approved researchers. But the happy story isn’t the only story: platforms could facilitate good-for-them research.

A: the requirements, if taken seriously, can guard against that—have to be a real academic in some way to be a vetted researcher; reveal funding; not have a commercial interest; underlying concept: the funder can’t have preferred access to the results. Platforms can already fund research if they want to.

Flynn: Ideological think tanks?

A: probably won’t qualify under DSA rules.

Samuelson: but the overseers of this access won’t be able to assess whether the research is well-designed, will they?

A: that’s why there’s an in-between body that can make recommendations. They propose to provide expertise/advice.

Leistner: Art. 40 comes with a price: concentration of power in Commission, that is the executive and not even the legislature. Issues might arise where we are as scared of the Commission as US folks are of Congress at the moment. That doesn’t mean Art. 40 is bad, but there are no transparency duties on the Commission about what they have done! How the Commission fulfills this powerful role, and what checks and balances might be needed on it, needs to be addressed.

Paddy Leerssen: Comment period: US was the #1 country of response b/c US universities are very interested in access. Scraping issues: access to publicly accessible data/noninterference obligations. How far that goes (overriding contracts, © claims, TPMs) is unclear. Also unclear: who will enforce it.

Conflict with open science/reproducibility/access to data underlying research. Apparent compromise: people who want to replicate will also have to go through the data request process.

Leistner: but best journals require access to data, and giving qualified critics access to that underlying data—your agreement with Nature will say so.

Transatlantic Dialogue Workshop, Institute for Information Law (IViR), Amsterdam Law School Part 1: Overarching Questions

My apologies, but I’m extremely jetlagged and will not attribute well or capture a lot of nuance.

Chair: João Pedro Quintais

Impulse Statement: Niva Elkin-Koren: Déjà vu from 1990s: radical technology change, but the world is different and tech moves in a different direction. Polarization/globalization as a response to isolation of Russia and China. Investment in R&D is not in distributive, generative things that are open to the public but in seeking private domination. Conflicts b/t types of regulation where lots of changes are happening at the same time. Copyright as an example: requiring disclosure of datasets used for training by the initiators of the model doesn’t solve any of the actual problems of how users are using the model by adding new inputs to train it further. Learning from other things regulated in digital ecosystem: Database Directive, GDPR. GDPR has become a gold standard but how much do we know about whether it’s enforced or whether it makes any difference in people’s actual level of privacy? It definitely created a regulatory burden that affected competition, but what became of the big promises? US pushes to provide alternative standards—cross-border privacy regulation as an alternative.

Impulse Statement: Matthias Leistner: You didn’t know generative AI existed and so your regulations don’t cover it. ECJ is getting first action on application of DSA to Zalando’s classification as a VLOP. I’ve never heard of Zalando but it is an online fashion platform; systemic risks, even if they could be identified, are very different for Amazon/retail than for Facebook. Litigation indicates: it isn’t more efficient than standard competition or sector-specific regulation; it just reflects that there wasn’t enough information about what they were regulating.

Sector-specific approach raises fundamental interface issues—DSA sits uncomfortably with GDPR and proposed AI Act. Aggravated if, beyond public law regulation which allows smoothing of inner inconsistencies by nonenforcement, this becomes basis for private law liability. Martin Husovec would say it’s definitely not, but under © liability cases, the CJEU would ask whether a diligent operator has followed all necessary precautions and duties—will be tempting to say that failure to comply w/DSA=© liability. And outside the large platforms we’d really have a problem.

Comment: political pressure to regulate leads to focus on what’s feasible. In an ideal picture, what are the fundamental first principles that are distinct from current platform issues? Internet is multilayer, with infrastructure/nontraditional hosting actors. Can we agree on human involvement in automation processes? Redress mechanisms? Piecemeal approach leads to overlaps and national regime conflicts. Where to focus to fix?

Martin Senftleben: The more complex the system gets, the lighter and less detailed the framework should be—open-ended notice and takedown would be better than DSA. But the DSA offers some room b/c they’ve overdone it so much. There’s so much complexity and inconsistency that we could use that to say there’s no clear answer, so to harmonize the legislation, there is space for us as academics to make sense of it and keep it up to date. DSA is based on three categories of hosting; search engines entered at the last moment, and the DSA underperforms as to entities that do hosting, content generation, and search. This type of content provision is growing enormously, as w/Microsoft’s use of AI. Already outdated.

RT: Jennifer Pahlka’s Recoding America (highly recommended!) documents regulatory failure from a waterfall approach where detail keeps getting added at each stage. Suggestions become requirements and you end up with procedures that require outdated or counterproductive elements because they’re in the regs. Federal/state/local overlaps also affect this. What if anything is different in Europe?

Daphne Keller: There’s a real difference b/t a culture that has real trust in regulators to do things with vague language and one that doesn’t. Europeans don’t think the regulators will be unreasonable. [Tort-based culture can’t be the full explanation because Pahlka documents these problems in places like military contracting where there’s no tort potential.] Data access is an example of openness where researchers are hoping that the process will look very different from what appears to be described in the legislation. #1 ask of civil society: keep us involved in the process; no particular ask other than role in interpretation. Chinese legal culture: The action isn’t in legislation; the action is in administration afterwards. The DSA is more like that. [Is China a culture with real trust in regulators?]

Quintais: DSA gives regulators more power than GDPR did.

Joris van Hoboken: Intermediary liability part of DSA is well-known and there’s a lot of consistency with prior practice—DSA adds a few elements. Separation between liability provisions and all the other stuff. Senftleben could be right about effects on liability of duties, but from a regulatory perspective there’s a separation.

Eric Goldman: what’s our definition of what would qualify as success or failure of the DSA? Might be broken down by topic. Who’s going to hold the powers that be accountable for whether or not this accomplishes the goals that they claimed it would/we think it should?

Bernt Hugenholtz: complexity reflects European tradition of civil law in the books, and other reasons, many already pointed out—desire to deal w/urgent tech developments, etc. But also, this is a regulation—an act—a directly binding instrument which is very different from what we grew up with (directives, which are instructions to member states to do certain things in harmonized fashion). Those could be vaguer/more concise b/c more was left to member states to implement/fill in gaps/national law could continue in the gaps. Regulation requires more specificity b/c you’re the regulator for the entire union. [In the US, we’d talk about preemption here.] DSA might be too early for AI, but on the other hand regulation is always too early in this field, since we never know where the field is going.

Matthias Leistner: Don’t generalize too much—some parts of DSA might be too early; revisions to Ecommerce directive might be too late. The more open the standards are, the less you need to worry about being too early.

US/EU differences: There are always internal European and Union-Member States issues. It is remarkably different in different areas at the European level. GDPR leaves it to member states to specify; DMA/DSA are comprehensive but tend to further centralize, which might have certain advantages (efficiency) and disadvantages (flexibility/power concentrated in Brussels). The Data Act follows a different approach: seems to provide umbrella regulation w/further specification possible, e.g. open health regime. EU-Member state issues: need to ask new questions—is a regulation v a directive conclusive? Major question is private enforcement. The DSA’s provision for damages claims by individual users is the first time Parliament has considered this kind of enforcement, but it creates new questions about tort law and its enforcement. Not clear whether member states could add criminal liability, for example.

Leaving “hot potato” issues to self-regulation as a popular way to avoid them? But that doesn’t work for everything—it is naïve to think that Zalando hasn’t been talking to the Commission for months, but the result is litigation. DSA’s self-audit requirements: can this work? Just delegating it to a further level.

Christophe Geiger: different EU/US traditions around regulatory oversight/verification of compliance with key values v private ordering. DSA is obviously overdoing it, but fundamental approach is very interesting—compare IP: don’t leave rightsholders & platforms alone, someone else has to step in. 27 different coordinators/regimes/traditions of intervention is problematic but the principle of regulatory oversight is at least interesting.

Eleonora Rosati: It is a success to have gotten to the point of adopting direct regulation—DSM Directive has resulted in extremely fragmented transposition and it’s not clear whether they can go as far as they have, including providing new rights not provided for in DSM. The relation b/t different provisions of DSM—link tax, data mining, “value gap,” etc.—is unclear. So we have at least skipped that frustration created by directives of this type, including Ecommerce Directive. Vagueness of the safe harbors in that directive is no indicator of success. Didn’t create a level playing field.

Senftleben: What would DSA success look like?: a great question. We used to navigate cities ourselves, didn’t leave it to the machine. We knew where the different parts of the city were. Now we don’t. We’re lazy. We don’t do our own content filtering. The DSA puts burdens on citizens to be active. Transparency information: you can see how things function; the burden is on the citizen to do something (including creating an NGO or using the redress mechanisms). Fear: huge failure b/c we are just too lazy to use that. We’ll just behave like we do with Google Maps and follow what the automated systems tell us. Success=people are empowered in a real way and track how information flows/reaches them.

Pam Samuelson: How do I teach people to follow this? It is too complicated/requires a kind of perspective about regulation as a good that isn’t part of the current US regulatory culture. One thing US might see in a Brussels effect: larger platforms adapt so US doesn’t have to do anything legislatively, whether different or the same. But is that really a good thing? Do you care about barriers to entry? If you do, then maybe this isn’t the optimal strategy. [Goldman: that’s one question for the success/failure metrics.] I’d tell a startup to come to the US and only think of entering the European market after a certain amount of success.

Elkin-Koren: Common market creation is a measure of success, and digital platforms are likely to comply. What are the other consequences? Intended or not? Not intended to lower competition; tries to look at mice and elephants differently, but may turn out the opposite.

Giancarlo Frosio: Fundamental rights as a missing concept! That’s the great achievement of the DSA, meant to be interpreted and applied w/reference to fundamental rights recognized by Charter, to achieve a fair balance of conflicting fundamental rights.

Sebastian Schwemer: Goal is to regulate recommendation on the internet generally, focusing on process and not content. Would advise startups to start in Europe b/c as long as you’re in compliance everything is simple [I think I misunderstood this]. We focus too much on VLOPs.  DSA is overloaded, to be sure, but there’s important stuff. VLOPs provisions are different—competences aren’t clear where Commission has so much power not just over process but over content, setting rules and enforcing them both. 2021: Denmark proposed a social media law doing DSA+; it was shut down, but now they’re trying to introduce age verification on all platforms. DSA allows more leeway, again by focusing on process. Problem is two-tier system with regular platforms and VLOPs, which are understudied.

Justin Hughes: Some topics here are premature, others old hat (trusted flaggers). Opposite of the precautionary principle: AI Act seems like a precautionary principle issue. Take on the known unknowns at least. 1201 looked like a failure, now it looks like a success. [????] Maybe the self-audits will fail, and they’ll never be eliminated b/c people are afraid to eliminate them. Maybe a few national authorities will dominate rather than 27. In the US: Maybe [big platforms] will take their lobbying energy elsewhere and not oppose new US regulations, but maybe they’ll continue to oppose them on principle.

Leistner: might compare amount of litigation under different regulations, directives. DMA: goal is to have more traders on platforms/increase diversity—could just check whether this happens five years from now. That would be clear-cut. Much more difficult w/r/t DSA b/c we don’t know what the DSA actually wants; there are a number of theories (nerd harder?). Could look at share of platform resources devoted to content moderation over time, strength of European democracy, whether there is European brain drain/smart Indian innovators come to California or to Munich, or other things. What kind of data would we need to have some plausible natural experiment?

Giancarlo Frosio: Again, protection of fundamental rights is a key measure—tools to force platforms to set up algorithms so they don’t limit fundamental rights. [This is why I don’t get why Wikipedia and the Internet Archive are even covered, since they don’t have the algorithmic problems at which the DSA is supposedly aimed.]

Keller: one key issue is that VLOPs have the ability to shape the rules for those who come up after them—codes of conduct negotiated by older companies that work for them but may not work for new entrants. This comes up with questions of what risk mitigation looks like.

Geiger: fundamental rights is a main pillar of DSA. But here I think we need to push for academic community’s permission to do homework/coordinate efforts to ensure this is actually happening. Most regulatory authorities are political appointments w/no specialization in IP or fundamental rights. Some will be strongly captured by IP claimants.

10th Circuit endorses presumption of Lanham Act false advertising injury in mostly two-player market

Vitamins Online, Inc. v. Heartwise, Inc., --- F.4th ----, 2023 WL 4189604, Nos. 20-4126, 21-4152 (10th Cir. Jun. 27, 2023)

Proceedings below most recently blogged here.

Vitamins Online sued Heartwise (d/b/a NatureWise) under the Lanham Act and Utah’s Unfair Competition Law for false advertising about the ingredients of its competitive nutritional supplements and manipulating those products’ Amazon reviews. The district court ruled for Vitamins Online at a bench trial and ordered disgorgement of NatureWise’s profits for 2012 and 2013. The court also awarded Vitamins Online attorneys’ fees and costs.

Both parties appealed and the Tenth Circuit favored Vitamins Online, remanding for further consideration of punitive damages and an injunction—and, more broadly, approving a presumption of injury in these specific circumstances, with discussion of using antitrust principles (ugh) to determine whether a presumption is appropriate.

The supplements here involve garcinia cambogia and green coffee extract, which both purportedly help with weight loss.

Vitamins Online purportedly offered unique (at least during relevant time periods) ingredients that were clinically proven “to help support/assist with weight loss” unlike other versions of the same ingredients. For example, by the middle of 2013, Vitamins Online was the only seller of garcinia cambogia with “SuperCitrimax” on Amazon, and Dr. Oz’s 2013 show on garcinia cambogia featured the chief researcher for SuperCitrimax, leading Dr. Oz to urge his viewers to buy only that type. After this show, Vitamins Online’s sales increased substantially. Dr. Oz had similar effects on Vitamins Online’s green coffee.

NatureWise’s products advertised that they met the same Dr. Oz-endorsed requirements. To cut a long story short, they often didn’t. E.g.: “Although NatureWise’s garcinia cambogia did not contain SuperCitrimax, NatureWise’s founder specifically wanted to advertise SuperCitrimax because Vitamins Online was selling it, and thus NatureWise referenced SuperCitrimax on its Amazon product page and included the SuperCitrimax logo on the garcinia cambogia label.”

Both parties relied on Amazon for sales.

NatureWise asked its employees—who complied—to up-vote good reviews for its products and down-vote its products’ bad reviews (known as “block voting”), thereby affecting which reviews appeared at the top of the products’ pages. This was a violation of Amazon’s policies, and so NatureWise’s management did not want Amazon to learn of this practice. In addition, NatureWise offered free products to customers in exchange for a review. This also violated Amazon’s policies.

NatureWise’s entry into the market knocked Vitamins Online from its #1 seller spot, a position that carries competitive advantages. During 2012-2013, NatureWise made over $9.5 million in profit from the accused products, which the trial court ordered disgorged. That court also awarded fees for various discovery improprieties, which the court of appeals upheld.

Falsity: A fact question reviewed for clear error; there was none, either on the ingredient claims or the Amazon review-manipulation claims.

Of most interest: The district court didn’t clearly err in finding that block voting on the helpfulness of reviews and the offering of free products in exchange for reviews were misrepresentations. Block voting: the district court found that the number of “helpful” votes was artificially inflated and therefore literally false. NatureWise argued that nothing in the reviews themselves was false. “[T]he issue is not the falsity of the reviews themselves but rather the misleading impression ‘that many unbiased consumers find positive reviews to be helpful and negative reviews to be unhelpful.’” An expert explained that reviews “have a very significant impact on the purchase decision process” when consumers believe that the reviews are “objective and genuine.” Thus, it was not clearly erroneous for the district court to find that NatureWise’s block voting misled customers, “given that customers were likely under the misimpression that it was unbiased consumers—rather than NatureWise’s employees—who found good reviews of NatureWise products to be helpful and bad reviews unhelpful.” The court also noted the district court’s additional finding that NatureWise’s management was worried that customers would find out about the block voting. “This fact indicates that NatureWise believed customers were being misled about the helpfulness ratings.”

Free products: The district court found that NatureWise made literally false representations because it represented that it did not offer free products in exchange for reviews—even though it did. But, NatureWise responded, the free products were not contingent on the content of the reviews, and that the act of giving a free product did not render the reviews themselves false. Again, “NatureWise’s actions misled consumers about the number of reviews from unbiased customers and the true ratio of putative unbiased positive to negative reviews.” Vitamins Online’s expert concluded that the act of offering a product in exchange for a review is likely to skew the positive results of the review.

The district court gave Vitamins Online a rebuttable presumption of injury  “because the markets at issue were essentially two seller markets, so it could be presumed that sales wrongfully gained by NatureWise were sales lost by Vitamins Online.” This was correct.

A presumption of injury began in the Second Circuit for comparative advertising. Even without a direct comparative statement, if the ad targets an “obvious competitor,” that can also qualify for a presumption, and when there’s an essentially two-party market, the ad will always target an obvious competitor. A “strict two-player market is no longer inflexibly required. Rather, the market simply must be ‘sparsely populated.’”

Thus, the rule: “once a plaintiff has proven that the defendant has falsely and materially inflated the value of its product (or deflated the value of the plaintiff’s product), and that the plaintiff and defendant are the only two significant participants in a market or submarket, courts may presume that the defendant has caused the plaintiff to suffer an injury.” The presence of “a few other insignificant market participants” doesn’t change anything “so long as the plaintiff and defendant are the only significant actors in the market, since the defendant will still presumably receive most of the diverted sales.”

Caveats: even in essentially a two-player market, the presumption is a presumption that injury occurred, not about its degree. “The sparse competitor market can support a finding of causation, but damages, if sought, will typically require some further evidence or analysis.” And the presumption is rebuttable.

Back to the presumption: “Whether the presumption of injury is applicable therefore turns primarily on the scope and occupancy of the market. To make these determinations, our antitrust caselaw is instructive.” [Cue antitrust lawyers talking about the difficulties of market definition in antitrust! FWIW, I’m giving you essentially all of the market definition done by the court; you can contrast that to what a market definition analysis by an antitrust economic expert looks like and consider how “instructive” that really is.] Product market boundaries are defined by cross-elasticity of demand; high cross-elasticity means products are substitutes and low means they aren’t. Submarkets “may be determined by examining such practical indicia as industry or public recognition of the submarket as a separate economic entity, the product’s peculiar characteristics and uses, unique production facilities, distinct customers, distinct prices, sensitivity to price changes, and specialized vendors.”

Market definition is a question of fact, and it was not clearly erroneous to find the market sparsely populated. There was evidence “that the parties were operating in a two-player market and that the existence of other competitors were de minimis. That is enough to render the presumption of injury applicable.” But … what about alternatives? Is the market all green coffee, all green coffee sold on Amazon, all weight loss supplements, something else? In a footnote: “We are not adopting our entire antitrust corpus as the relevant standard to use in defining the market. … But the antitrust analogue is a roughly useful template from which to start the analysis.”

NatureWise failed to rebut the presumption. It argued that there could be no causation without correlation, but the record showed that Vitamins Online’s sales dropped at roughly the same rate as NatureWise’s sales rose for at least specific quarters, which was enough. Nor was Vitamins Online required to prove a nexus between the false advertising and the lost sales. “Once Vitamins Online made the requisite showing that the markets in question were composed of just two significant market players, then the district court was entitled to presume that NatureWise caused an injury.” [I assume materiality is in there somewhere.]

NatureWise also argued that there were intervening factors causing Vitamins Online to lose sales, but they didn’t show clear error. (1) Dr. Oz’s shows allegedly caused a flood of competitors to enter the market—but that was answered by the trial court’s “essentially two-party market” finding.  Further, “this alleged flood of competitors would presumably have resulted in sales losses for NatureWise as well—but NatureWise’s sales increased when Vitamins Online’s sales decreased.” (2) Vitamins Online’s products were “far more expensive” than competitors’. But expert evidence contradicted this. (3) Vitamins Online’s products had an average rating of 2.9 out of five stars, which would cause poor sales. But most of Vitamins Online’s products had a similar average rating both when its sales rose before NatureWise entered the markets and when they fell after NatureWise entered the market and engaged in deceptive sales practices.

Disgorgement was not an abuse of discretion, given the facts above. But the district court was not required to award disgorgement for 2014 and after. It’s not error to limit profits to a period in which the plaintiff can show actual damages, considering that as part of the equitable balancing. The court rejected Vitamins Online’s argument that, under the statute, it had only to “prove defendant’s sales,” and the burden was on NatureWise to prove which portion of the sales are not attributable to the false advertising. But § 1117(a) still requires a plaintiff to “show some connection between the identified ‘sales’ and the alleged infringement.” “Section 1117(a) does not presumptively entitle Vitamins Online to all NatureWise’s sales proceeds no matter how temporally disconnected from the false advertising injury.”

The district court denied an injunction on the basis that Vitamins Online was adequately compensated by a disgorgement of profits, and because it found that it would be against the public interest to force NatureWise to remove all its product reviews from Amazon. But it should have considered enjoining future review manipulation, including block voting and free products.

The district court also needed to consider punitive damages under the UCL. Enhanced damages aren’t ok under the Lanham Act when the plaintiff was already “adequately compensated,” but under Utah law, only one of the seven relevant factors for punitive damages considers the actual damages award.

Wednesday, June 28, 2023

And Taco Tuesday

 Speaking of authenticity, here is a funny WSJ podcast--even if you don't listen to the whole thing, it's worth listening to Gregory Gregory claim he invented Taco Tuesday, the discussion of its appearance in the 1930s, and then the last minute where he makes a surprising admission.

Fajita followup: geographic origin as inherently contested concept

Yesterday's post about what every reasonable consumer of Mexican food knows sparked some interest in my household. It's not a new observation that Twiqbal's common sense can involve things that are not actually common sense to all reasonable people, and my spouse was struck by the fajita story specifically. After a bit of research, he came up with the following, which complicates the characterization of fajitas as Tex-Mex, but certainly doesn't contradict the idea that authenticity is a social construct. The question--familiar to students of geographic indications more generally--is how the law should intervene in attempts to stabilize or shift that social construct. Even if the law doesn't, economically motivated producers will do so, sometimes with indifference to existing meanings. And it's not always obvious how legal intervention affects consumer welfare (even straight-up false "Hecho en Mexico" could arguably be welfare-promoting if consumers wrongly preferred food made in Mexico which otherwise satisfied fewer of their preferences because they misinterpreted Mexican origin as a signal of other qualities).

So: Mario Montaño writes that "the origin of fajitas has been well documented to have been somewhere in the South Texas border region." But he objects to calling fajitas "Tex-Mex," on the grounds that "The American food industry, enacting the principles of cultural hegemony, has effectively incorporated and reinterpreted the food practices of Mexicans in the lower Rio Grande border region, relabeling them “Tex Mex” and further using that term to describe any Mexican or Spanish food that is consumed by Anglos. Although Mexicans in this region do not refer to their food as Tex Mex, and indeed often consider the term derogatory, the dominant culture has redefined the local cuisine as “earthy food, festive food, happy food, celebration. It is peasant food raised to the level of high and sophisticated art.”

Mario Montaño, “Appropriation and Counterhegemony in South Texas: Food Slurs, Offal Meats, and Blood,” in Usable Pasts, ed. Tad Tuleja, Traditions and Group Expressions in North America (University Press of Colorado, 1997), 50–67, https://doi.org/10.2307/j.ctt46nrkh.7.

Tuesday, June 27, 2023

Ambiguity in consumer protection cases means something different than ambiguity in Lanham Act cases

Would you believe I substantially shortened the analysis?

La Barbera v. Olé Mexican Foods Inc., 2023 WL 4162348, No. EDCV 20-2324 JGB (SPx) (C.D. Cal. May 18, 2023)

Granting reconsideration, the court reverses its previous ruling and dismisses the claims with prejudice. Notable for an extensive discussion of why it would be unreasonable for a consumer of defendant’s products to think they originated in Mexico, dooming the usual California claims, and for surfacing a key problem with the idea of an "ambiguous" claim.

All the accused products allegedly use: (a) the phrase “El Sabor de Mexico!” or “A Taste of Mexico!”; (b) a Mexican flag on the front and center of the packaging; and (c) the brand name “La Banderita” (or “the flag”), a reference to the Mexican flag displayed prominently on the Products. Some also contain a circular logo with the Mexican flag and the word “Authentic.” Some also contain Spanish words or phrases, such as “Sabrosísimas” or “Tortillas de Maiz.” However, all the accused products “clearly and prominently” say “MADE IN U.S.A.” on the back and state they were “Manufactured by: Olé Mexican Foods, Inc., Norcross, GA 30071.”

The court agreed with defendant that recent Ninth Circuit precedents required more of reasonable consumers than earlier cases. When the front of the package is not misleading but ambiguous (whatever that means), consumers can be required to look at the back of the package for clarification.

Note: What ambiguity versus misleadingness usually means in practice, as far as I can tell, is whether the court thinks the front is misleading. The court here tries to create a distinction that could work, but—as it notes—the standard of care it imposes is pretty high, requiring more lawyerly precision than most consumers (even lawyers) engage in for most purchases. The court thinks this result is dictated by a single panel of the Ninth Circuit; I’m not so sure.

Anyway, plaintiffs must plausibly allege “a probability that a significant portion of the general consuming public or of targeted consumers, acting reasonably in the circumstances, could be misled.” Applying the Manuka honey precedent, the court found that standard not satisfied. The court gave a detailed exegesis of that case, Moore v. Trader Joe’s Co., 4 F.4th 874 (9th Cir. 2021), noting that the panel reasoned that reasonable consumers would use contextual information to conclude that “100% New Zealand Manuka Honey” did not have to be made with honey solely sourced from manuka flowers. “Perhaps the panel’s most surprising holding was the following: ‘given the foraging nature of bees, a reasonable honey consumer would know that it is impossible to produce honey that is derived exclusively from a single floral source.’” In addition, “the products’ inexpensive price would put a reasonable consumer on notice that the concentration of Manuka flower nectar was relatively low,” as would a “10+” rating on the label.

So, “if the Ninth Circuit thought the plaintiffs’ reading of the Trader Joe’s product so implausible that it could be dismissed as a matter of law, it would surely hold the same here.” Moore

imagines a “reasonable consumer” who is intelligent, someone capable of analyzing different pieces of information, engaging in logical reasoning, and drawing “contextual inferences” from a product and its packaging. She is diligent, for she does not just view one phrase or image in isolation, but looks at the entirety of the packaging together—she “take[s] into account all the information available” to her and the “context in which that information is provided and used.” And she appears to exhibit some skepticism about the representations made to her by a product’s advertising, aware that corporations attempting to sell items are conveying information to a given end and in a certain light. She does not expect advertisers to lie to her, nor should any consumer be expected to endure affirmative misrepresentations or strongly misleading claims. But she understands nuance and context, and that when making her purchasing decisions, all representations made by a seller are designed to sell.

Being a reasonable consumer of all the low-cost products we buy sounds exhausting! But, the court reasoned, Moore also has additional premises:

(1) that reasonable consumers in the market for a given product generally know a little bit about that product, at least more than a random person on the street, who may have no interest in that product or given it any thought; and (2) if a consumer cares about any particular quality in a product, she is willing to spend at least a few seconds reading a product’s packaging to see if it answers her question.

It follows that the “reasonable tortilla consumer” “knows a thing or two about tortillas, including a cursory understanding of their history and cross-border appeal, and has a basic grasp of broader social phenomena, such as the significant presence of Mexican American or Latino immigrants in the United States and the foods they have introduced into the mainstream American market.”

Moreover, if something is material to consumers, Moore indicates that they will be more careful and “invest at least a few seconds into reading the front and back labels of a product to see if it answers their question.” Although a reasonable consumer starts with the front of a package, “unless the front label is unmistakably clear about the issue for which she seeks an answer, she knows that she would ‘necessarily require more information before [she] could reasonably conclude’ what the answer is.” Then she would look at the product itself, including its context, for answers. This distinction between misleadingness and ambiguity is workable, but courts will have to keep in mind that, when a substantial number of reasonable consumers would think they could answer the question (and would be wrong about a reasonable answer) from the package front, misleadingness is possible.

As a matter of judicial experience and common sense, “any reasonable tortilla consumer who cares even a little bit about whether the tortillas she is buying in a grocery store are made in a Mexican factory rather than an American one, is willing to spend five seconds reading a product’s packaging to find out.” The court took pains to distinguish this from requiring the consumer to use the company’s website or compare reviews online. It was not suggesting that consumers must do anything more than “spend five seconds looking at the front and back labels.”

While I think Moore was wrong—it made up a class of sophisticated Manuka honey purchasers and the back labels did nothing to clarify—the analysis here is much more solid given the specific context of geographic origin information. A reasonable consumer “does not approach purchasing decisions with a professorial genius or inclination toward exhaustive research,” but if she cared about Mexican origin, she’d notice that the front package doesn’t say “one way or the other” and check the back, because “geographic origin information is often on the back of packaging.”

Another way to see it is that consumer protection claims arise along a spectrum:

On one end, plainly fanciful or unreasonable interpretations of a product’s labeling are subject to dismissal. On the other end, false or ambiguous front-label claims cannot be cured by contradicting back-label statements as a matter of law. Between these poles, ambiguous front-label claims that are consistent with back-label claims permit courts greater latitude to consider the surrounding context of the product and packaging to determine if the claims are misleading.

I don’t like this framing as much because it double-counts ambiguity while conflating two different meanings. Some ambiguity is what Lanham Act cases mean when they say a claim is ambiguous: “some consumers may receive a true message, while others may receive a false one,” and that should be included in the class of potentially misleading claims. But the ambiguity the court here seems focused on, which seems right, is whether a reasonable consumer could even answer the relevant question from the front of package information.

Here, “there are certainly no falsehoods on the front of the labels, while the back labels have nothing to contradict or correct.”

So let’s look at the packages:

La Banderita Sabrosísimas Corn

Other than “La Banderita” and “Sabrosísimas,” every single other word (apart from “tortillas” itself) on the front and the back of the packaging was in English, not Spanish. It wasn’t true that there was a “Mexican flag on the front and center of the packaging”: the tricolor was there, but “Defendant has erased the national coat of arms (an eagle perched on a cactus with a rattlesnake in its mouth) from the center white stripe and replaced it with an image of corn.”

La Banderita Sabrosísimas Flour

Though there was more Spanish, e.g., Sabrosísimas Tortillas Caseras and El Sabor de Mexico, English words and phrases still predominated, and the back was even clearer than the other packages that it was made in the US.

La Banderita Burrito Grande

Hardly any Spanish-language representations on the front label, apart from the brand name and “Burrito Grande,” “the latter of which hardly counts as Spanish because it would be recognizable to even monolingual English-speaking Americans.”

La Banderita Whole Wheat Fajita

Similar.

A reasonable consumer  

would know that the United States of America is a nation of immigrants. She has a basic understanding that there is such a thing as global capitalism, in which markets for goods and services operate across borders. She would know that foods associated with other cultures, from Chinese to Italian to Mexican, have become enormously popular in the United States, with Americans of all kinds enjoying these cuisines, or Americanized versions of them, at restaurants and at home. Because these foods are commonplace in the United States, not just in their countries of origin, she understands that there are American businesses, or multinational businesses, that sell these kinds of foods—i.e., it is an inherently unreasonable assumption that just because a food clearly originates from a foreign country, whether it is pasta or tortillas, that ipso facto it must be made in that country. This is obviously true of ethnic restaurants, which by definition serve food originating from foreign nations but made in the United States. But it is also true of manufactured products, including the kind of products produced by the kinds of businesses like Defendant’s: an American company manufacturing and selling goods in the United States, founded and led by a family with an immigrant background who make foods originating from their family’s nation of origin.

Some Spanish on the package wouldn’t be misleading. “Virtually all of the words on the Products’ labels are in English, not Spanish, and even most of the Spanish words or phrases are translated into English.” Even assuming that a package entirely written in Spanish might convey to a consumer that its primary market is Spanish speaking, and that a reasonable consumer could infer that products targeting the Mexican market are more likely to be made in Mexico, no reasonable consumer would think these products targeted a monolingual Spanish-speaking market.

Further, nothing in statements like “The Taste of Mexico!” was inherently false or misleading; rather, such statements were meaningless/trivially true in that tortillas are Mexican; “one could eat a hamburger or any quintessentially ‘American’ food outside the United States and say that it ‘tastes of America.’”

The court was also careful to note that images, logos and graphics “can convey a strong message to consumers, and often they can be more powerful than words alone.” But not all flag-like uses are the same.

If the Products contained an actual Mexican flag, especially one paired with any kind of statement of a seemingly “official” nature, perhaps a consumer could think the Products display some kind of governmental imprimatur of Mexican origin. A product that bears the Mexican flag and says “Made in Mexico” or “Hecho en Mexico” beneath it is obviously misleading if, in fact, the product is made in the United States. So, too, may be a product bearing a Mexican flag with the words “Official Product” or “100% Mexican” or another phrase that suggests a representation regarding the product’s supply chain, rather than just a cultural affiliation. But no reasonable consumer would think the Mexican flag imagery on the Products suggests that they must be made in Mexico, for these are highly stylized “flags,” decidedly unofficial in nature: they adopt the Mexican tricolor, with images of wheat or corn in lieu of the national coat of arms. Such imagery evokes Mexican heritage, which is truthful, rather than misleading: tortillas originate from Mexico.

Then there was some more speculative stuff, like that reasonable consumers will know that fajitas are Tex-Mex, not Mexican (really?), that whole wheat is an American thing, and that burritos have only limited popularity in northern Mexico but they’re central to Mexican-American cuisine.

The court was also not persuaded by the complaint’s allegations that a substantial number of reasonable consumers would pay more for tortillas of Mexican origin. Although a price premium for authenticity could be plausible, “it is not reasonable to assume that potential consumers for Defendant’s products are seeking out the most traditional of tortillas, for they are buying them at ‘grocery retailers in California.’” The plaintiff alleged that Olé targeted Hispanic consumers, “[b]ut anyone with a basic familiarity with Mexican culture, including many or most Mexican Americans living in the United States, knows that the vast majority of Mexicans acquire their tortillas at tortillerias, and no tortilleria would package their products in a manner remotely similar to the Products. The Products also do not look like mass-produced tortillas sold in Mexican grocery or convenience stores, not least because they are mostly written in English, not Spanish.”

Anyway, authenticity was nothing more than subjective; it was a social construct. “Reasonable tortilla consumers care about whether their tortillas are good—and that is not the same thing as caring solely about whether their tortillas are made in Mexico.” [I was pretty sympathetic up to this point, but consumers are generally entitled to want what they want for whatever reasons they want, even if the reasons are dumb ones. And while “authentic” is probably puffery in this context, that’s not the challenged representation—it’s Mexican origin, which people can plausibly value because of their varied conceptions of authenticity.]

But the court used this point to challenge the idea that defendant was acting in bad faith: It was ok to offer “Mexicanness” with made-in-the-US food. Have some culture war:

An individual can be proud to be an American while simultaneously acknowledging, and celebrating, her Mexican heritage; a company can be based in the United States with a corporate culture that acknowledges, and celebrates, Mexican culture and cuisine. By implicitly rejecting these ideas, lawsuits like this one perpetuate an unfortunate, and unreasonable, undercurrent of essentialism: that foreign-sounding people, words and foods are less American than the people, language and cuisine associated with the “real” America. If a consumer begins with the presumption (seemingly an unrebuttable one if she is also unwilling to look at the back of a package to see if it is made in the United States) that all ethnic food products she finds in an American grocery store are manufactured in a foreign country because they are inherently “foreign,” she fundamentally misunderstands the American experience. For purposes of a FAL, CLRA or UCL claim, she is also not a reasonable consumer ….

The dismissal (which is now on appeal) was without leave to amend, despite the existence of a survey. The survey couldn’t change the results. Among other things, the survey deliberately showed participants only portions of the packaging, excluding the parts where the Products say they are made in the United States. “A participant would have to click on a tiny and difficult-to-find link to view any additional images, and nowhere do the survey results show whether or how many participants did so.”

Comment: Here’s where an understanding of the dual meanings of “ambiguous” would be helpful. If there is a good enough “I don’t know/not enough information to answer” option, even a survey showing only the front can provide relevant evidence about whether the front of the package is ambiguous in the way that means “a reasonable consumer would understand that she can’t answer X question with this information” or ambiguous in the way that means “30% are fooled and 70% aren’t.”

The survey also fatally failed “to provide adequate factual detail as to why a consumer might believe the Products were made in Mexico.” Participants needed to be asked why they believed that.