Social Media in Action in Commercial Litigation

Chapter Authors

United States

John L. Hines, Jr., Partner – jhines@reedsmith.com
Janice D. Kubow, Associate – jkubow@reedsmith.com

United Kingdom

Emma Lenthall, Partner – elenthall@reedsmith.com
Louise Berg, Associate – lberg@reedsmith.com


Introduction

This chapter explores emerging exposures associated with misleading advertising and defamation in social media.

The ever-growing number of conversations in social media venues creates new opportunities for advertisers to promote their brand and corporate reputation. These same conversations, however, create new risks. Online disparagement of a corporation or its products and/or services through social media can spread virally and very quickly, making damage control difficult. Accordingly, corporations need to be aware of their rights and remedies should they fall prey to harmful speech on the Internet. An organization also needs to understand how to minimize its own exposure and liability as it leverages social media to enhance its brand and reputation.

Within the context of social media, the two greatest risks to brand and reputation are, respectively, misleading advertising and defamation. Within the realm of misleading advertising, companies need to pay attention to new risks associated with the growing phenomenon of word-of-mouth marketing.


False Advertising and Word-of-Mouth Marketing: Understanding the Risks

The US position

The presence of social media increases the risk that your organization will be touched by false advertising claims–either as a plaintiff or a defendant. First, more communication means more opportunity for miscommunication generally and for a misstatement about your or your competitor’s brand. Compounding this risk is the fact that social media marketing and sales channels (including word-of-mouth marketing programs) are now highly distributed, making enforcement of centralized communication standards difficult. Finally, social media frequently operates as a kind of echo chamber: consumers hear their likes and dislikes repeated back to them, amplified, and reinforced by those who share similar feelings.[1] In light of all these factors, the growth of social media is likely to see false advertising claims skyrocket. Indeed, it is worth noting that a 2008 Federal Judicial Center Report concluded that between 2001 and 2007, the number of consumer protection class actions filed annually rose by about 156 percent.[2]

False Advertising Generally

Generally, the tapestry of laws covering false advertising consists of Section 5 of the FTC Act[3] (the “FTC Act”), Section 43(a) of the Lanham Act,[4] the state deceptive practices acts, and common law unfair competition. All of these laws target deception of one form or another, but they differ in their requirements as to who can bring an action, the burden of proof required, and the available relief.

Section 5 of the FTC Act prohibits "unfair or deceptive acts or practices."[5] According to the FTC Policy Statement on Deception (1983),[6] deception exists if there is a material representation, omission or practice that is likely to mislead an otherwise reasonable consumer. Neither intent nor actual harm is a required element, and the FTC, in making a determination, is free to draw upon its experience and judgment rather than from actual evidence in the marketplace.[7] The FTC will find an advertiser's failure to disclose facts actionable under Section 5 if a reasonable consumer is left with a false or misleading impression from the advertisement as a whole.[8] The advertiser generally bears the burden of substantiating the advertising claim.[9] The FTC Act permits monetary and injunctive relief.[10]

Prior to, or in lieu of, an FTC proceeding, parties may find themselves before the National Advertising Division (“NAD”), a self-regulatory body that also focuses on resolving deceptive and misleading advertising. Parties generally participate in NAD proceedings willingly so as to avoid potentially more consequential action at the FTC. Although claims can be brought by consumers or competitors at the NAD, there is no private right of action at the FTC or in federal court under the FTC Act. Consumers seeking to file claims in court for consumer fraud and false advertising must resort to applicable state deceptive practices statutes and common law.

Competitors are also protected against deceptive practices under Section 43(a) of the Lanham Act, which provides for civil actions for injunctive and monetary relief (in state or federal court) for false or misleading statements made in commercial advertising. The Seventh, Ninth and Tenth Circuit Courts of Appeals have tended to restrict standing under the Lanham Act to parties who are in direct competition; the other Circuits have a slightly broader standing threshold—but relief is not available to consumers. Under the Lanham Act, it is not necessary to show actual harm or intent to deceive to obtain an injunction.[11] To obtain damages, however, it is necessary to show that customers were deceived and that the plaintiff was harmed. Some courts raise a presumption of harm where the plaintiff proves the defendant's intent and bad faith.

The plaintiff in a Lanham Act action has the burden of proving that the claim is deceptive.[12] The Lanham Act prohibits false and misleading statements; accordingly, the mere failure to disclose or omission to state a fact is not per se actionable. However, if the failure to disclose makes a statement "affirmatively misleading, partially incorrect, or untrue as a result of failure to disclose a material fact," then that statement is actionable.[13] In cases of implied deception, this means the plaintiff will have to introduce extrinsic consumer survey evidence.

As noted above, the growth of social media is likely to result in an increase in enforcement actions and private civil actions generally in connection with false advertising. Moreover, as discussed below, the FTC Guides make bloggers and advertisers using word-of-mouth marketing particularly vulnerable to deceptive practices and false advertising claims based on the blogger's failure to disclose a material connection to the advertiser.[14] In addition to clarifying the FTC's own position on how the rules applicable to endorsements apply to social media, the FTC Guides are likely to be applied by state and federal courts when interpreting the Lanham Act and state deceptive practices acts.[15]

“Word of Mouth” Marketing

The Duty to Disclose

Social media has spawned a virtually new advertising industry built on an old method of spreading the brand: word-of-mouth marketing. Word-of-mouth marketing involves mobilizing users of social media to "spread the word" about the advertiser's goods and services. According to the Word of Mouth Marketing Association, word-of-mouth marketing is "[g]iving people a reason to talk about your products and services, and making it easier for that conversation to take place. It is the art and science of building active, mutually beneficial consumer-to-consumer and consumer-to-marketer communications."[16]

Word-of-mouth marketing typically refers to endorsement messaging. Specifically, an endorsement is "an advertising message" that consumers are likely to believe is a reflection of the opinions and beliefs of the endorser rather than the "sponsoring" advertiser.[17] When a television ad depicts "neighbors" talking about the merits of the Toro lawn mower, we don't believe that these statements reflect their personal beliefs; we know that they are actors speaking for the advertiser. On the other hand, Tiger Woods touting Nike golf equipment is an endorsement; we believe that we are listening to his personal views. A third party's statement, however, is not an advertisement (and not an endorsement) unless it is "sponsored." To determine whether a statement is an endorsement, consider whether, in disseminating positive statements about a product or service, the speaker is: (1) acting solely independently, in which case there is no endorsement, or (2) acting on behalf of the advertiser or its agent, such that the speaker's statement is an "endorsement" that is part of an overall marketing campaign.[18]

As with all advertising, the bedrock concern of the FTC is with “unfair or deceptive acts or practices” prohibited under Section 5 of the FTC Act.[19] Deceptive acts or practices, generally, may include a failure to disclose material facts relative to a particular advertising claim. Thus, in the context of an endorsement, the relationship between the advertiser and the endorser may need to be made apparent to the consumer in order for the consumer to properly weigh the endorser’s statement. The FTC Guides state that advertisers are subject to liability for false or unsubstantiated statements made through endorsements, or for failing to disclose material connections between themselves and their endorsers, and that endorsers also may be liable for statements made in the course of their endorsements.[20] Section 255.5 of the FTC Guides requires that where a connection exists between the endorser and the seller that might materially affect the weight or credibility of the endorsement, such connection must be fully disclosed.

The FTC Guides distinguish three features of endorsements in the context of social media: (1) dissemination of the advertising message; (2) advertisers’ lack of control; and (3) material connections.

First, in traditional print and broadcast media, the advertiser controlled the messaging. Endorsements were embedded largely in a message controlled by the advertiser. This has changed. As the FTC explains (emphasis added):[21]

When the Commission adopted the Guides in 1980, endorsements were disseminated by advertisers—not by the endorsers themselves—through such traditional media as television commercials and print advertisements. With such media, the duty to disclose material connections between the advertiser and the endorser naturally fell on the advertiser.

The recent creation of consumer-generated media means that in many instances, endorsements are now disseminated by the endorser, rather than by the sponsoring advertiser. In these contexts, the Commission believes that the endorser is the party primarily responsible for disclosing material connections with the advertiser.

Consistent with this observation, the FTC Guides were amended to provide that "[e]ndorsers also may be liable for statements made in the course of their endorsements."[22] While at this writing the FTC has indicated that it does not intend to pursue individual users of social media and that it will be focusing enforcement on the advertisers, individual social media users would be ill-advised to ignore the very clear mandates directed to them in the FTC Guides, standards that are also likely to influence courts in their interpretation of the Lanham Act and similar state laws.

Second, advertisers will frequently find themselves in relationships with apparently remote affiliate marketers, bloggers and other social media users. However, the advertiser’s lack of control over these remote social media users does not relieve the advertiser of responsibility for an endorser’s failure to disclose material information. “The Commission recognizes that because the advertiser does not disseminate the endorsements made using these new consumer-generated media, it does not have complete control over the contents of those statements.”[24] The Commission goes on to state, however, that “if the advertiser initiated the process that led to these endorsements being made—e.g., by providing products to well-known bloggers or to endorsers enrolled in word of mouth marketing programs—it potentially is liable for misleading statements made by those consumers.”[25]

Importantly, for advertisers, the determination of liability hinges on whether "the advertiser chose to sponsor the consumer-generated content such that it has established an endorser-sponsor relationship."[26] Again, that relationship may exist with otherwise remote users. The FTC points out, however, that "[it], in the exercise of its prosecutorial discretion, would consider the advertiser's efforts to advise these endorsers of their responsibilities and to monitor their online behavior in determining what action, if any, would be warranted."[27] To avoid prosecution, if not liability, advertisers should heed the Commission's admonition:[28]

[A]dvertisers who sponsor these endorsers (either by providing free products—directly or through a middleman—or otherwise) in order to generate positive word of mouth and spur sales should establish procedures to advise endorsers that they should make the necessary disclosures and to monitor the conduct of those endorsers.

Finally, the FTC Guides indicate that social media endorsers may have a heightened duty to disclose material connections to the advertiser. “[A]cknowledg[ing] that bloggers may be subject to different disclosure requirements than reviewers in traditional media,” the FTC states:[29]

The development of these new media has, however, highlighted the need for additional revisions to Section 255.5, to clarify that one factor in determining whether the connection between an advertiser and its endorsers should be disclosed is the type of vehicle being used to disseminate that endorsement—specifically, whether or not the nature of that medium is such that consumers are likely to recognize the statement as an advertisement (that is, as sponsored speech). Thus, although disclosure of compensation may not be required when a celebrity or expert appears in a conventional television advertisement, endorsements by these individuals in other media might warrant such disclosure.

 . . .

The Commission recognizes that, as a practical matter, if a consumer's review of a product disseminated via one of these new forms of consumer-generated media qualifies as an "endorsement" under the construct articulated above, that consumer will likely also be deemed to have material connections with the sponsoring advertiser that should be disclosed. That outcome is simply a function of the fact that if the relationship between the advertiser and the speaker is such that the speaker's statement, viewed objectively, can be considered "sponsored," there inevitably exists a relationship that should be disclosed, and would not otherwise be apparent, because the endorsement is not contained in a traditional ad bearing the name of the advertiser.

Word of Mouth Marketing: Summary

The FTC’s message is thus clear: (1) bloggers and other social media users are viewed as primary disseminators of advertisements; (2) endorsers in social media, along with the sponsoring advertisers, are subject to liability for failing to make material disclosures relating to the endorsement relationship (e.g., gifts, employment and/or other connections and circumstances); (3) the FTC appears to take the position that there is a higher threshold of disclosure in social media than traditional media, and that the endorsement relationship itself is likely to trigger the obligation to disclose; (4) advertisers need to take reasonable steps to assure that material disclosures are in fact made; (5) advertisers cannot rely on the “remoteness” of the social media endorsers or on the advertiser’s lack of control over them to escape liability; (6) advertisers are technically liable for a remote endorser’s failure to disclose; (7) an advertiser’s ability to avoid discretionary regulatory enforcement due to the endorser’s failure to disclose will be a function of the quality of the advertiser’s policies, practices and policing efforts. A written policy addressing these issues is the best protection.

False Endorsements

False endorsement cases arise under Section 43(a) of the Lanham Act where a person claims that his name or likeness, or actions attributed to him, are being used improperly to promote particular goods or services.

The Internet is rife with spoofing, fake profiling and other malicious conduct directed by one social media user against another. Frequently the conduct involves the transmission and publication of embarrassing or highly personal details about the victim. While historically false endorsement cases have commonly been brought by celebrities or other people well known to a community, the prevalence of social media will likely see a rise in false endorsement cases brought by non-celebrity victims under Section 43(a) and parallel state law.[30]

In Doe v. Friendfinder Network, Inc.,[31] the defendant operated a network of web communities where members could meet each other through online personal advertisements. Someone other than the plaintiff created a profile for "petra03755" that included nude photographs and representations that she engaged in a promiscuous lifestyle. Biographical data in the profile, according to the plaintiff, caused members of the community to identify her as "petra03755." The plaintiff alleged that the defendant did nothing to verify the accuracy of the information posted, and that it caused portions of the profile to appear as "teasers" in Internet search engine results (when users entered search terms matching information in the profile, including the true biographical information about the plaintiff) and in advertisements that in turn directed traffic to the defendant's site. In denying the motion to dismiss the Lanham Act claim, the district court stated:[32]

The plaintiff has alleged that the defendants, through the use of the profile in “teasers” and other advertisements placed on the Internet, falsely represented that she was a participant in their on-line dating services; that these misrepresentations deceived consumers into registering for the defendants’ services in the hope of interacting with the plaintiff; and that she suffered injury to her reputation as a result….

For purposes of this motion, then, the court rules that the plaintiff’s claim for false designation under 15 U.S.C. § 1125(a)(1)(A) does not fail simply because she is not a “celebrity.”

The UK position

While there is at present no specific legislation aimed at social media, there is a plethora of legislation and self-regulation that impacts on almost all activities connected to blogging, social networking or undertaking new forms of promotions online. Some of the most important legal controls are:

The Advertising Standards Authority and the ‘CAP’ Code

The Advertising Standards Authority is an independent body which regulates all forms of advertising, sales promotion and direct marketing in the UK. Different regimes apply to broadcast and non-broadcast advertising. Online advertisements are covered by the self-regulatory 'non-broadcast' Code of Advertising Practice (CAP Code).[33] While this Code only applies at present to advertisements in 'paid for' space, this is likely to change shortly. There is huge political pressure to extend the remit of the ASA and the CAP Code to all promotional messages on the Internet. In any event, all sales promotions are covered by the CAP Code. Advertisers need to be aware of the need for compliance with the Code. For example, the ASA regulates pop-up and banner ads on social networking sites and viral email or other marketing messages which advertisers pay social media operators to seed, though the position is not entirely clear. In addition, there is a risk that Trading Standards or other regulators could intervene by utilising legislation, as described further below.

The ASA will not regulate any advertisements published in foreign media or which originate from outside the UK. Advertisers need only be concerned if they are placing an advertisement on a UK-based social networking site. However, the ASA does operate a cross-border complaints system in conjunction with ‘EASA’, the European Advertising Standards Alliance.

The CAP Code sets out a number of key principles to protect consumers against false advertising and other harmful advertising practices. For example, it states that advertising should be legal, decent, honest and truthful, should not mislead by inaccuracy, ambiguity, exaggeration or otherwise, should not cause offence and should not contain misleading comparisons. It also contains specific rules relating to particular types of advertisement and products.

The UK non-broadcast advertising industry is self-regulating and therefore compliance with the CAP Code is voluntary. However, penalties for breaching the Code can include the following:

  • Refusal of further advertising space: The ASA can ask sellers of ad space in all media to refuse to carry an ad
  • Adverse publicity: ASA adjudications are published weekly and can be widely reported by the media
  • Withdrawal of certain trading privileges (e.g., discounts)
  • Enforced pre-publication vetting
  • Ineligibility for industry awards
  • Legal proceedings: In the case of misleading ads or ads which contain unfair comparisons, the ASA can refer the matter to the Office of Fair Trading. The OFT can seek undertakings or an injunction through the courts or issue an Enforcement Order under the Enterprise Act 2002.

Advertisers also need to be aware that more powerful sanctions are in the pipeline and that, practically speaking, the risk of damage to the brand by an adverse adjudication is a real deterrent to most reputable advertisers and brand owners.

Advertisers who like to put out edgy content do not necessarily need to fear ASA regulation. ASA adjudications do not automatically stamp out anything which pushes the boundaries. As an example, a company called Holidayextras paid video site Kontraband to carry a viral ad for internet parking. The ad featured a man speaking with a heavy Irish accent who was running a dodgy car parking operation. He stumbled out of a caravan (beside which were a fence and a sign saying 'ca parkin') and swore as he chased off children and threw a chair at them. Throughout the ad, subtitles appeared which were a more polite interpretation of his words (for example, he appeared to kick a car and punch the driver and the subtitles stated "Just pop it in the space over there please Parker"; "There's a good chap"). More extreme behaviour and questionable practices followed, and at the end of the ad, a car was shown on fire. From his caravan the man phoned the customer, saying "there's been a slight problem with your Mondeo". The ASA did not uphold a complaint that the ad was offensive to Irish people and Romany travellers. They noted the ad was intended to show a humorous contrast between a fictional caricature and a company that valued security. Although the character spoke with a heavy Irish accent and ran his business from a caravan, because he displayed extreme behaviour from which the humour in the ad was derived, they did not consider the ad suggested that behaviour was typical of Irish or Romany communities. Whilst they understood that some people could find the ad in poor taste, they concluded it was unlikely to cause serious or widespread offence.

False Endorsements

It is unlikely that the ASA will regulate third party endorsements of an advertiser's products which appear on social media, unless the advertiser paid the media provider, or actively participated with it, to put them there. As noted above, only ads in 'paid-for' space currently fall within the ASA's remit, although this may change imminently.

However, advertisers who place ‘paid for’ ads containing endorsements should be aware that, according to the CAP Code, they should obtain written permission before referring to/portraying members of the public or their identifiable possessions, referring to people with a public profile or implying any personal approval of the advertised products. They should also hold signed and dated proof (including a contact address) for any testimonial they use. Unless they are genuine opinions from a published source, testimonials should be used only with the written permission of those giving them.

Advertisers should take particular care not to represent falsely that a celebrity has endorsed their products or services, as they could be vulnerable to a claim for passing off (regardless of whether the endorsement appears in paid-for space). Unlike in most other jurisdictions, it is possible under English law to use the images of dead and living celebrities without consent, provided there is no implied endorsement or infringement of any trade mark. The danger with the Internet, however, is that material may be accessible in jurisdictions outside the UK, and therefore using the image of celebrities without permission in the online environment carries a greater degree of risk than in more traditional media.

Passing Off

Passing off is a cause of action under English common law. It occurs where consumers are misled by someone who is making use of another person’s reputation, and can take two forms:

  • direct passing off, where an individual falsely states that his goods or services are those of someone else (for example, if someone were to set up a fake YouTube site);
  • indirect passing off, where someone is promoting or presenting a product or service as impliedly associated with, or approved by, someone else when that is not the case (for example, where an advertiser produces a fake viral which appears to show a celebrity using their product; liability could result even if lookalikes or soundalikes are used).

Consumer Protection from Unfair Trading Regulations 2008

False advertising and word-of mouth marketing on social media could also fall foul of the Consumer Protection from Unfair Trading Regulations 2008 (which implement the EU Unfair Commercial Practices Directive in the UK). The regulations include a general prohibition on unfair business to consumer commercial practices which is so wide that its application could extend to a variety of commercial practices on social media. The regulations also legislate against misleading actions/omissions and aggressive commercial practices, and set out prohibitions on 31 specific practices that will be deemed unfair in any circumstances. Several of these could be relevant to commercial activity on social media. As an example, prohibition 11 prevents traders from using editorial content in the media to promote their products or services without making it clear that the promotion has been paid for. The prohibitions apply to any ‘trader’, i.e., a natural or legal person acting in the course of his trade, business, craft or profession. Contravention can lead to criminal penalties. This does not bode well for so-called ‘street teams’ as used by some brands to promote products. Street teams are often young people who are employed on a part-time basis to eulogise about a particular brand or product on social media platforms. Often difficult to spot, street teams can be hugely effective at driving brand equity because consumers do not realise that they are being targeted – instead, they believe that they are truly on the receiving end of genuine word-of-mouth recommendations.

Advertisers may also find the Word of Mouth Association UK Code of Ethics useful (see http://womuk.net/ethics/). The Word of Mouth Marketing Association ("WOMMA") and WOM UK are the official trade associations that represent the interests of the word of mouth and social media industry. The Code sets standards of conduct required for members that include sensible guidelines on the disclosure of commercial interests behind online commercial activities and social network sites.

The Business Protection from Misleading Marketing Regulations 2008

The Business Protection from Misleading Marketing Regulations 2008 prohibit misleading advertising and set out rules for comparative advertising. Advertising is defined as 'any form of representation which is made in connection with a trade, business, craft or profession in order to promote the supply or transfer of a product'. This broad definition could clearly cover false advertising and word-of-mouth marketing (as well as other content) on social media. A trader who falls foul of the regulations can be punished by a fine (or imprisonment for engaging in misleading advertising). A trader is defined as any person who is acting for purposes relating to his trade, craft, business or profession and anyone acting on their behalf. There is a defence for the 'innocent' publication of advertisements.

Social networking: a new form of advertising regulation?

The most effective means of controlling advertiser activity in the modern world is consumers' ability to voice their discontent.

Sometimes social networking sites may enable consumers to send a message to advertisers where the regulator can't. In January 2010, more than a thousand people joined a Facebook campaign to ban a UK billboard advertising a website for those looking for "extramarital relations". The ASA had rejected a complaint about the billboard on the grounds that the ad would not cause "serious or widespread offence" and said that its remit was to examine the ad in isolation, rather than the product it was promoting, which is a legally available service. At the time of writing, the group had over 2,700 members.

Equally, the damage that can occur when a brand misleads the public can much more easily be broadcast to a wider audience via social networking and blogging sites.

Defamation and Harmful Speech: Managing Corporate Reputations

The U.S. position

In addition to confronting issues involving online brand management generally and word-of-mouth advertising specifically, corporations face similar challenges in protecting reputation, including risks associated with disparagement and defamation.

The architectures of the Internet and social media make it possible to reach an unlimited audience with a flip of the switch and a push of the send button—and at virtually no cost. There are few barriers to people speaking their mind and saying what they want. Furthermore, because of the anonymity social media allows, users are increasingly choosing to express themselves with unrestrained, hateful and defamatory speech. These tendencies, encouraged exponentially by the technology and the near-zero cost of broadcasting one’s mind, are likely to be further exacerbated under circumstances such as the current economic crisis, where people are experiencing extraordinary frustration and fuses are short.

Words can hurt. Defamation can destroy reputations. For individuals, false postings can be extraordinarily painful and embarrassing. For corporations, which are increasingly finding themselves victims of defamatory speech, a false statement can mean loss of shareholder confidence, loss of competitive advantage, and diversion of resources to solve the problem. While the traditional laws may have provided remedies, the challenges to recovering for these actions that occur over social media are enormous because the operators of the media that facilitate defamatory postings are frequently immune from liability. (Of course, if a corporation is the operator of a blog or other social media, there will be some comfort in the "immunities" offered to operators of these media.) The immunity under the applicable federal law, the Communications Decency Act (the "CDA"), and some other key issues associated with online defamation are discussed below.

Defamation Generally

Although the law may vary from jurisdiction to jurisdiction, to make a case for defamation, a plaintiff must generally prove: “(a) a false and defamatory statement concerning another; (b) an unprivileged publication to a third party; (c) fault amounting at least to negligence on the part of the publisher; and (d) either actionability of the statement irrespective of special harm or the existence of special harm caused by the publication.”[34] Defamation cases are challenging to litigate. It should be noted that in the United States, the First Amendment sharply restricts the breadth of the claim. Defamation cases frequently carry heightened pleading requirements and a shortened statute of limitations. If the victim is an individual and a public figure, he or she will have to prove malice on the part of the defendant to make a successful case. Finally, the lines between opinion and fact are frequently very hard to draw and keep clean.

Anonymous Speech

Online defamation presents added complications. Online, and in social media specifically, the source of the harmful communication is frequently anonymous or communicating through a fake profile. As a first line of attack, piercing the speaker's anonymity can be challenging because of heightened standards under First Amendment and privacy laws. A plaintiff victim will often file his case as a Jane or John "Doe" case and seek to discover the identity of the defendant right after filing. The issue with this approach is that many courts are requiring the plaintiff to meet heightened pleading and proof standards before obtaining the identity of the defendant. Effectively, if the plaintiffs can't meet the heightened pleading standard to obtain the identity of the defendant, they will be unable to pursue their cases. In one leading case, the New Jersey Appellate Court established a test that requires the plaintiff "to produce sufficient evidence supporting each element of its cause of action on a prima facie basis," after which the court would "balance the defendant's First Amendment right to anonymous speech against the strength of the prima facie case presented and the necessity for the disclosure."[35]

Special Challenges: Service Provider Immunity

As noted above, the challenges to the corporate victim are compounded by the fact that its remedies against the carrier or host (the website, blog, search engine, social media site) are limited. The flipside, of course, is that corporations may have greater room in operating these kinds of sites and less exposure—at least for content that they don’t develop or create. (See Chapter 1 – Advertising) A blogger will be liable for the content that he creates, but not necessarily for the content that others (if allowed) post on his blog site.

Early case law held that if a site operator takes overt steps to monitor and control its site and otherwise self-regulate, it might be strictly liable as a publisher for a third party's defamation even if the operator had no knowledge of the alleged defamatory content. Arguably, this encouraged site operators not to monitor and self-regulate.[36] Other early case law also held that if the operator knew about the defamation, it would be liable if it did not do something to stop the conduct.[37] These holdings arguably created an incentive to take down any potentially dangerous information to avoid liability—and thus, according to some, threatened to chill speech and dilute a robust exchange of ideas.

All of these early cases were overruled in 1996 by the CDA.[38] Section 230(c) of the CDA provides as follows: "[n]o provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider."[39] The term "information content provider" means "any person or entity that is responsible, in whole or in part, for the creation or development of information provided through the Internet or any other interactive computer service."[40] Under Section 230(c), the operator, so long as it does not participate in the creation or development of the content, will be "immune" from a defamation claim under the statute.

The CDA makes it challenging to attach liability to a website, blog, social media platform or other electronic venue hosting offensive communication. Under U.S. law, these service providers have a virtual immunity, unless they participate in the creation or development of the content. Cases involving social media make the breadth of the immunity painfully clear. In Doe v. MySpace, Inc.,[41] a teen was the victim of a sexual predator as a result of conduct occurring on MySpace. The teen's adult "next friend" sued MySpace for not having protective processes in place to keep young people off the social media site. In effect, the suit was not for harmful speech, but for negligence in the operation of MySpace.[42] The Texas District Court rejected the claim, and in doing so highlighted the potential breadth of the "immunity":[43]

The Court, however, finds this artful pleading [i.e., as a "negligence" claim] to be disingenuous. It is quite obvious the underlying basis of Plaintiffs' claims is that, through postings on MySpace, Pete Solis and Julie Doe met and exchanged personal information which eventually led to an in-person meeting and the sexual assault of Julie Doe…. [T]he Court views Plaintiffs' claims as directed toward MySpace in its publishing, editorial, and/or screening capacities. Therefore, in accordance with the cases cited above, Defendants are entitled to immunity under the CDA, and the Court dismisses Plaintiffs' negligence and gross negligence….

It is not clear that other courts would interpret the CDA as broadly as did the Texas court. Indeed, the breadth of the CDA remains highly disputed among the courts, academics and policymakers who raise the prospect of amending the law from time to time.

Companies that operate their own blogs or other social media platforms, such as a Twitter page, can generally avoid liability for speech torts on their sites if they stick to traditional editorial functions—and do not allow those activities to expand into any conduct that could be interpreted as creation and development of the offensive content.[44] Although exercising editorial control is not penalized, the question confronting the courts is the point at which a company goes beyond editing or beyond providing a forum, and into the realm of creation and development.[45]

Where “creation and development” begins and ends may not always be a bright line. For example, the mere reposting of another “content provider’s” content is arguably safe and within the editorial province of the social media operator. Although not completely free from doubt, it appears that a blog operator can receive a potential posting, review the content for editorial concerns, and then post it without the content thereby becoming the operator’s creation.[46] Some courts hold that the operator’s reposting to third-party sites is still within the grant of the immunity. In Doe v. Friendfinder Network, Inc., for example, the community site caused the defamatory postings to be transmitted to search engines and advertisers and other linked sites. Holding that Section 230 protected that conduct, the court noted: “Section 230 depends on the source of the information in the allegedly tortious statement, not on the source of the statement itself. Because ‘petra03755’ was the source of the allegedly injurious matter in the profile, then, the defendants cannot be held liable for ‘re-posting’ the profile elsewhere without impermissibly treating them as ‘the publisher or speaker of [ ] information provided by another information content provider.’ … 47 U.S.C. § 230(c)(1).”[47] It is worth emphasizing that the Section 230 bar applies to providers “or users” of interactive computer services.[48] Significantly, there is at least an argument that re-tweeters (as “users”) are protected under the statute.

Plaintiffs continue to reach for creative attacks on Section 230. In Finkel v. Facebook, Inc., et al.,[49] the victim of alleged defamatory statements claimed that Facebook's ownership of the copyright in the postings barred its right to assert Section 230. The plaintiff urged, in effect, that the defendant could not claim ownership of the content and simultaneously disclaim participation in the "creation and development" of that same content. Rejecting this argument, the New York trial court stated that "'[o]wnership' of content plays no role in the Act's statutory scheme."[50] Furthermore, the court reiterated that the Congressional policy behind the CDA is served "by providing immunity even where the interactive service provider has an active, even aggressive role in making available content prepared by others."[51] The court was clear in dismissing the complaint against Facebook where the interactive computer service did not, as a factual matter, actually take part in creating the defamatory content.

This is an important decision. Many sites assume ownership of content through their terms of use, and a contrary ruling would materially restrict application of the CDA in those cases. Further litigation is likely in this area.

Some courts have explored plaintiffs’ assertions of service provider “culpable assistance” as a way of defeating the provider’s CDA defense. In Universal Comm’n Sys., Inc. v. Lycos, Inc.,[52] the plaintiff argued that the operator’s immunity was defeated by the construct and operation of the website that allowed the poster to make the defamatory posting. The First Circuit rejected the argument for a “culpable assistance” exception to the CDA under the facts as presented, but left open the possibility of such an exception where there was “a clear expression or other affirmative steps taken to foster unlawful activity.”[53]

This result is consistent with the Ninth Circuit's decision in Fair Housing Council of San Fernando Valley v. Roommates.com, LLC.[54] In that case, involving an online housing service, the court held that the CDA did not provide immunity to Roommates.com for questions in an online form that encouraged illegal content. Roommates.com's services allowed people to find and select roommates for shared living arrangements. The forms asked people questions relating to their gender and sexual orientation. Although Roommates.com clearly did not provide the content in the answers, the Ninth Circuit held that it was not entitled to immunity. The majority ruled that Roommates.com was not immune for the questionnaire itself or for the assembling of the answers into subscriber profiles and related search results using the profile preferences as "tags." The court noted that the questions relating to sexual preferences posted by Roommates.com were inherently illegal and also caused subscribers to post illegal content themselves by answering the questions.[55] Although the case evoked a sharp dissent defending a strong immunity, the clear take-away from the Roommates.com decision is that the immunity is far from absolute.[56]

Entities that operate social media sites need to be especially careful not to allow their “editing” to turn into creation and development of content. Although these issues are far from settled, any embellishments and handling of posted content should be approached cautiously and only in the context of traditional editorial functions.

CDA Immunity: Scope of the IP Exception

One important issue dividing the courts is the scope of the immunity as it relates to intellectual property. Specifically, although the CDA confers a broad protection on service providers, it also provides that it "shall [not] be construed to limit or expand any law pertaining to intellectual property."[57] In other words, a blog operator, for example, cannot assert a CDA defense to claims that, although involving speech, are rooted in harm to the victim's intellectual property. If the victim asserts against the operator a claim for copyright infringement based on a blogger's uploading of protected material on to the blog (clearly involving "speech"), the operator has no CDA defense. The victim and the operator will have to resolve their claims under the copyright law, and particularly the Digital Millennium Copyright Act. Likewise, if the victim asserts a claim under Section 1114 of the Lanham Act that its federally registered trademark is being wrongfully used on the blog, the operator arguably cannot rely on the CDA as a shield against liability.[58]

The courts differ over the scope of the intellectual property exception to immunity, and specifically over the definition of intellectual property for purposes of the statute. In Perfect 10, Inc. v. CCBill, LLC,[59] the court opted for a narrow reading of “intellectual property” and hence a broader scope for the immunity. Specifically, the Ninth Circuit “construe[d] the term ‘intellectual property’ to mean ‘federal intellectual property.’”[60] Accordingly, without determining whether the state law claims truly involved “intellectual property,” the Ninth Circuit held that the intellectual property exception does not, as a threshold matter, apply to state law claims, and therefore affirmed dismissal of various state law claims on CDA grounds.

On the other hand, some courts have opted for a broader reading of “intellectual property” that would have the exception cover relevant state law. For example, the court in Doe v. Friendfinder Network, Inc. determined that intellectual property under the CDA exception encompasses applicable state law and, on that ground, refused to dismiss the plaintiff’s right of publicity claim against the website operator.[61]

Reporter’s Privilege

Application of existing rules to new technologies can raise yet more hurdles in speech cases. For example, suppose false information about your company appears on a blog, or that some bit of confidential information appears. As part of damage control, you may want to find the source, or compel the blog to disclose the source. This leads to an interesting question: to what extent are blogs actually "newspapers"? The question is one that courts are being forced to consider, because newspapers traditionally have a "reporter's privilege" that allows them to resist revealing their sources. For example, in 2004, Apple faced such an issue with respect to someone who allegedly leaked information about new Apple products to several online news sites. Apple sought the identity of the sites' sources and subpoenaed the email service provider for PowerPage, one of the sites, for email messages that might have identified the confidential source. In 2006, a California Court of Appeal held that the sites' sources were protected from discovery by the constitutional privilege against compulsory disclosure of confidential sources.[62] Courts continue to consider similar issues, and a number of legislative proposals have been introduced at the state and federal level.

Most recently, the New Jersey appellate court considered the issue in Too Much Media, LLC v. Hale,[63] where the court offered useful guidance on the attributes distinguishing providers of information that are "news media" (and thus give rise to a reporter's privilege) from those that are not. In that case, a software provider in the adult entertainment sector brought a defamation claim against the defendant, who operated a blog targeting pornography. The New Jersey court rejected the defendant's assertion of the reporter's privilege in response to the plaintiff's discovery requests for information relating to the sources of certain information posted on the blog. Among other factors, the court noted that the defendant "produced no credentials or proof of affiliation with any recognized news entity, nor has she demonstrated adherence to any standard of professional responsibility regulating institutional journalism, such as editing, fact-checking or disclosure of conflicts of interest."[64] The court went on to note: "[a]t best, the evidence reveals defendant was merely assembling the writings and postings of others. She created no independent product of her own nor made a material substantive contribution to the work of others."[65]

Ratings Sites

Social media has given rise to a proliferation of ratings sites. Many businesses are beginning to feel the effects of online negative reviews. The ratings sites themselves, however, need to tread carefully because the negatively affected businesses are jumping at the chance to shift their losses back to the ratings site.

Traditionally, ratings sites have two primary defenses.

First, to the extent that the site operator itself generates the ratings, the site operator's system and/or list may be protected under the First Amendment as its "opinion." Second, to the extent that the site is carrying the ratings of third parties, the ratings site operator is protected under Section 230 of the Communications Decency Act for the tortious speech of the third parties who blog their ratings on the site (e.g., defamatory ratings).

The cases supporting an opinion defense reach back to cases challenging securities and credit ratings, such as Jefferson County Sch. Dist. No. R-1 v. Moody's Inv. Services, Inc.[66] In Search King Inc. v. Google Technology, Inc.,[67] which relied on Jefferson County Sch. Dist., Search King allegedly promoted an advertising business that identified highly ranked sites and then worked out deals with those sites to sell advertising on behalf of other companies. Google allegedly disapproved of Search King's business model (which capitalized on Google's PageRank ranking system) and responded by moving Search King itself to a lower page rank—causing it to move off the first page for certain queries. Rejecting Search King's claim for interference with business advantage on the grounds that Google's PageRank algorithm is protected opinion, the court found that manipulating the results of PageRank was not actionable because there was "no conceivable way to prove that the relative significance assigned to a given web site is false."

Cases involving credit and securities ratings continue to be worth monitoring as relevant precedent for Internet ratings cases. In one of the cases growing out of the recent sub-prime crisis against Moody's, Standard and Poor's and other securities ratings agencies, a New York federal court rejected "the arguments that the Ratings Agencies' ratings in this case are nonactionable opinions. 'An opinion may still be actionable if the speaker does not genuinely and reasonably believe it or if it is without basis in fact.'"[68] Rejecting the argument that Jefferson County Sch. Dist. mandated a different result, the court noted that even under that case "'[i]f such an opinion were shown to have materially false components, the issuer should not be shielded from liability by raising the word 'opinion' as a shibboleth.'"[69]

In the context of Internet ratings sites, it remains to be seen just where courts draw the line at such material false components, but ratings companies are obviously well advised to tailor their public statements and documents very precisely to their actual practices.

Ratings sites will have to be careful about not taking action that causes them to lose their immunity under Section 230. As an example of the kinds of cases to watch for, Yelp was recently sued in various class actions for allegedly manipulating the appearance of consumer reviews in instances in which the business reviewed had not purchased advertising from Yelp.[70] Yelp purports to help people find the "right" local business by listing consumer reviews; in order to correct for unduly malicious or biased reviews, all reviews are filtered through Yelp's algorithm. Plaintiffs have claimed that Yelp circumvented the algorithm—suppressing positive reviews and emphasizing negative ones—in cases in which the reviewed business refused to buy advertising. Yelp has vigorously denied the allegations and is also waging a thoughtful collateral campaign through social media (including a YouTube video on how its filtering works).

These claims are demonstrative of the kinds of claims ratings sites are likely to face. If this kind of conduct was in fact endemic to the site, the plaintiffs would have a basis to argue against Section 230 immunity generally.

Defamation Law in England

The UK position

Generally speaking, the English courts are less vigorous in their defence of free speech than their American counterparts. There is no equivalent to the First Amendment in England. The outcome of a defamation case is decided by balancing the right to free speech against the right to reputation. Under the European Convention on Human Rights (which has been incorporated into UK law), these rights are of equal value.

As a result of the greater protection given to reputation in comparison with other jurisdictions (such as the United States), the UK has become the forum of choice for many defamation claimants.

To prove defamation under English law, the claimant must show that a statement:

  • is defamatory (i.e., is a statement which tends to lower the claimant in the estimation of right-thinking members of society generally);
  • identifies or refers to the claimant; and
  • is published by the defendant to a third party.

A number of claims have already been made under UK defamation law in respect of social networking sites. In Applause Store Productions and Firsht v Raphael (2008), the defendant, a former friend of Matthew Firsht, set up a Facebook profile in Firsht's name and a Facebook group entitled 'Has Matthew Firsht lied to you?'. This contained defamatory material suggesting that he and his company had lied to avoid paying debts. This was found to be libellous and damages of £22,000 were awarded. The judge took into account the likelihood of a high level of hits on the webpage – here it could be accessed by the Facebook London group, which had around 850,000 members.

The rise of social media has resulted in a prevalence of 'hate' sites – blogs or Facebook groups specifically set up to promote the 'hatred' of a celebrity or a company. There is therefore plenty of scope for defamation claims, but statements on these sites will not always be defamatory. In Sheffield Wednesday Football Club Limited v Neil Hargreaves (2007), which concerned postings on a football club fan website about the club's management, the judge considered whether the statements could "reasonably be understood to allege greed, selfishness, untrustworthiness and dishonest behaviour" and were therefore defamatory, or whether the posts were mere "saloon-bar moanings".

One key difference between US and UK defamation law is that the UK does not have the single publication rule – so on the internet, a new cause of action arises every time the website is accessed. This has been criticised because online publishers potentially face unlimited liability in respect of older material which remains on their sites. The government launched a consultation in September 2009 to consider changing this in relation to online publications.

Anonymous Speech

A Norwich Pharmacal order is an order which the UK courts may make requiring a third party to disclose information to a claimant or potential claimant in a legal action. Where a third party is involved in the wrongful acts of others (whether innocently or not), they have a duty to assist the party injured by those acts, and so a court will order them to reveal relevant information.

Norwich Pharmacal orders can be used to require social networking sites to disclose the identities of site users. For example, in the Sheffield Wednesday case referred to above, the High Court ordered the operator of the football club fan website to disclose the identities of four users of the site who had posted the allegedly defamatory messages concerning the club's management. A similar order was obtained against Facebook in the Applause Store case referred to above.

Service Provider Immunity

EC Directive 2000/31/EC (the E-commerce Directive) states that Internet service providers (“ISPs”) providing hosting services receive partial immunity from defamation (and other) actions (Article 14). An ISP will be immune if it does not have actual knowledge of illegal activity or information, or knowledge of the facts or circumstances from which it is apparent that the activity or information is illegal.

An ISP will lose immunity if, on obtaining knowledge of the illegal activity, it fails to act expeditiously to remove or to disable access to the information.

Section 1 of the English Defamation Act 1996 provides a similar defence where a secondary publisher took reasonable care in relation to the publication of the statement, and did not know and had no reason to believe that what he did caused or contributed to the publication of a defamatory statement.

As a result of these provisions and cases which interpret them, ISP immunity in the UK is much narrower than in the United States. ISPs can lose their immunity if they know or ought to know about infringing statements and are therefore more likely to take action to remove possibly defamatory statements.

In Godfrey v Demon Internet Limited (1999, pre-dating the E-Commerce Directive), a defamatory statement was posted on a Usenet newsgroup and the ISP was named as a defendant. The claimant sent the ISP a fax informing it of the defamatory statement and requesting its removal. The defendant ignored this and allowed the statement to remain for a further 10 days. It was held that the ISP was a common law publisher of the material and, because it knew of the offending statement but chose not to remove it, it placed itself in an ‘insuperable difficulty’ and could not benefit from the s1 defence in the Defamation Act.

However, an ISP who does not host the information or have an involvement in initiating, selecting or modifying the material, and effectively acts only as a conduit, will have a defence under the Defamation Act and the E-Commerce Regulations. This was demonstrated in the case of Bunt v Tilley (2006), where a number of ISPs were absolved from liability in respect of defamatory postings on newsgroups.

It is in an ISP’s interests to remove defamatory material quickly if it wishes to remain immune. For example, after the Godfrey case above, the ISP removed the comments and suspended newsgroup access for certain members until they signed a form of indemnity. Similarly, another ISP, Kingston Internet Limited, shut down an ‘anti-judge website’ after the Lord Chancellor’s Department wrote to complain. However, by requiring ISPs to act in this way, the law arguably goes beyond what is necessary, tipping the scales too far in favour of protection of reputation at the expense of free speech.

Protection of Sources

Like the U.S., the UK has laws which protect journalistic sources. However, unlike the U.S., protection is not afforded only to newspapers. The relevant provision (section 10 of the Contempt of Court Act 1981) states that ‘no court may require a person to disclose, nor is any person guilty of contempt of court for failing to disclose, the source of a publication for which he is responsible, unless it is established to the satisfaction of the court that disclosure is necessary in the interests of justice or national security, or for the prevention of disorder or crime’. This wording clearly extends beyond journalists and could apply to social media. However, as the public policy reasoning behind the section may carry less weight for many publications on social media, a court may be more ready to find that disclosure is necessary.

Bottom Line—What You Need to Do

Clients who are victims of speech torts must be prepared to act, but they must use the right tool when the problem arises. The available tools range from a conscious choice to do nothing, to responding with a press release, to responding on the company’s own blog, Facebook fan page and/or Twitter page, to engaging a reputation management company (for example, one that uses search engine optimisation techniques to reduce the visibility of negative comment). The negative publicity associated with disparaging comments can be greatly exacerbated by “sticky” sites that achieve high rankings on Google, causing, for example, a negative blog posting to appear prominently when a potential customer types your organisation’s name into Google or another search engine. Your organisation is well advised to adopt a multi-pronged strategy: consider the legal options, but also consult search engine and reputation management specialists to see whether there is a communications or technical solution. Litigation, including proceedings to unmask an anonymous speaker, should of course be considered. But a heavy-handed approach may simply make a bad situation worse, and at great expense. Litigation, or even a cease-and-desist letter that finds its way onto an Internet posting, may give your organisation exactly the kind of publicity it does not want.

Frequently, malicious actors will time their communications to coincide with a key corporate event, such as the company’s earnings report, in order to increase the damage from the comment. Gone are the days when the response to an incident could be vetted through a formal legal memorandum to corporate counsel. The damage can be done in literally a matter of hours, and a quick response can make all the difference.[71] Accordingly, it is important for companies to understand the exposures to brand and reputation in social media, to have policies in place for managing internal and external communications in these new media, and to have contingency plans for dealing with reputation and brand disparagement, whether as the responsible party or as the victim, before the event happens, so that the response can be quick and the damage minimal.

Clients who find themselves on the receiving end of a complaint should also be prepared to act quickly in order to mitigate any damage done. In particular, if the websites in question are accessible in the UK, ISPs and other content hosts may lose any immunity they have if they are notified of infringing material and take no action.



[1]      See, Cass R. Sunstein, “On Rumors: How Falsehoods Spread, Why We Believe Them, What Can Be Done” (Farrar, Straus and Giroux 2009).

[2]      The Impact of the Class Action Fairness Act of 2005 on the Federal Courts.

[3]      15 U.S.C. § 45.

[4]      15 U.S.C. § 1125(a).

[5]      15 U.S.C. § 45.

[7]      Kraft, Inc. v. Federal Trade Commission, 970 F.2d 311, 314 (7th Cir. 1992); FTC v. Brown & Williamson Tobacco Corp., 776 F.2d 35, 40 (D.C. Cir. 1985).

[8]      Int’l Harvester Co., 104 F.T.C. 949, 1058 (1984).

[9]      Sandoz Pharmaceuticals v. Richardson-Vicks, 902 F.2d 222, 228 (3d Cir. 1990).

[10]    15 U.S.C. § 45(m)(1)(A) (civil penalty of $10,000 per violation where violator has actual knowledge, or knowledge fairly implied); 15 U.S.C. § 53(b).

[11]    U.S. Healthcare v. Blue Cross of Greater Philadelphia, 898 F.2d 914, 921 (3d Cir. 1990); Johnson & Johnson v. Carter-Wallace, Inc., 631 F.2d 186, 190-91 (2d Cir. 1980).

[12]    Sandoz Pharmaceuticals v. Richardson-Vicks, 902 F.2d 222, 228 (3d Cir. 1990) (“The key distinctions between the FTC and a Lanham Act plaintiff turns on the burdens of proof and the deference accorded these respective litigants. The FTC, as a plaintiff, can rely on its own determination of deceptiveness. In contrast, a Lanham Act plaintiff must prove deceptiveness in court.”).

[13]    U.S. Healthcare, 898 F.2d at 921 (3d Cir. 1990) (quoting 2 J. McCarthy, Trademarks and Unfair Competition § 27:713 (2d Ed. 1984)).

[14]    See, Guides Concerning the Use of Endorsements and Testimonials in Advertising, available at http://www.ftc.gov/opa/2009/10/endortest.shtm (“FTC Guides”) (issued Oct. 5, 2009 and effective Dec. 1, 2009).

[15]    See, e.g., Ramson v. Layne, 668 F.Supp. 1162 (N.D. Ill. 1987).

[16]    FTC Guides, at 5, n.11.

[17]    FTC Guides, § 255.0.

[18]    FTC Guides, at 8.

[19]    15 U.S.C. § 45.

[20]    FTC Guides, § 255.1(d).

[21]    FTC Guides, at 38-39.

[22]    FTC Guides, § 255.1(d).

[23]    FTC Guides, § 255.1(d).

[24]    FTC Guides, at 42.

[25]    Id.

[26]    FTC Guides, at 15.

[27]    Id.

[28]    FTC Guides, at 39.

[29]    FTC Guides, at 40, 42.

[30]    See, 1 McCarthy, Rights of Publicity, § 5:22 (“under the proper circumstances, any person, celebrity or non-celebrity, has standing to sue under § 43(a) for false or misleading endorsements.”), quoted in Doe v. Friendfinder Network, Inc., 540 F.Supp.2d 288, 301 (D.N.H. 2008).

[31]    540 F.Supp.2d 288 (D.N.H. 2008).

[32]    Id. at 305-306; see also, Ting JI v. Bose Corporation, 2009 WL 2562663, at *3, No. 06-10946-NMG (D. Mass, Aug. 12, 2009).

[33]    The CAP Code can be found on CAP’s website at http://www.cap.org.uk.

[34]    Restatement, Second, Torts § 558.

[35]    Dendrite v. Doe, 775 A.2d 756, 760 (N.J. App. 2001); but see, Solers, Inc. v. Doe, 977 A.2d 941, 954 (D.C. 2004) (requiring a prima facie showing but rejecting a balancing test at the end of the analysis); see also, Cohen v. Google, Inc., No. 100012/09 (Unpublished) (New York Supreme Court orders Google’s Blooger.com to disclose identity of anonymous blogger, where plaintiff established the merits of her cause of action for defamation and the information sought was material and necessary to identify potential defendants).

[36]    E.g., Stratton Oakmont v. Prodigy, 1995 WL 323710, at *3 (N.Y. Sup. Ct., May 24, 1995) (Unreported).

[37]    E.g., Cubby v. Compuserve, 776 F.Supp. 135 (S.D.N.Y. 1991).

[38]    47 U.S.C. § 230 (“CDA”).

[39]    47 U.S.C. § 230(c)(1).

[40]    47 U.S.C. § 230(f)(3).

[41]    474 F.Supp.2d 843 (W.D. Tex. 2007).

[42]    In Barnes v. Yahoo!, Inc., 570 F.3d 1096 (9th Cir. 2009), for example, the Ninth Circuit dismissed a claim for negligence where the claim was more clearly tied to a failure to take down offensive speech.

[43]    474 F.Supp.2d at 849.

[44]    See Batzel v. Smith, 333 F.3d 1018 (9th Cir. 2003) (provider’s “minor alterations” to defamatory material not actionable); 318 F.3d 465, 470-71 (3d Cir. 2003); Ben Ezra, Weinstein & Co. v. Am. Online, Inc., 206 F.3d 980, 985-86 (10th Cir. 2000) (rejecting argument that service provider’s deletion of some, but not all, inaccurate data about plaintiff from another source “transforms Defendant into an ‘information content provider’”); Blumenthal v. Drudge, 992 F.Supp. 44, 52 (D.D.C. 1998) (exercise of “editorial control” over defamatory third-party content fell within § 230 immunity); Doe v. Friendfinder Network, Inc., 540 F.Supp.2d 288, 297 and n.10 (D.N.H. 2008) (slight editorial modifications to defamatory profile do not defeat immunity).

[45]    See, Anthony v. Yahoo! Inc., 421 F.Supp.2d 1257, 1262-1263 (N.D. Cal. 2006) (service’s alleged creation of false profiles inducing plaintiff to maintain his membership not barred by Section 230); Hy Cite Corp. v. badbusinessbureau.com, L.L.C., 418 F.Supp.2d 1142, 1149 (D. Ariz. 2005) (service provider’s creation of its own comments and other defamatory content associated with third-party postings defeats Section 230 defense).

[46]    Batzel v. Smith, 333 F.3d 1018 (9th Cir. 2003); Zeran v. Am. Online, Inc., 129 F.3d 327, 330 (4th Cir. 1997) (right to exercise traditional editorial functions, including “whether to publish, withdraw, postpone or alter”).

[47]    540 F.Supp.2d at 295-96 (emphasis in original).

[48]    See Barrett v. Rosenthal, 50 Cal.4th 33, 146 P.3d 510 (2006) (noting § 230(c)(1) protects any “provider or user” (emphasis added); California Supreme Court holds individual user of social media immune from liability for reposting a message she received electronically from another “content provider”).

[49]    2009 WL 3240365, No. 102578/09 (N.Y. Sup. Sept. 15, 2009).

[50]    2009 WL 3240365, at *1.

[51]    2009 WL 3240365, at *1 (citing Blumenthal v. Drudge, 992 F.Supp. 44, 52 (D.D.C. 1998)).

[52]    478 F.3d 413 (1st Cir. 2007).

[53]    Id. at 421.

[54]    521 F.3d 1157 (9th Cir. 2008) (en banc).

[55]    See Nemet Chevrolet, Ltd. v. Consumeraffairs.com, Inc., 591 F.3d 250, 256-257 (4th Cir. 2009) (distinguishing Roommates.com, the Fourth Circuit holds, among other things, that defendant is not encouraging illegal conduct).

[56]    See also, Chicago Lawyers’ Committee for Civil Rights Under Law, Inc. v. Craigslist, Inc., 519 F.3d 666, 669-70 (7th Cir. 2008) (rejecting the notion that Section 230 confers an absolute immunity).

[57]    47 U.S.C. § 230(e)(2).

[58]    See, Doe v. Friendfinder Network, 540 F.Supp.2d at 303 n. 13 (notion that trademark claims are not intellectual property claims, while not before the court, strikes it as “dubious”).

[59]    488 F.3d 1102 (9th Cir.), cert. denied, 128 S.Ct. 709 (2007).

[60]    Id. at 1118-19.

[61]    540 F.Supp.2d at 299-304. Accord, Atlantic Recording Corporation v. Project Playlist, 603 F.Supp.2d 690 (S.D.N.Y. 2009).

[62]    O’Grady v. Superior Court (Apple Computer, Inc.), 39 Cal.App.4th 1423 (Sixth Dist. 2006).

[63]    2010 WL 1609274, A-0964-09 (N.J. Super. A.D., April 22, 2010).

[64]    2010 WL 1609274, at *11.

[65]    Id.

[66]    175 F.3d 848 (10th Cir. 1999) (affirming dismissal of claims directed to credit ratings based on First Amendment).

[67]    2003 WL 21464568, No. CIV-02-1457-M (W.D. Ok., May 27, 2003).

[68]    Abu Dhabi Commercial Bank v. Morgan Stanley & Co., et al, slip op. 08 Civ. 7508 (SAS) at 34 (S.D.N.Y. Sept. 2, 2009), quoting In re IBM Corp. Sec. Litigation, 163 F.3d 102, 109 (2d Cir. 1998).

[69]    Id. at 34-35 n.126 (quoting 175 F.3d at 856).

[70]    Cats and Dogs Hospital v. Yelp, Inc. CV10-1340VBF (C.D. Cal. 2010); Levitt v. Yelp, Inc., CGC-10-497777 (Superior Court, San Francisco, 2010).

[71]   Clifford, “Video Prank at Domino’s Taints Brand,” http://www.nytimes.com/2009/04/16/business/media/16dominos.html (April 15, 2009).
