140 defamatory characters posted on Twitter could cost tens of thousands of dollars in damages, the New South Wales District Court has held. The case highlights the complexities of defamation law in a world where everyone is a publisher and information is disseminated across the globe at the click of a button.
In the recently handed down decision of Mickle v Farley, 20-year-old Andrew Farley was sued by Ms Christine Mickle, a highly regarded music teacher who taught at the same school. Mr Farley believed Ms Mickle was responsible for his father (the previous head teacher of music) leaving the school, and posted multiple allegations on both Twitter and Facebook. The comments were false, as his father had left the school in 2008 “in order to attend to personal issues.” The suggestion that she was responsible for the harm or ill-health of the father caused distress to Ms Mickle, who subsequently took a year of sick leave.
The case itself is unremarkable as regards the current state of defamation law in Australia, except that it is Australia’s first Twitter defamation judgment. Judge Elkaim awarded $85,000 in compensatory damages, commenting:
“when defamatory publications are made on social media it is common knowledge that they spread. They are spread easily by the simple manipulation of mobile phones and computers. Their evil lies in the grapevine effect that stems from the use of this type of communication.”
Judge Elkaim additionally awarded $20,000 in aggravated damages for Mr Farley’s uncooperative behaviour. The case shows that social media users are accountable for their actions, even though Mr Farley had a mere 60 Twitter followers and 50 Facebook friends.
The Social Media Legal Landscape in Australia:
Whilst Andrew Farley’s case produced the first Twitter defamation judgment in Australia, it was not the first Twitter dispute to reach the courts. In 2012 music reviewer Joshua Meggitt sued Marieke Hardy and Twitter, but the matter settled out of court. That otherwise unremarkable case demonstrated the law responding to technological change.
Companies too must, of course, be careful, as there is a history of liability for failing to remove posts written by others on their Facebook pages. This matters because control of, and responsibility for, the page rests with the company – even for content it did not produce. Even Google could not escape publisher liability: in late 2012 the search engine was found liable for defamation after it failed to remove defamatory search results despite several requests from Mr Trkulja.
Defamation Law in Australia and Beyond:
As defamation law is usually more concerned with where content is downloaded rather than where it is uploaded, the internet has made the law more complex. Because content can be viewed or downloaded anywhere, amateur and professional writers are now exposed to defamation laws across the globe. In Australia, a person can sue for defamation in the state or territory where his or her reputation is established, even when the content was published overseas. This pick-and-choose system favours people who believe they have been defamed: bringing an action in a country with stricter defamation laws increases the chance the material will be held defamatory, while that chance is lower in countries that strongly protect free speech. In reality, most potentially defamatory comments never eventuate into lawsuits because of the cost and time involved, and few are pursued across multiple jurisdictions.
Defamation has always been a topical issue, but as the law evolves to meet the challenges of the social media age, it has become more relevant to everyday people. There is a misconception that social media is treated differently from traditional forms of media; in reality it largely is not, and whilst people may let their guard down in what they say on their Facebook pages, careless posts can have unintended ramifications. Judge Elkaim’s words should be a warning to those with a propensity for hot-headed tweeting, or perhaps even a careless fib. The young Andrew Farley found himself owing more than $100,000 plus significant legal costs, an outcome he was unlikely to have contemplated at the time of his tweeting.
Poor Qantas. In recent times the airline has suffered many a social media mishap. Back in 2011 its Twitter hashtag #qantasluxury was hijacked by unhappy customers, who delivered an unprecedented number of cuttingly sarcastic and highly critical responses. In 2012 the airline battled to remove a snarky parody PR account from Twitter. The most recent incident occurred in July this year, when a hardcore pornographic image was displayed for about seven hours on the Qantas Facebook page, much to the shock of an eight-year-old boy and his father.
Of course Qantas is only one of many brands to suffer at the hands of social media. Australia’s Next Top Model recently had its promotional hashtag #antmselfie hijacked by feminist group Collective Shout, which claimed the competition was superficial and encouraged sexualised behaviour. The group’s actions drew attention to photo entries from girls as young as nine.
So what can you do to minimise the legal risks and avoid being featured in one of the many online articles gleefully titled ‘Companies that have made Huge Social Media Mistakes’?
Australian Law and Guidelines
In Australia, social media moderation is a hotly contested subject. The Australian Competition and Consumer Commission (ACCC) and a number of industry bodies have released somewhat conflicting guidelines, as summarised below.
The ACCC has made it clear that it considers content on social media sites to be advertising and/or marketing communications. Importantly, this means that competition law applies to such content.
Accordingly, brands have a responsibility to ensure content on their social media pages is accurate, irrespective of who put the content there. Brands will be held responsible for user posts or public comments made on social media pages which are false, misleading or deceptive if the brand knows about them and decides not to remove them.
With regard to moderation generally, the ACCC says that the amount of time a company needs to spend monitoring its social media pages depends on the size of the company and the number of fans or followers it has.
Australian Association of National Advertisers (AANA)
The AANA has stated that its self-regulatory codes apply equally to digital and traditional media.
For brands that are interacting and participating actively on a digital platform, the AANA Best Practice Guideline ‘Responsible Marketing Communications in the Digital Space’ recommends brands moderate at least once every business day. Brands should also moderate immediately after posting or engaging online and for at least 2 hours following a post.
Interactive Advertising Bureau (IAB)
Unlike the ACCC and the AANA, the IAB believes that user comments directed towards a social media platform do not constitute advertising. However, user comments can be converted into promotional statements through an organisation’s direct endorsement or expression of agreement. Further, the risk of an organisation becoming responsible for a user comment on its social media platforms increases once it has been made aware of the comment and it has had the opportunity to review it and take appropriate action.
In its new publication ‘Best Practice for User Comment Moderation’, the IAB suggests that companies moderate comments to the extent their resources allow. At a minimum, this should involve reviewing and moderating recently published comments at the same time as posting a new comment. The IAB notes that brands should increase their moderation if they are engaging in online interaction that is provocative and designed to elicit controversial responses.
So what should you do?
When it comes to social media moderation, a common-sense approach is best. Brands should moderate their pages regularly, taking into account the extent and activeness of their social media presence. For large international companies such as Qantas, this may mean constant monitoring, 24 hours a day, seven days a week. For smaller brands, it may mean moderating once per business day. All brands should remove posts that are, or are likely to be, false, misleading or deceptive, defamatory or offensive, or which breach intellectual property laws, as soon as the brand becomes aware of them.
Most importantly, all companies should have in place:
- A social media policy, which sets out employer expectations around professional and private use of social media;
- Community manager guidelines, which set out clear company policies and practices around moderation and the removal of offensive or illegal content;
- House rules/community guidelines, which set out the standards expected from community users; and
- A crisis management plan, in case something does go amiss.
Of course, in an ideal world you will never have to use that crisis management plan!
The industry has been abuzz with the ruling from the Advertising Standards Bureau (ASB) that content on a brand’s Facebook page is considered to be advertising and/or marketing communications. Many of our clients have been calling to enquire about the consequences for the industry and how the ruling may impact their or their clients’ Facebook pages.
The decision by the ASB is in line with a 2011 decision of the Federal Court. In Australian Competition and Consumer Commission v Allergy Pathway Pty Ltd (No 2) [2011] FCA 74, the Federal Court held that health company Allergy Pathway was liable for third-party postings on its social media pages because it had control over those pages, knew that misleading testimonials had been posted on Facebook and Twitter, and took no steps to remove them.
The recent determination of the ASB regarding Diageo’s Smirnoff Facebook page has extended the reach of the Federal Court decision by holding that the provisions of the Advertiser Code of Ethics (the Code) apply to an advertiser’s Facebook page – to content generated by the advertiser as well as to material or comments posted by users or friends. The ASB found that a Facebook page falls within the definition of advertising and marketing communications under the Code and is not merely a networking tool used by existing customers.
In a decision of the ASB on the same day as the Smirnoff decision, the ASB found that the VB Facebook page had breached various provisions of the Code. Again the ASB found that the Facebook page was a marketing communication tool and that the provisions of the Code applied to content created by VB as well as content posted by users or friends. Importantly, the ASB noted that the VB Facebook page user comments, identified in the complaint, were posted in reply to questions posted by VB.
The above decisions closed a perceived loophole, which allowed brands to benefit from social media without accepting responsibility for content posted by advertisers or customers on Facebook, which would have otherwise been inconsistent with the Code or a breach of the Competition and Consumer Act 2010 (Cth) (the Act). In particular, s18 of the Australian Consumer Law in Schedule 2 of the Act, prohibits misleading or deceptive conduct.
In an article published by the Canberra Times, the Australian Competition and Consumer Commission (ACCC) backed the determination of the ASB. The competition watchdog sent a warning to large companies with a wealth of resources at their fingertips: if comments are not removed within 24 hours, the company will face potential court action.
If the ACCC’s past modus operandi is anything to go by, prosecution of a large company that disregards the ACCC’s stated view will usually follow.
While the ASB has so far refused to issue specific guidelines on social media policy, here at von Muenster we expect that the ACCC will release industry guidelines in the near future.
What does all this mean for agencies and their clients?
Community managers will now have to be vigilant in monitoring their social networking pages to ensure that content posted by any person does not breach the Code or contravene the Act. It will be necessary, on a case-by-case basis, to moderate, respond to or even remove content posted by users of a brand’s Facebook page. Community managers should also undertake training on the requirements of the Code and the Act so they can identify problematic third-party posts. This training should cover not just infringements of the Code or the Act but also other applicable laws, including defamation, copyright, trademarks, causing offence and racial discrimination.
It is difficult to provide a precise formula or guide as to what content should be left, moderated or removed. Different rules apply under the self-regulatory Code and under applicable laws, such as the Australian Consumer Law, which forms part of the Act. Each situation turns on its own facts and circumstances, and one will often have to consider numerous factors, including the nature of the Facebook page and the advertiser; the nature of the products; the audience engaging with the page; the effect of other related marketing communications; and the overall context. It is important to seek legal advice on a case-by-case basis if unsure.
At the end of the day, a common-sense approach will need to be taken. If a post or content is suspicious, offends, is blatantly wrong or could amount to a misrepresentation to other Facebook page users, then you need to ask the question: ‘do I moderate, remove or leave the post?’ The Code and other applicable industry codes (for example the ABAC alcohol code) are quite straightforward and should readily be able to be applied to what is being posted on Facebook pages.
Section 18 of the Australian Consumer Law is a little more complicated. As stated above, this section prohibits misleading or deceptive conduct and is designed to protect consumers. It applies to Facebook and other social media sites, including posts by users (see Australian Competition and Consumer Commission v Allergy Pathway Pty Ltd, discussed above). Not all posts that are incorrect or inflated will be misleading. A post needs to be considered against the Facebook page as a whole – the other posts, the posts by the advertiser and the context. The audience of the Facebook page needs to be considered, and the message being conveyed to that audience ascertained – if the message is misleading, or constitutes a misrepresentation to a reasonable member of that audience, then it is possible there will be a breach of section 18.
Personal opinions, puffery – ‘hey this is the best drink in the whole world’ – and other forms of social banter are unlikely to lead others into an erroneous assumption about the brand’s products. It is where a brand puts out a misleading message, or allows a misleading message to develop and the responding posts reinforce or amplify it, that we see possible breaches occurring. The possible spectrum of situations is endless and, again, each Facebook page and situation will need to be assessed on its own merits.
If an organisation is active in social media and engaging on a frequent basis, then for larger organisations with greater resources the expected response time to moderate or remove offending content may be as little as 24 hours; however, this is by no means settled law and awaits judicial pronouncement.
Please get in touch if we can be of any further assistance in helping you and your clients navigate the implications of these decisions.
A common question asked of our team members is: ‘if a defamatory comment is posted on our social media page, would we be liable?’ To answer this question, we have provided a brief overview of the law as it currently stands.
The nature of social networking sites carries an inherent risk that a consumer will post a defamatory comment on the site, being a comment that could:
- injure the reputation of a person by exposing them to hatred, contempt or ridicule;
- cause others to shun or avoid a person; or
- lower a person in the estimation of others.
Liability attaches at time of publication (and re-publication) of a defamatory comment, including on a brand’s own site, Facebook or Twitter page. But who is liable for the defamatory publication?
The author, being the consumer who posted the defamatory comment, will usually be liable. But the defamed person will often not pursue the author, due to the difficulty that online anonymity/pseudonymity may pose in identifying the author or the simple fact that the author’s pockets are typically not as deep as those of the brand.
It is unlikely that a social networking host or provider, which provides the platform itself (i.e. Facebook, Twitter etc.), will be regarded as publishing, or even as authorising the publication of, the defamatory material, given that its role as platform host/provider is a passive one. In the recent English decision of Payam Tamiz v Google Inc, Google UK Limited [2012] EWHC 449 (QB), Justice Eady agreed with Google’s argument that it merely provided access to the communications system Blogger.com and did not create, select, solicit, vet or approve the content on the system – this is all controlled by the blog owner. Justice Eady summarised:
“… it may perhaps be said that the position is, according to Google Inc, rather as though it owned a wall on which various people had chosen to inscribe graffiti. It does not regard itself as being more responsible for the content of these graffiti than would the owner of such a wall.”
Even if the social network host or provider is deemed to be a publisher, it may be afforded protection in Australia as an ‘internet content host’ (ICH) or ‘internet service provider’ (ISP) under clause 91 of Schedule 5 to the Broadcasting Services Act 1992 (Cth), although this has not yet been determined with respect to defamation in Australia (see our blog ‘Twitter sued for Defamation’ on this point). Clause 91(1) states that any law of a State or Territory, or rule of common law or equity, has no effect to the extent that it subjects an ICH/ISP to liability for hosting or carrying particular internet content of which it was not aware, or requires an ICH/ISP to monitor, make inquiries about, or keep records of, internet content that it hosts or carries.
The question however remains whether a brand operating a social networking site will be deemed the publisher of defamatory comments made by users of its site, and accordingly liable. The law in Australia is largely untested. Liability will likely depend upon the extent of control exercisable by the brand over what is published on its site, as well as the brand’s knowledge of the defamatory material.
In terms of control, Facebook, for instance, affords significant control to the page owner, including control over who can post, control over who can view the posts, and the power to delete posts. However, just because a brand may have the technical capability to take down defamatory comments on their site does not automatically deem them to be a publisher.
The brand’s knowledge of the defamatory material must also be considered. Does the brand authorise the publication of the defamatory matter, or merely facilitate it? If the brand is found to be a publisher, the answer to this question will determine whether it may avail itself of the defence of innocent dissemination, which applies “to those who participate in the communication of defamatory matter but do not authorise that communication” (Thompson v Australian Capital Television Pty Ltd (1996) 186 CLR 574 per Gaudron J).
But when may a brand be found to ‘authorise’ the defamatory matter – at the time of posting, when the brand becomes aware of the defamatory comment, or never? This issue has been considered in the context of contempt, where it was held that the publisher “accepted responsibility for the publications when it knew of the publications and decided not to remove them” (ACCC v Allergy Pathway Pty Ltd and Anor (No 2) [2011] FCA 74 per Finkelstein J).
Therefore, if your brand exercises sufficient control to be able to take down defamatory material from its social networking site, and fails to do so within a reasonable time of being notified of the material’s existence, it may be held responsible for the continued publication of that material. It is, however, uncertain whether your brand can merely rely upon users of its site to notify it of defamatory material, or whether it must actively monitor content on its site. As a risk management strategy, we recommend moderating content on your brand’s site frequently and taking down any defamatory content as soon as reasonably practicable, so that your brand is not the one that ends up resolving this question before the courts.
It was reported in the media this week that Twitter is being sued for defamation for the first time under Australian law. The case arose after a Melbourne man, Joshua Meggitt, was wrongly accused and named by writer and TV identity Marieke Hardy as the author of a hate blog dedicated to her. Hardy wrongly named and shamed Meggitt as the ‘anonymous’ internet bully on the Twitter micro-blogging service (twitter.com). Whilst Meggitt and Hardy settled their differences out of court, it is Twitter that now finds itself the subject of court proceedings.
Under the uniform Australian Defamation Acts (2005–2006), defamation can occur where one person publishes content (words, sound, video or images) that damages the reputation of another identifiable person. Twitter states in its Terms of Service that “You are responsible for your use of the Services, for any Content you post to the Services, and for any consequences thereof”, and that, to the maximum extent permitted by law, Twitter will not be liable for any damages resulting from conduct or content that is defamatory, offensive or illegal (amongst other things). Under the Defamation Acts, however, liability for a defamatory publication can extend to an organisation where the organisation is able to exercise control over a publication and fails to prevent or terminate a defamatory publication by a third party. A publication can include, for example, a brand’s blog or Facebook page or, as in this case, Twitter’s micro-blogging service.
It seems likely that in this case Twitter does have a case to answer and may eventually find itself at the wrong end of the court’s judgment. There is however every possibility that this case will settle before a judgment is rendered and the liability of sites like Twitter for defamation will remain untested under Australian law.
Regardless, this case is a timely reminder that brands need to be aware of what is being published by third parties on their blogs, forums and Facebook pages in order to avoid potential liability for defamation.
Here are some tips:
- Only allow blog posts by identifiable registered users – so they can be tracked down in case a defamation proceeding is brought by an aggrieved person;
- Have clearly stated community guidelines and netiquette outlawing personal attacks, vilification and defamation; and
- Actively censor or monitor blog postings / comments and remove those that breach community guidelines or rules of the brand’s site.
von Muenster Solicitors is moving into the blogosphere! Keeping on top of media and communications industry developments from a legal perspective can seem like a constant uphill battle, so that’s where we come in.
Want to know what’s relevant to you and your clients in the legal sphere without needing to trawl through the legalese? We’ve got you covered.
Hot off the press IP decisions and what they mean for you and your client’s brand? Yes please.
Facebook changed its promotions guidelines (again)? We’re on it.
Leave the legal headaches to us, and we’ll leave you to get your creative on.
We hope you take the time to have a look around our revamped website. Drop us a line with any queries, feedback and musings or just to have a chat. We are here to help.