7 Are current regulatory responses sufficient and appropriate?
7.1 Coverage of anti-discrimination laws
Current federal anti-discrimination laws would generally apply to cyberspace to the extent that discriminatory behaviour (or harassment) online relates to a protected attribute and could be said to have occurred in one of the stipulated areas of ‘public’ life. This is particularly clear in relation to the prohibition on sexual harassment under the Sex Discrimination Act 1984 (Cth) (SDA), as this Act was amended in 2011 to ensure that online sexual harassment was captured.[109] So, for example, if someone is ‘cyber-sexually harassed’ at work by a colleague who is using a work computer (or work mobile phone), there is a strong argument that the SDA would cover this situation. Concepts such as the ‘workplace’ and an ‘educational institution’ have been interpreted to include sufficiently closely related ‘cyberspace’ within their boundaries.
However, within these defined areas of public life, a number of exceptions limit potential coverage. Further, the areas of public life currently covered by anti-discrimination laws will not necessarily extend to purely ‘social’ or ‘informational’ contexts. Online socialising through social network sites may be at once both a private and a public activity, but is unlikely to fall within the ambit of, for example, employment, the provision of goods and services, accommodation or education. Yet many forms of cyber-discrimination and harassment have occurred, and continue to occur, on social networking and social media sites.
In contrast, the provisions of the federal Racial Discrimination Act 1975 (Cth) (RDA) could potentially apply to these contexts. Unlike the SDA, the Disability Discrimination Act 1992 (Cth) and the Age Discrimination Act 2004 (Cth), the RDA is not limited to specified areas of public life. Rather, its general discrimination protections simply require that the act had the ‘purpose or effect of nullifying or impairing the recognition, enjoyment or exercise, on an equal footing, of any human right or fundamental freedom in the political, economic, social, cultural or any other field of public life’.[110] Similarly, the racial hatred provisions contained in the RDA apply to acts done ‘otherwise than in private’.[111]
This broader construction of ‘public life’ under the RDA means that racially discriminatory actions on online socialising or informational/social media platforms could be covered by the Act. For example, under the racial hatred provisions, the placing of anti-Semitic material on a website which was not password protected was held by the Federal Court to be an act ‘not done in private’, and therefore subject to the protections of the RDA.[112] Accordingly, to the extent that ‘cyber-trolls’ and others engage in cyber-racist behaviour on social networking and media sites, their actions could be covered by the provisions of the RDA.
There are limitations to the protection under the RDA, such as those in s 18D (which excludes from s 18C things done ‘reasonably and in good faith’ in the context of artistic works, discussions and debates, fair and accurate reporting and fair comment expressing a genuine belief).[113] One important limitation on the coverage of the RDA is the ability to actually enforce orders against hosts of such information. The RDA has an ‘ancillary liability’ provision which makes it unlawful to ‘assist or promote’ unlawful acts of discrimination,[114] and which could capture the actions of ‘hosts’ (as opposed to the creators of the information).[115] However, this ancillary liability provision does not apply to the racial hatred provisions.[116]
The need, then, to pursue the individual responsible for actually posting the offensive material creates significant difficulties where sites allow people to create and post information, blog or use social networks anonymously or pseudonymously using multiple personae (issues of anonymity are discussed in detail above).[117] For example, it has been reported that in 2012 the UK High Court ordered Facebook to provide the email and IP addresses of a number of so-called ‘cyber-bullies’ so that a complainant could proceed with a private prosecution.[118]
While Facebook may informally agree to comply with orders such as these, formal enforcement of such orders in an overseas jurisdiction remains problematic (issues of multiple jurisdictions have been discussed above). The Commission faces similar difficulties in compelling disclosure of such information when trying to resolve complaints. Website hosts and social networking sites have informally complied with requests by the Commission to remove information. However, although the Commission has the power to compel disclosure of such information, its orders cannot actually be enforced unless the overseas jurisdiction orders enforcement in accordance with its own jurisprudence, and there is no guarantee it will do so.
7.2 Regulation of ‘offensive’ behaviour
(a) Overview
Where ‘offensive’ and/or bullying behaviour occurs that falls outside a defined area of public life and does not attach to a protected attribute for the purposes of anti-discrimination laws, other laws may apply.
At the domestic level, cyber-bullying and other forms of offensive behaviour are regulated by a complex array of federal and state laws. Federal and state work health and safety regulations on workplace bullying cover forms of cyber-bullying to the extent that they occur within the workplace. In terms of the regulation of offensive Internet behaviour more generally, the focus of federal laws is on regulating the conduct of Internet service providers or ‘content hosts’ of potentially offensive material.[119]
A somewhat anachronistic addition is a broad-ranging provision in federal criminal law which prohibits using a carriage service in a way that reasonable persons would regard as menacing, harassing or offensive.[120] In contrast to the federal focus on regulating ‘content hosts’, state and territory laws impose obligations on producers of content.[121] A number of non-legislative initiatives also exist, focusing on monitoring and educating the public about content on the Internet. These forms of regulation are described in further detail below.
(b) Regulation of workplace (cyber)bullying
Cyber-bullying has brought the issue of the regulation of ‘offensive behaviour’ on the Internet to the forefront of government, media and community attention. As discussed above, the issue of cyber-bullying clearly engages a number of human rights recognised under international law. In Australia, bullying which is not covered by anti-discrimination laws may be covered by work health and safety legislation, by criminal laws in Victoria and by the Criminal Code Act 1995 (Cth). Uniform work health and safety legislation has been adopted by the Commonwealth, four States and the Territories since 2011.
The Work Health and Safety Codes of Practice 2011: How to manage work health and safety risks[122] operates under the Work Health and Safety Act 2011 (Cth) and applies to all bodies and persons having duties under that Act. The Code includes bullying in the definition of ‘hazard’, and describes workplace bullying as a work-related health issue.[123] The effect of this is that employers and officers have a duty to prevent workplace bullying, and workers and other people at the workplace have a duty not to engage in workplace bullying. Similarly, in Victoria and Western Australia (jurisdictions which have not adopted the uniform legislation), ‘bullying’ has been interpreted as included in the concepts of ‘health and safety risk’[124] and ‘hazard’.[125] There appears to be nothing in these workplace bullying provisions to preclude coverage of workplace ‘cyber-bullying’.
As for the provisions contained in Victorian[126] and Commonwealth[127] criminal law, these are not limited to the workplace and cover behaviours that constitute ‘cyber-bullying’. These provisions are covered in further detail below.
(c) Regulation of Internet providers and content hosts
At the federal level, apart from the workplace-specific bullying regulations described above, the regulation of offensive behaviour centres on those who host offensive content on the Internet (as opposed to those who create it). The Broadcasting Services Act 1992 (Cth) aims to restrict access to or prohibit certain types of offensive Internet content, and provides a complaints mechanism.[128] It prohibits Internet content that is (or would be) classified as ‘Refused Classification’ (RC), a classification which applies to publications, films or computer games that:
- depict, express or otherwise deal with matters of sex, drug misuse or addiction, crime, cruelty, violence or revolting or abhorrent phenomena, in such a way that they offend against the standards of morality, decency and propriety generally accepted by reasonable adults to the extent that they should not be accorded a classification other than RC; or
- describe or depict in a way that is likely to cause offence to a reasonable adult, a person who is, or appears to be, a child under 18 (whether the person is engaged in sexual activity or not); or
- promote, incite or instruct in matters of crime or violence.[129]
These provisions apply to very serious forms of ‘offensive’ content, which may include extreme forms of cyber-bullying. The scheme does not target, for example, ‘hate speech’. As a result, it does not provide an avenue of redress for a person or group vilified by Internet hate speech in Australia. However, in extreme cases where online content incites crime or violence (which could, for example, be racially based), the Australian Communications and Media Authority (ACMA) could issue a removal notice.[130]
ACMA investigates complaints about online content that may be ‘prohibited content’ according to the criteria of the National Classification Code (set out above). Where content is classified as ‘prohibited’, ACMA issues a series of notices to Australian-based hosts requiring either removal of the content or restriction of access to it within a set timeframe (failing which a penalty applies). Where Australian-hosted prohibited content is considered sufficiently serious, ACMA must notify law enforcement agencies.[131] Where sufficiently serious prohibited content is hosted outside Australia, ACMA (apart from providing filtering software options) notifies a ‘member hotline’ in the country where the content appears to be hosted or, in the absence of a hotline, notifies the Australian Federal Police for action through Interpol.[132]
A co-regulatory framework based on industry codes also forms part of the scheme under the Broadcasting Services Act 1992 (Cth).[133] These codes may be developed by industry[134] or required by ACMA,[135] and are registered and enforced by ACMA.[136] Two industry codes have been developed which impose various obligations on content hosts, ISPs, mobile carriers and content service providers, including:
- obligations in responding to notices
- requirements about what information must be provided to users
- requirements about making filters available
- requirements about establishing complaints procedures and
- the appropriate use of restricted access systems.[137]
There are, however, significant regulatory challenges in attempting to enforce classification laws in relation to online media content, including:
- inconsistency in offence and penalty provisions between Australian jurisdictions;[138]
- the quantity and mutable nature of online content;
- the number of persons producing and hosting content all over the world; and
- the difficulty of determining age and of restricting content.[139]
(d) Regulation of producers of content and upload of/access to content
Certain state and territory laws contain provisions directly regulating the actual production and use (as opposed to hosting) of online content. For example, the Classification (Publications, Films and Computer Games) (Enforcement) Act 1995 (Vic) makes it an offence to ‘use an on-line information service to publish or transmit, or make available for transmission’ objectionable material, child pornography or ‘material unsuitable for minors’.[140]
In addition to these provisions, all Australian jurisdictions have laws dealing with cyber-stalking – a behaviour that can form part of ‘cyber-bullying’. A number of states have explicitly extended the definition of this crime to include the sending of electronic messages.[141] At the Commonwealth level, offences relating to behaviour on the Internet[142] include cyber-fraud and stalking;[143] threats to kill or cause serious harm;[144] making hoax threats;[145] and, most relevantly, using a carriage service in a way that reasonable persons would regard as menacing, harassing or offensive.[146]
It is instructive to consider that similar legislative provisions in the UK have been used to prosecute individuals for various forms of ‘offensive behaviour’ in cyberspace.[147] This includes situations where a teenager made offensive comments about a murdered child on Twitter, a young man wrote on Facebook that British soldiers should ‘go to hell’, and a third person posted a picture of a burning paper poppy (a symbol of remembrance of war dead).[148] According to news reports, all were arrested, two were convicted and one was jailed.[149]
Concerns about the impact of these prosecutions on freedom of expression led the UK Director of Public Prosecutions to release guidelines on prosecuting cases involving social media communications, in recognition that an excess of prosecutions ‘chills’ free speech and that a higher threshold for prosecution was required.[150] The interim guidelines are intended to ‘strike the right balance between freedom of expression and the need to uphold criminal law’.[151]
The guidelines state that if someone posts a message online that clearly amounts to a credible threat of violence, specifically targets an individual or individuals, or breaches a court order designed to protect someone, then the person behind the message may be prosecuted.[152] People who receive malicious messages and pass them on (for example, by re-tweeting) can also be prosecuted.[153] However, the guidelines provide that online posts that are merely ‘grossly offensive, indecent, obscene or false’ must reach a higher threshold before the conduct would be considered for prosecution, and in many such cases a prosecution is unlikely to be in the public interest.[154]
In Australia, other offences aimed particularly at the protection of children on the Internet include laws criminalising sexual grooming (targeting the use of a carriage service, such as the Internet or a mobile phone, to procure or ‘groom’ children for sexual activity),[155] and using a carriage service to send or receive child pornography (which may capture some ‘sexting’).[156] Distributing images can be a form of cyber-bullying if a young person is coerced into posing, or if images are distributed without consent.[157] Further, in 2011 the SDA was amended to give legal protection to young people who have experienced sexual harassment (including online) at an educational institution, by permitting students under the age of 16 to lodge a complaint under that Act.[158]
Criminal offences also exist in relation to the promotion of suicide on the Internet.[159]
7.3 International (cross-jurisdictional) regulatory initiatives
An example of a regional approach to regulating racist hate speech on the Internet is the European Additional Protocol to the Convention on Cybercrime.[160] The aim of the Additional Protocol is to limit or at least reduce the amount of hate material online by requiring States Parties to criminalise the ‘making available’ or ‘distribution’ of racist or xenophobic material through a computer system within their jurisdictions. The Protocol creates a legal framework for international co-operation in the prosecution (at the domestic level) of cross-jurisdictional hate speech in cyberspace.[161]
The Australian Government has ratified the European Convention on Cybercrime,[162] which focuses on international cooperation to combat cybercrime.[163] However, it has not signed or ratified the Additional Protocol dealing with racist hate speech online.
At the international level, non-mandatory principles in relation to conduct on the Internet have been proposed for domestic adoption.
In 2001 the Durban Declaration and Programme of Action was adopted at the United Nations World Conference against Racism, Racial Discrimination, Xenophobia and Related Intolerance. The Durban Declaration contains principles relating to racism on the Internet which recognise both the capacity of the Internet as a tool to promote tolerance and educate others, and the need to avoid uses of the Internet which promote racism and intolerance.[164] The Programme of Action calls on States to impose legal sanctions in respect of incitement to racial hatred through new information and communications technologies, including the Internet.[165]
In 2003 at the United Nations World Summit on the Information Society a Plan of Action was produced, which stated that:
All actors in the Information Society should take appropriate actions and preventive measures, as determined by law, against abusive uses of ICTs [Information and Communication Technologies], such as illegal and other acts motivated by racism, racial discrimination, xenophobia and related intolerance, hatred, violence, all forms of child abuse, including paedophilia and child pornography, and trafficking in, and exploitation of, human beings.[166]
The idea of an International Criminal Court or Tribunal for Cyberspace has also been raised by a number of advocates.[167] However, the draft proposal is limited to the ‘most serious violations of international cybercrime law’ and may not capture or resolve all the issues raised here.[168]
7.4 Non-legislative initiatives
Alongside formal legislative measures, a number of ‘non-legislative’ measures are being used to regulate offensive forms of online behaviour. Importantly, a number of these measures aim at preventative behaviour change (as opposed to reactive legislative responses). Such measures include the development of company policies; co-regulatory measures (between industry and regulators); voluntary codes of practice; and general educative/attitudinal-change initiatives.
In Australia, examples of such measures include:
- Self-regulation by websites through the adoption of codes of conduct (see, for example, Facebook’s ‘Statement of Rights and Responsibilities’).[169] This includes ‘unofficial’ regulation by Internet service providers/content hosts in responding to informal requests to remove material.
- The ‘Cybersmart’ education program created by the Australian Government and ACMA.[170] This program is designed to provide information and education to children and young people to empower them to be safe online, and to support parents, teachers and library staff. It aims to build resilience through cyber-smart behaviour and an awareness of rights and consequences.
- The Australian Government’s Cybersafety Help Button initiative,[171] which provides users (particularly children and young people but also parents/carers and teachers) with easy online access to a wide range of cyber-safety and security resources to help with cyber-bullying, unwanted contacts, scams, frauds and inappropriate material.
- Back me up,[172] a social media campaign run by the Commission to encourage young people to take positive action to support friends or peers who are cyber-bullied. It offers young people information about how to take safe and effective action when they witness cyber-bullying.
- The Consultative Working Group on Cyber-safety (in which the Commission participates).[173] The Group considers those aspects of cyber-safety that particularly affect Australian children, such as cyber-bullying, identity theft and exposure to illegal and inappropriate content. It provides advice to the Australian Government on the priorities and measures needed to ensure world’s best practice safeguards for Australian children engaging in the digital economy.
7.5 Other proposals for responding to discrimination, harassment and hate speech online
(a) Legislative reform
A number of legislative reforms have been proposed in relation to the regulation of the issues discussed above in section 5 of this paper.
For example, in relation to the cyber-safety of young people, the Parliamentary Joint Select Committee on Cyber-Safety has identified privacy laws as an area in need of ‘cyber-reform’.[174] The reforms suggested by the Committee included:
- amending the small business exemption where these businesses transfer personal information offshore;[175]
- developing guidelines on the appropriate use of privacy consent forms for online services;[176]
- imposing a code which includes a 'Do Not Track' model;[177]
- ensuring privacy laws cover organisations that collect information from Australia;[178] and
- reviewing the enforceability of provisions relating to the offshore transfer of data.[179]
In addition, the Joint Select Committee recommended that training be provided to all Australian police forces and judicial officers to ensure they are adequately equipped to address cyber-safety issues.[180] It also recommended the creation of a National Working Group on Cybercrime to review legislation relating to cyber-safety crimes across Australian jurisdictions.[181] The report further explored a proposal to create an online Ombudsman to deal with cyber-safety issues.[182]
In New Zealand, the Law Commission has recommended a number of legislative reforms in respect of ‘harmful digital communications’. These included the creation of a new communications offence targeting all types of digital communications (including through social media) which are ‘grossly offensive or of an indecent, obscene or menacing character’ and which cause harm.[183] It also recommended making it an offence to publish intimate photographs or recordings of another person without their consent,[184] and to incite a person to commit suicide, irrespective of whether or not the person does so.[185]
It further proposed the establishment of a specialist Communications Tribunal capable of providing speedy, cheap and efficient relief outside the traditional court system.[186] This Tribunal would in effect operate as a mini-harassment court specialising in digital communications (using mediation to resolve trivial matters),[187] and would provide civil remedies such as takedown orders and cease and desist orders.[188] The Law Commission proposed that the Tribunal would be an option of last resort, and that the threshold for obtaining a remedy would be high.[189]
(b) Other measures
Over the next three years the Commission will be partnering with academic researchers as part of an Australian Research Council project on cyber-racism and community resilience. The project will include a review of the Australian legal, regulatory and policy framework that surrounds racist speech on the Internet. The aim of the project is to contribute to the development of new approaches to the regulation of cyber-racism, and to co-operative work between industry and regulators to improve responses to cyber-racism.
The Joint Select Committee on Cyber-Safety, in its report High-Wire Act: Cyber-Safety and the Young, proposed various educative measures, including:
- development of an agreed national definition of cyber-bullying[190]
- introduction of a cyber-safety student mentoring program[191]
- national core standards for cyber-safety education in schools[192]
- a national online training program for teachers and students that addresses bullying and cyber-bullying[193]
- the introduction nationally of ‘Acceptable Use’ agreements governing access to the online environment by students[194] and the use of resources such as the Cybersafety Help Button[195]
- the incorporation of cyber-safety materials into teacher training courses,[196] including advice about available processes in the event of cyber-bullying[197]
- the promotion of self-assessment tools,[198] and investigation of the information made available to parents at ‘point of sale’ of computers and mobile phones[199]
- increasing affordable access to crisis help lines[200]
- enhancing the effectiveness of cyber-safety media and educational campaigns[201] including making materials available through a central portal.[202]
The Joint Select Committee also proposed industry-based initiatives, including that the Australian Government encourage the Internet Industry Association to increase accessibility to assistance and complaints mechanisms on social networking sites,[203] and negotiate protocols with overseas networking sites to ensure the timely removal of offensive material.[204]
The non-legislative recommendations of the New Zealand Law Commission included:
- requiring all schools to implement effective anti-bullying programs[205]
- establishing on-going data collection (including measurable objectives and performance indicators) for defining and measuring covert and overt forms of bullying[206] and developing reporting procedures and guidelines[207]
- consideration of the use of the Information and Technology contracts routinely used in schools to educate students about their legal rights and responsibilities with respect to communication[208]
- the development of consistent, transparent and accessible policies and protocols for how intermediaries and content hosts interface with domestic enforcement mechanisms.[209]
[109] The Sex and Age Discrimination Legislation Amendment Act 2011 (Cth) amended the SDA to expressly provide that the Division relating to sexual harassment (Div 3 of Part II) applies ‘in relation to acts done using a postal, telegraphic, telephonic or other like service (within the meaning of paragraph 51(v) of the Constitution).’ The Explanatory Memorandum to the Amending Act stated that the power in paragraph 51(v) of the Constitution ‘has gained significant relevance in the context of sexual harassment given the ubiquity of new technologies such as social networking websites, e-mail, SMS communications, and mobile telephone cameras.’
[110] Racial Discrimination Act 1975 (Cth) s 9 (emphasis added).
[111] Racial Discrimination Act 1975 (Cth) s 18C.
[112] Jones v Toben [2002] FCA 1150 (the judgment in this case was upheld on appeal: see Toben v Jones (2003) 129 FCR 515).
[113] Racial Discrimination Act 1975 (Cth) s 18D.
[114] Racial Discrimination Act 1975 (Cth) s 17.
[115] See J Hunyor, ‘Cyber-racism: can the RDA prevent it?’ (2008) 46 Law Society Journal 34, pp 34-35.
[116] J Hunyor, above, p 35.
[117] J Hunyor, above, p 35.
[118] See J Swift, ‘Bains Cohen takes on Facebook in internet bullying case’, The Lawyer, 12 June 2012. At http://www.thelawyer.com/bains-cohen-takes-on-facebook-in-internet-bullying-case/1012919.article (viewed 28 August 2013).
[119] Broadcasting Services Act 1992 (Cth), Schedule 5.
[120] Criminal Code Act 1995 (Cth) s 474.17.
[121] See Joint Select Committee on Cyber-Safety, Parliament of Australia, High-Wire Act: Cyber-Safety and the Young (2011), pp 305-7, 313-324. At http://www.aph.gov.au/Parliamentary_Business/Committees/House_of_Representatives_Committees?url=jscc/report.htm (viewed 28 August 2013).
[122] Work Health and Safety Codes of Practice 2011: How to manage work health and safety risks. At http://www.comlaw.gov.au/Details/F2011L02804 (viewed 28 August 2013).
[123] Work Health and Safety Codes of Practice 2011, above, at 1.2 and 2.1.
[124] WorkSafe Victoria, Your guide to workplace bullying – prevention and response (October 2012), p. 2. At http://www.worksafe.vic.gov.au/__data/assets/pdf_file/0008/42893/WS_Bullying_Guide_Web2.pdf (viewed 28 August 2013).
[125] Department of Commerce, Bullying and violence: frequently asked questions (see question 7: What are the duties of the employer under the Act in relation to bullying?). At http://www.commerce.wa.gov.au/worksafe/content/safety_topics/Bullying/Questions.html (viewed 28 August 2013).
[126] See Crimes Act 1958 (Vic), s 21A.
[127] Criminal Code Act 1995 (Cth) s 474.17.
[128] Broadcasting Services Act 1992 (Cth) schs 5 and 7.
[129] See Broadcasting Services Act 1992 (Cth) sch 7 ss 20-21 and National Classification Code 2005 (Cth) items 1(a)-(c).
[130] I Nemes, note 107, pp 204-5.
[131] Australian Law Reform Commission, Classification – Content Regulation and Convergent Media: Final Report, Report No. 118 (2012), para 2.24. At http://www.alrc.gov.au/publications/classification-content-regulation-and-convergent-media-alrc-report-118 (viewed 28 August 2013).
[132] Australian Law Reform Commission, above.
[133] Broadcasting Services Amendment (Online Services) Act 1999 (Cth) schs 5 and 7.
[134] Broadcasting Services Act 1992 (Cth), Division 4, s 130M.
[135] Broadcasting Services Act 1992 (Cth), Division 4, s 130N.
[136] Broadcasting Services Act 1992 (Cth), Division 3, ss 101, 89, 90.
[137] See Australian Law Reform Commission, note 131, para 2.29.
[138] See Australian Law Reform Commission, above, paras 15.42-15.54.
[139] See Australian Law Reform Commission, above, para 16.34.
[140] Classification (Publications, Films and Computer Games) (Enforcement) Act 1995 (Vic), ss 57, 57A and 58.
[141] See for example Crimes Act 1958 (Vic) s 21A; Crimes (Domestic and Personal Violence) Act 2007 (NSW) ss 7, 8, 13.
[142] Criminal Code Act 1995 (Cth) Part 10.6, Div 474, Sub-div C.
[143] Criminal Code Act 1995 (Cth) Part 10.6, Div 474, Sub-div C, s 474.14.
[144] Criminal Code Act 1995 (Cth) Part 10.6, Div 474, Sub-div C, s 474.15.
[145] Criminal Code Act 1995 (Cth) Part 10.6, Div 474, Sub-div C, s 474.16.
[146] Criminal Code Act 1995 (Cth) Part 10.6, Div 474, Sub-div C, s 474.17.
[147] See for example the Communications Act 2003 (UK) ss 32, 127. See also Chambers v DPP (UK High Court of Justice, Queen’s Bench Division, Divisional Court, Case No CO/2350/2011, 27 June 2012); J Lawless, ‘On-line rants land Facebook and Twitter users in legal trouble’, The Sydney Morning Herald, 19 November 2012. At http://www.smh.com.au/technology/technology-news/online-rants-land-facebook-and-twitter-users-in-legal-trouble-20121116-29gf7.html (viewed 28 August 2013).
[148] See J Lawless, above.
[149] See J Lawless, above.
[150] J Lawless, above.
[151] K Starmer (Director of Public Prosecutions, UK) quoted in D Casciani, ‘Prosecutors clarify offensive on-line posts’, BBC News UK, 19 December 2012. At http://www.bbc.co.uk/news/uk-20777002 (viewed 29 August 2013).
[152] Director of Public Prosecutions (UK), Interim guidelines on prosecuting cases involving communications sent via social media (19 December 2012), para 12. At http://publicintelligence.net/uk-cps-social-media-guidelines/ (viewed 29 August 2013).
[153] Director of Public Prosecutions (UK), above, para 2.
[154] Director of Public Prosecutions (UK), above, paras 12(4) and 13.
[155] See the offences in the Criminal Code Act 1995 (Cth) ch 10, pt 10.6, div 474, sub-div F.
[156] See the offences in the Criminal Code Act 1995 (Cth) ch 10, pt 10.6, div 474, sub-div D.
[157] Joint Select Committee on Cyber-Safety, note 121, para 11.77.
[158] See Sex and Age Discrimination Legislation Amendment Act 2011 (Cth) s 56, which removed the requirement in s 28F(2)(a) of the SDA that a student who suffers sexual harassment must be an ‘adult’ student (i.e. 16 or over) for the sexual harassment to be unlawful under the SDA.
[159] See the offences in the Criminal Code Act 1995 (Cth), ch 10, pt 10.6, div 474, sub-div G.
[160] Additional Protocol to the Convention on Cybercrime, concerning the criminalisation of acts of a racist and xenophobic nature committed through computer systems, opened for signature 28 January 2003, CETS No. 189 (entered into force 1 March 2006). At http://conventions.coe.int/Treaty/EN/Treaties/Html/189.htm (viewed 28 August 2013).
[161] See the Explanatory Report to the Additional Protocol to the Convention on Cybercrime, concerning the criminalisation of acts of a racist and xenophobic nature committed through computer systems, para 3. At http://conventions.coe.int/Treaty/EN/Reports/Html/189.htm (viewed 28 August 2013).
[162] Convention on Cybercrime, opened for signature 23 November 2001, CETS No. 185 (entered into force 1 July 2004). At http://conventions.coe.int/Treaty/EN/Treaties/Html/185.htm (viewed 28 August 2013).
[163] See also the Cybercrime Legislation Amendment Act 2012 (Cth).
[164] Durban Declaration, adopted at the World Conference against Racism, Racial Discrimination, Xenophobia and Related Intolerance (endorsed by GA Resolution 56/266), UN Doc A/CONF.189/12 (2001). At http://www.un-documents.net/durban-d.htm (viewed 29 August 2013).
[165] Durban Programme of Action, adopted at the World Conference against Racism, Racial Discrimination, Xenophobia and Related Intolerance, (endorsed by GA Resolution 56/266) (2001), para 145. At http://www.refworld.org/docid/3db573314.html (viewed 29 August 2013).
[166] World Summit on the Information Society: Plan of Action, UN Doc WSIS-03/GENEVA/DOC (2003), para 25(c). At http://www.un-documents.net/wsis-poa.htm (viewed 29 August 2013).
[167] See, for example, Judge S Scholberg, An International Criminal Court or Tribunal for Cyberspace (Paper presented to the 13th International Criminal Law Congress, Queenstown, New Zealand, 12-16 September 2012). At http://www.crimlaw2012.com/abstract/11.asp (viewed 28 August 2013).
[168] Judge S Scholberg, above, p 2.
[169] Available at http://www.facebook.com/legal/terms (viewed 29 August 2013).
[170] See http://www.cybersmart.gov.au (viewed 29 August 2013).
[171] See http://www.dbcde.gov.au/online_safety_and_security/cybersafetyhelpbutton_download (viewed 29 August 2013).
[172] See http://somethingincommon.gov.au/backmeup (viewed 29 August 2013).
[173] See http://www.dbcde.gov.au/funding_and_programs/cybersafety_plan/consultative_working_group (viewed 29 August 2013).
[174] See Joint Select Committee on Cyber-Safety, note 121, Ch 5.
[175] Joint Select Committee on Cyber-Safety, above, Recommendation 4, pp xxvii and 151.
[176] Joint Select Committee on Cyber-Safety, above, Recommendation 6, pp xxvii and 163.
[177] Joint Select Committee on Cyber-Safety, above, Recommendation 8, pp xxvii and 174.
[178] Joint Select Committee on Cyber-Safety, above, Recommendation 9, pp xxvii and 174.
[179] Joint Select Committee on Cyber-Safety, above, Recommendation 11, pp xxviii and 175.
[180] Joint Select Committee on Cyber-Safety, above, Recommendations 21 and 22, pp xxx and 330.
[181] Joint Select Committee on Cyber-Safety, above, Recommendation 23, pp xxx and 335.
[182] Joint Select Committee on Cyber-Safety, above, Chapter 13, pp 355-373, paras 13.1-13.55.
[183] New Zealand Law Commission, note 81, pp 14-15.
[184] New Zealand Law Commission, above, p 15.
[185] New Zealand Law Commission, above, p 16.
[186] New Zealand Law Commission, above, pp 16-17.
[187] New Zealand Law Commission, above, p 18.
[188] New Zealand Law Commission, above, p 16.
[189] New Zealand Law Commission, above.
[190] Joint Select Committee on Cyber-Safety, note 121, Recommendation 2, pp xxvi and 63.
[191] Joint Select Committee on Cyber-Safety, above, Recommendation 3, pp xxvi and 117.
[192] Joint Select Committee on Cyber-Safety, above, Recommendation 14, pp xxviii and 263.
[193] Joint Select Committee on Cyber-Safety, above, Recommendation 19, pp xxx and 274.
[194] Joint Select Committee on Cyber-Safety, above, Recommendation 14, pp xxviii and 263.
[195] Joint Select Committee on Cyber-Safety, above, Recommendation 14, pp xxviii and 263.
[196] Joint Select Committee on Cyber-Safety, above, Recommendation 17, pp xxix and 269.
[197] Joint Select Committee on Cyber-Safety, above, Recommendation 18, pp xxix and 272.
[198] Joint Select Committee on Cyber-Safety, above, Recommendation 24, pp xxxi and 430.
[199] Joint Select Committee on Cyber-Safety, above, Recommendation 25, pp xxxi and 438.
[200] Joint Select Committee on Cyber-Safety, above, Recommendation 26, pp xxxi and 438.
[201] Joint Select Committee on Cyber-Safety, above, Recommendation 27, pp xxxi and 464.
[202] Joint Select Committee on Cyber-Safety, above, Recommendation 29, pp xxxii and 499.
[203] Joint Select Committee on Cyber-Safety, above, Recommendation 30, pp xxxii and 507.
[204] Joint Select Committee on Cyber-Safety, above, Recommendation 31, pp xxxii and 508.
[205] New Zealand Law Commission, note 81, p 19.
[206] New Zealand Law Commission, above.
[207] New Zealand Law Commission, above, p 20.
[208] New Zealand Law Commission, above.
[209] New Zealand Law Commission, above, p 19.