Electronic Frontiers Australia

Government says NO to expanding data retention to civil cases

Just before Easter, the government announced that it will not be expanding access to telecommunications data ('metadata') to civil litigants.

This is an important victory.

Had the government allowed even a limited expansion of access, it would almost certainly have been just the first of a number of such expansions.

It's also heartening to see that, despite running over the Christmas-New Year period, the government's consultation received 262 submissions, including 217 from individuals.

All but "a small number" of these submissions were opposed to any expansion in access.

This is a significant number of submissions to a consultation that was clearly intended to slip under the radar. We hope that the guidance we published helped some of those people with their submissions.

You can read the report of this review that was tabled in parliament here.

But the fight against warrantless mass surveillance is far from over.

There is nothing to stop this or any future government from deciding to give civil litigants access to data that is retained only for the purposes of the data retention scheme.

Any such expansion could open up whole new troves of data for all sorts of civil actions including copyright enforcement cases, divorce/property settlements and employment disputes.

We're lobbying federal MPs and Senators to bring forward the review of the data retention legislation - it's currently due to commence in April 2019. As part of that, we're also pushing for:

  • a universal warrant requirement for access to data (currently warrants are only required when a journalist's data is requested);
  • no expansion in the restricted list of 22 agencies that are currently able to request data; and
  • a reduction in the retention period for data from the current two years to no more than six months.
Like our work? We need your support

We rely on donations and membership subscriptions to continue our work. Effective lobbying requires travel and Canberra gets pretty expensive to get to and stay in when parliament is sitting.

Your support will help us continue our work on data retention and other digital rights issues.

If you can, please contribute today, or get actively involved.



Get a VPN today!

From today, 13 April 2017, all Australian telecommunications providers are required to collect a wide range of your telecommunications data ('metadata') and retain it for two full years, so that it can be requested by government agencies.

This data includes information about your phone usage (including texts and your location) and about your Internet connection. This information allows very detailed conclusions to be drawn about many aspects of your life, and there are almost no protections against investigative "fishing expeditions" or systemic abuse of power.

With the exception of journalists' data, no warrants are required for access to this data, and there is little effective oversight. The data retention scheme therefore represents a genuine threat to the privacy of all Australians.

That’s why we’re supporting today as a national day of action – we’re calling on Australians to educate themselves about the scale of this surveillance and take appropriate precautions.

So, we're declaring today, Thursday 13 April, as 'National Get A VPN Day'.
1: What is a VPN and why do I need one?

A Virtual Private Network (VPN) is an online service that creates an encrypted 'tunnel' from your computer to a remote Internet gateway, which will often be in a different country. The encryption means that your Internet Service Provider (ISP) will not know which sites you are visiting - they will only see that you are communicating with a single address, that of your VPN.

Image: RapidVPN

Let's say you're active with an environmental group that the government is interested in, and the government has obtained access to the list of addresses that have visited that group's website. If you're using a VPN, they will not be able to identify you as having visited that site as they'll only have the address of the external gateway of your VPN.

Simply put, using a VPN breaks the identifying links between your computer and the websites you visit, thereby protecting you from government surveillance.

Because they encrypt your traffic, VPNs also provide protection from eavesdropping. If your traffic is ever directly intercepted, the encryption means it will be unreadable. This is particularly important if you're using a public wi-fi service.

For more information, here are good overviews from LifeHacker and from Wired.

2: Which VPN should I choose?

Different VPN services vary significantly in terms of quality, and particularly in terms of how much privacy protection they include.

For a better understanding of how VPNs can (and sometimes can’t) be trusted to protect your anonymity, see this article from Brian Krebs.

Some things to think about include:

  • What data does the VPN record? Is the VPN retaining web logs? Does the VPN know your IP address and the times that you connect to their servers? Also, what kind of advertising data does the VPN service store and does it hand that data over to third parties?
  • How long does the VPN store data? Nearly all VPNs will store some data in order to troubleshoot network issues. However, the duration of that storage plays a key role in terms of the privacy protection afforded to users. After all, if the data has been deleted, then it cannot be accessed by a third party. Ideally, a VPN should be wiping user data within hours of it being recorded. If a VPN is storing data for anything more than a few days then beware.
  • Read the privacy policy carefully. If you don't find the answers to your questions in their privacy policy then ask them directly, or steer clear.
  • What country are they based in? For example, you may want to avoid services based in Australia, the UK, the US, New Zealand or Canada (the so-called 'Five Eyes' countries, which have comprehensive intelligence-sharing arrangements in place). You may also want to avoid services based in countries with authoritarian governments.
  • What payment methods do they support? Using Bitcoin or other digital currencies will provide you with an extra layer of anonymity.

Here are some good reviews and guides that will help you find the right VPN provider for you:

Or, if you're technically minded, you can roll your own. Here’s a handy guide for creating your own VPN service from Crypto Australia.
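Whichever provider or DIY route you choose, a VPN client is ultimately driven by a small configuration file. As a rough illustration only (this sketch is not taken from the Crypto Australia guide, and `vpn.example.com` and the file names are placeholders), a minimal OpenVPN client configuration looks something like this:

```conf
# client.ovpn — minimal OpenVPN client config sketch.
# 'vpn.example.com' is a placeholder; your provider (or your own server)
# supplies the real address, CA certificate, client certificate and key.
client
dev tun                  # use a routed IP tunnel
proto udp
remote vpn.example.com 1194
resolv-retry infinite    # keep retrying if the hostname won't resolve
nobind                   # don't bind to a fixed local port
persist-key
persist-tun              # keep the tunnel device across restarts
ca ca.crt
cert client.crt
key client.key
cipher AES-256-CBC       # encrypt all tunnelled traffic
verb 3
```

Once the certificates are in place, `openvpn --config client.ovpn` brings the tunnel up; most commercial providers supply a ready-made file like this for you to import into their app or the standard OpenVPN client.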

3: Help spread the word - tell your friends to #GetaVPN

Once you've got yourself sorted, don't forget about your friends, family and work colleagues.

  • Send them a link to this page
  • Retweet our link on Twitter, using the #GetaVPN hashtag
  • Share our Facebook post
  • Write to your local newspaper - letters to the editor can be an effective way to highlight an issue. See the contact section of your chosen media outlet's website. Keep it short and to the point.
4: Tell your MP and Senators what you think of mandatory data retention

We've been lobbying MPs and Senators over the last few years about the dangers of mandatory data retention, but adding your voice will help us to achieve the review of this legislation that we're seeking.

See our guidance on lobbying parliamentarians for ideas on how to be most effective, and for links to find your local MP and Senators from your state.

You may want to mention the following points when you contact them:

  • All access to this data should require a warrant - not just for journalists' data. A majority of European Union countries require some form of independent, judicial authorisation for access to this sort of data, so there's no reason why Australians shouldn't enjoy the same protection.
  • It's important that additional agencies aren't added to the list of those allowed access to this data. The one good part of the data retention legislation is that it reduced the number of agencies able to access this data from literally hundreds to fewer than two dozen (mainly police and anti-corruption bodies).
  • The two year retention period is unjustifiably long and must be reduced to at most six months.

You can see which MPs and Senators voted for and against mandatory data retention on the excellent They Vote For You site.

You can also see which MPs and Senators voted for and against a universal warrant requirement for access to this data.


Bytes & Rights 2017 - Perth

The Bytes and Rights 2017 conference is being held as part of the Festival of the Web, an 8-day series of international conferences in early April in Perth.

See the full program of Festival of the Web events.

Bytes and Rights is a conference focused on the many issues around how society responds to changing technology, especially technology related to the Internet.

We will discuss legal, policy, regulatory and social responses to issues such as human rights, intellectual property enforcement, security and harassment. We aim to bring together experts including academics, lawyers, commercial practitioners, technologists, and civil society groups in an open dialogue that cuts across specific disciplines.


Monday 3rd & Tuesday 4th April 2017


A variety of passes are available for just Bytes & Rights, or the entire Festival of the Web.

Please visit our registration page for more details and to register.


What is Fair? Karen Chester from the Productivity Commission

Image: The Blue Diamond Gallery. CC-BY-SA

Good morning. Thank you Professor Giblin and congratulations on the launch of your and Professor Weatherall’s book – What If We Could Reimagine Copyright? An anthology of ten thoughtful essays that collectively make for a forward‑looking treatise on copyright. And my thanks to the Australian Digital Alliance for inviting me to speak today, and to the broader church of the Australian Digital Alliance membership, many of whom contributed time and effort in making submissions to the Productivity Commission’s inquiry into Australia’s intellectual property arrangements. Thank you.

This post is a speech given by Karen Chester, Deputy Chair of the Productivity Commission, to the Australian Digital Alliance Forum at the National Library of Australia on 24th February 2017. It is licensed under a Creative Commons Attribution 3.0 Australia (CC-BY 3.0 AU) licence.

With my words today I hope to do three things. First, share the lens through which the Commission reviewed and analysed Australia’s intellectual property settings, especially in matters of copyright. Second, do some much needed myth busting — to address claims made about copyright that on any objective examination are more fiction than fact. And third, and most importantly, convey what matters most in getting the policy settings right here.

At the get-go of this Inquiry, we envisaged our task would be about how policy could grapple with the cocktail mix of technology, adaptability, creative endeavour, innovation and competition. And it did, in large part. But at the end of the day — all roads led us to one simple truth: to ask and answer what is fair.

And when we use the term fair we’re not limiting this to fair use. Albeit copyright exception is the policy that matters most for getting the innovation and equity equation right. Because it’s not just about the creators vs the tech giants. And it’s not a zero sum game between rights holders and content users as some would have us believe.

It is about school kids, uni students, less tech savvy older people, less tech savvy younger people, documentary film makers, 55 year old redundant workers, universities and TAFEs trying to teach in a more accessible way, and the cost for anyone down under consuming the creative or innovative endeavour of others. For at the end of the day, out of kilter IP settings have and will continue to create a largely silent and growing class of ‘have-nots’.

So today I hope to connect the dots to the many everyday Australians that stand to benefit from the policy changes we have recommended to Government. For there is a compelling policy narrative to be had here — one of innovation and agility. But perhaps more importantly it is also one of equity that we can relate to everyday Australians. For when we relate the benefits of change to many Australians we know what is fair.

To understand our recommendations, it’s important to start at the beginning and think about IP more broadly. And in doing so our simple truth of what is fair should not be surprising. For today IP is embedded in all aspects of modern daily life. It is akin to love in the immortal words of The Troggs’ 1967 classic — Love is all around us. Because IP affects everything and everyone. And it is for this very reason that IP is a policy exemplar — it puts the public into public policy. And perhaps this is why changes to IP are so contentious — because their change affects everyone.

But perhaps it’s also reflective of the plethora of reviews and studies into IP policy over the past two decades or so — work that reflects thousands of hours of professional endeavour, angst and millions of dollars.

This figure depicts the number of reviews into the IP weeds, and often these have been very tightly focused on a particular sort of IP right rather than considering a suite of IP rights. And we know that an array of rights is almost always used by firms and creators to protect expressions of ideas in the modern age.

And it is the siloed nature of these previous IP reviews that has rendered them less effective. Where the concentrated costs of change are fully accounted for, while the diffuse and at times unquantifiable future benefits to the community are considered partially at best. And it’s hard, if not impossible, to make good public policy when you’re only thinking about some of the public.

Only three reviews have taken a whole-of-IP approach in the last two decades: the 2000 Ergas Committee on Intellectual Property and Competition Policy Review, the 2008 Cutler Review of Australia’s innovation system and then the Productivity Commission most recently.

The Harper Review of Competition Policy explicitly recognised this when it considered IP matters. It’s why they recommended the Productivity Commission analyse the IP system from a broad perspective. And the Government not only endorsed that recommendation but sent us a very broad terms of reference — the ultimate public policy circuit breaker.

And it’s in the Commission’s DNA to take such an approach. Indeed, our Act requires us to take a community-wide approach: to look at the IP forest rather than particular trees. The only shackle on the inquiry was the requirement to be bound by existing international agreements, but not to the extent that prevented us from making recommendations about how to improve such agreements in the future. And we certainly accepted the invitation to do so.

And with a community-wide view in our DNA, we invest much in community consultation and transparency. It is very much a ‘you tell us’ approach to public policymaking. Where all we ask of inquiry participants is to show us the evidence … and to be honest. And to harvest this evidence, we held six public hearings (hearing from just over 120 inquiry participants), we held 69 meetings with creators, consumers and experts, we conducted four round tables involving around 50 participants, and we examined and consulted across seven different jurisdictions to get an idea about what was specific to Australia and what was not. And this is before considering the 620-plus public submissions made to the inquiry — every one of them read and studied. Along with our own original analysis. This is how we establish our evidence base.

And because we take an evidence based approach, we even (heaven forbid) change our minds when presented with compelling evidence. This can be seen in our final report where evidence in hearings and the second (post draft report) round of submissions did change our minds (from draft to final report). As can be seen in the areas of business method and software patents, and in plant breeders’ rights, and even in the form of fair use that we recommended in the final report. We conceded, and rightly so, that the smart folk at the Australian Law Reform Commission got the framing of fair use exceptions right and we strayed.

So it does beggar belief that some folk have suggested our report ignores the evidence. For those folk, it is the very breadth of our evidence that helps us to assess what some claim to be evidence but what on closer examination proved to be groundless and (at times) self-serving assertion.

Now balance matters in the high-wire act of getting IP policy settings right. Crafting an incentive for creators and innovators to bring ideas to market, while making sure those incentives don’t cruel the welfare of the broader community, is no mean feat.

And the community had a lot to tell us about the balance of IP — and that the balance was out of kilter for some rights. For some, the balance was fundamentally broken – even if it still represents the finest legal thinking of the 19th century. So the inquiry’s immediate goal was to work out how to fix the balance, while also recognising that mechanisms needed to be put in place to keep the appropriate balance for future generations.


Perhaps what I mean is best shown by our examination of patents. For here we found that too many are granted to low-value innovation. And many are used for less honourable motives. We heard evidence of patents being used strategically to prevent follow-on innovation and stymie competitive forces, and to delay the introduction of cheaper generic drugs (at an annual cost of a quarter of a billion dollars).

So we made seven recommendations to fix these problems. And then to make sure it stays fixed, we recommended an ‘objects clause’ — a legislated roadmap for future courts on when and how patents should be granted.

Now no one will argue with the principle that patents are supposed to reward socially valuable innovation and inventions (except perhaps some patent attorneys). For the new ideas and ways to implement them are ultimately what drive wellbeing in society. But in practice, the Commission found a large proportion of patents are granted to ‘low value’ ideas. Think a pharmaceutical of identical formulation to a predecessor, but just with a different dosage. Think a pizza box that folds out into a bib.

This isn’t a new problem, but it’s one that other jurisdictions (like Europe) seem to have had greater resolve to fix relative to Australia. And while there have been local efforts to ‘raise the bar’ — to make getting a patent harder — we had to assess the assertion that we had raised the bar enough down under by examining the outcomes. And in doing so we discovered that assessing patent eligibility had seen very modest change.

We examined the patents (for the same innovations) that had been granted here and in Europe since we raised the bar — and it looked more like raising the limbo bar at a toddlers’ birthday party — no one lost out.

For the Commission getting patents policy right is akin to the John West business model. It’s the fish that John West rejects that makes John West the best. And our original analysis revealed that Australia, despite purporting to raise the bar, continues to grant a lot of patents to innovations that the EU rejects on the grounds of not being good enough. So there is a long way to go before we are the ‘John West’ of patent policy.


Then there is fairness in enforcement. We heard from participants about the high cost of enforcing IP rights, particularly when a court is involved. And indeed it was a concern of authors with any change to the copyright exception provisions. One participant described the situation:

"… we have a Rolls Royce system called the Federal Court. You go there. The starting price will be $200,000 minimum… Take it from there. $400,000, and then you might have the costs of the other side".

A lot of IP disputes don’t need the Rolls Royce; they can make do with an agile, speedy Vespa. To alleviate these costs, we drew on the experiences of the UK’s Intellectual Property Enterprise Court. My fellow Commissioner Jonathan Coppel and I met with Justice Richard Hacon (the head judge of the UK IPEC) — a terrific meeting where Justice Hacon took us beyond the research and conveyed why the UK model has worked where others had floundered. By capping costs, trial times and damages, dispute resolution costs are reduced and firms have greater certainty. But most importantly the separate list had allowed the discipline of low-cost DNA. And it is for this reason we recommended that the Government should introduce a specialist IP list in the Federal Circuit Court, encompassing features similar to those of the IPEC, including limiting trials to two days, caps on costs and damages, and a small claims procedure. Such low-cost DNA appears to be alive and well in our Federal Circuit Court. And contemporary research from the UK shows the IPEC model is delivering access to justice to a large number of creators that would never have defended or challenged rights in the past.

Now enforcement might sound tedious, but it is at the end of the day an enduring element of what is fair and what is good public policy. So again we return to equity — access to enforcement is access to justice alike for authors on copyright and firms, especially SMEs or new entrants, for patents, design rights and trade marks. It is also a way of future proofing IP policy so it remains fair, balanced and in the interests of the community today as well as tomorrow.

Now at this point, a few of you may be quietly thinking ‘I thought I was attending a fair use conference, but now I’m being lectured on patents and enforcement’. So let’s talk copyright.


Our patent recommendations were largely about addressing what is perhaps best thought of as unfinished business — a material residual imbalance. In contrast, we found our starting point for copyright policy was arguably about trying to find any semblance of balance. Term (at life plus 70 years) and scope (with our current exceptions) are not balanced, and are firewalled from change by international agreements. But we looked and found some areas where meaningful reform can and should be made.

Thanks to geoblocking, Australians pay more for digital content (around 67 per cent more for music) or get less or later access (like the diminished library of titles available on Netflix in Australia relative to the US). You know something is amiss when the haves and have nots are delineated by who has a teenager in the home capable of circumventing the geoblock. We heard from many participants there is legal uncertainty about the ability for consumers to access legitimate overseas content. And this is the only fair — and indeed workable — weapon to counter online piracy. Creating fair access and eroding the unfair geographic price discrimination that is geoblocking. So we recommended that consumer rights be clarified (and this also applies to ensuring that rights holders can’t contract around copyright exceptions, or rely on technological protection measures to prevent legitimate uses).

Turning to copyright collecting societies. They play an important role for rights holders and they can make a meaningful difference in lowering transaction costs for authors, creators and content consumers. But they can also wield market power. This lifts the governance bar for what we need to see from a transparency and accountability perspective from these agencies. There have been questions in this inquiry about the effectiveness of the Code of Conduct for Collecting Societies. And we learned in meetings with UK and European experts, and even their collecting societies, that they had lifted the governance code bar in a substantive way and in their view well above the down under code of conduct. So we recommended that the ACCC review arrangements for collecting societies with a view to strengthening governance and transparency, ensuring that the current code represents contemporary best practice (in substance and form) and balances the interests of societies and licensees, and considering whether the code should be made mandatory. For at the end of the day, and as a de minimis, you need to be able to follow the money. And we couldn’t, and nor could rights holders or rights users.

Busting the myths: Parallel Import Restrictions

Turning now to the myth busting part of our inquiry — and here it seemed like a monumental sand dune of argument and assertion to be traversed. Three steps up and then two back. And this was especially the case when it came to any mooted change to copyright, and especially parallel import restrictions on books and fair use.

The inquiry was told definitively by publishers that parallel import restrictions do not raise book prices and was provided with some purported evidence to that effect. But on closer examination this just didn’t stack up. So the Commission purchased data on book prices, compared more than a thousand like-for-like titles in Australia, the UK and the US, and found that books were indeed more expensive in Australia — by around 20 per cent on average — than in the other jurisdictions. Myth busted.

The inquiry was then told by publishers and authors that parallel import restrictions are crucial for local markets and to support local authors. But, alas, this stumbled in considering the workings of the market — for PIRs don’t just apply to books by Australian authors. Hilary Mantel’s books get the same protection as Hannah Kent’s, with the benefits largely going to offshore authors and publishers. So PIRs are effectively a tax on readers in Australia, and the publishers the revenue collection agency. And the higher costs of books are borne by all Australians from the bibliophiles, to the students as they (or their parents) are forced to pay more for Harry Potter, Diary of a Wombat and the dreaded text books.

And we know from our previous analysis that from the annual $25 million book tax (from PIRs) around $15 million flows offshore. So it’s hard to view PIRs as anything but the least effective way to support local authors, and perversely at the expense of local readers. We thought about limiting PIRs (and their tax impost) to only the books of local authors – so at least the support is targeted at local authors (although we’re still not quite sure how much of this they see, and I’ll come back to this later). But alas the shackles of our international agreements have rendered that option unavailable. So direct government support becomes the policy no-brainer if the goal is to cost-effectively support local writers and creators, without harming their readers and with the added bonus of cutting out both the middleman and offshore authors. And we explored this angle more in our final report — including establishing that the Government (and ultimately taxpayers) provide around $40 million of direct support to local authors today.

And on the middleman — we did listen to the case made by locally based publishers that the additional money made from PIRs delivering them higher prices is then used to cross subsidise local authors. So we requested this evidence — show us the money and what you do differently to your counterparts in the US and Europe. But we were met with the sound of deafening silence. So again we could not follow the money. Myth busted.

The inquiry was then told that removing PIRs destroyed the New Zealand publishing sector and decimated New Zealand authors. Indeed, based on some of the submissions and commentary made to the Commission, one might expect that literacy had all but vanished in Middle Earth.

But when the inquiry looked closely at these claims, the timeline didn’t stack up — a gap of more than a decade between PIRs being removed in 1998 and the global restructure of the publishing sector, which unsurprisingly reached New Zealand given its market size and locale. Moreover, the removal of parallel import restrictions in New Zealand does not appear to have had significant negative effects on domestic creative effort in the books sector. Analysis by Deloitte Access Economics in 2012 (some 14 years after PIRs’ removal in New Zealand) found that the number of new NZ book titles published annually has remained fairly steady. Data on the number of authors shows that, following the reform, the share of authors in overall employment has increased in New Zealand. So rumours of the demise of Kiwi authors are just that – rumours and not evidence. Myth busted.

The inquiry was then told that removing PIRs would lead to the dumping of cheap books printed overseas into Australia.

Again we asked the publishers for the US-based evidence they had cited in our public hearings. But again all we heard was the sound of silence. It’s a hard task to check something that hasn’t happened, but the Commission examined the claim by looking at who actually publishes what in different markets. Using more than a thousand like-for-like titles across the Australian and UK markets, we found that about 95 per cent of books were published in both markets by the same publisher or subsidiary. So the threat that you’ll materially erode your own profit margins if you don’t get your way is neither a compelling business case nor a compelling public policy argument. So myth busted.

Busting the myths: fair use

The inquiry then turned its attention to fair use, where the same underlying issue of imbalance persists but it is a faster growing divide.

In a nutshell, the existing fair dealing provisions provide prescriptive exceptions to the use of copyright material, whereas fair use is a more principles-based approach to copyright exceptions. The biggest difference between the two in operation — prescriptive exceptions are glacial at best in responding to change, where principles-based exceptions can adapt and respond more readily. The glacial adaptive experience with fair dealing is best captured in the legislative refresh around recording shows on VHS and time-shifting using PVRs. The family VHS VCR was mothballed down under by the time our Copyright Act recognised its form of copying.

So the question is one of whether prescription or principles is most appropriate in a modern economy of today and tomorrow. And it is here there’s a paramount point of distinction between PIRs and fair use. We know with parallel import restrictions that technology, the digital age and new business models have proved a great equaliser. Digital books, real time publishing (as we are seeing in countries like France) will continue to discipline the price premium local publishers will extract with PIRs. So perhaps where we find ourselves today, with PIRs costing Australian readers around $25 million each year, is about as bad as it will get.

And while technology and the digital age reduce or constrain the costs of PIRs, the same cannot be said for our system of copyright exceptions. And here’s the policy rub and where the greatest policy imperative looms largest for government. For the inequities and costs of fair dealing are growing and will continue to do so with technological and digital advances.

So it’s critical to put fair use very closely under the magnifying glass.

It also required the heavy glass frames of the myth busters. One claim was that fair use would lead to increased court costs and uncertainty. The question about courts and uncertainty is a complicated one.

The Commission consulted widely on this issue, and the community-wide response was far more negative about the existing regime than one of fair use.

The Commission heard stories about librarians being unable to provide material to the community due to uncertainties around fair dealing. The Commission heard about the gains that could be made by making greater use of grey literature, to which fair dealing did not always extend. The Commission heard how fair dealing was constraining and costing our local documentary film makers. The Commission heard directly from Universities Australia about how institutions were reluctant to use material for Massive Open Online Courses — MOOCs — because fair dealing might not extend to them. The Commission heard about how the status quo meant that millions of dollars of public funds are spent each year to pay licence fees for freely available internet materials and even thumbnail images of book covers so that they can be used on school intranet sites. We received written evidence from the Council of Australian Governments that Australian schools are paying the Copyright Agency over $9 million each and every year for material that is freely available on the internet. And we know there is a further $11 million each year that the Agency collects and cannot redistribute. So it goes into a pool to be distributed to members who were not the creative originator. On listening to the full spectrum of consumers, creators and curators, the story that emerged was one where the status quo was uncertain and inefficient, and in spite of the name, anything but a fair deal.

The real question then is: could fair use be worse? Having already addressed concerns about court costs and access to justice separately, this is really a question about whether it is appropriate for rights holders, for content users, or ultimately for an impartial third party, like a judge, to determine when an exception to copyright should apply and when a use is fair.

Under the model proposed by the Australian Law Reform Commission (ALRC) and endorsed by our inquiry, fair use in Australia would rest on four fairness factors — purpose, nature, substantiality and market effect. Courts are well versed in applying principles-based laws in many areas, such as consumer and employment law. And we also met with folk from the US who showed us the practical guidance materials that teachers, libraries and businesses there use to confidently apply these factors in their day-to-day lives — guidance notes which abound in the US and could be readily adapted for Australia.

And in the Commission’s view, there’s ample evidence, both at home and abroad, that with such guidance the community can be trusted to employ fair use fairly. Myth busted … and with a modicum of certainty.

Another simpler myth to bust is the claim that fair use is really free use.

This is simply an oxymoron: it cannot hold as an assertion because of the fourth fairness factor — market effect. The market effect on rights holders is a key component of the fairness factors and of what’s allowable. A use that erodes the market potential for a creator is simply not allowable under this factor. So we asked the publishers and the authors to give us examples of what they are being paid for today under fair dealing that they would not be paid for tomorrow under fair use. And we either heard stony silence, or we heard of two US examples — Google Books and the case of the transformative rapper.

They argued that Google’s “open slather” digitisation of US library books is tantamount to free use. But the US courts did not agree with this portrayal. They instead found that Google’s Library Project did not provide the books in their entirety as a substitute for original works, but only very small snippets. And most importantly, in assessing the fourth fairness factor — the effect of the use upon the potential market for or value of the copyrighted work — the courts found the snippets did not fall foul. Where the snippet view gave a researcher or student all the information they needed, so that they did not then buy the book in its entirety, the courts examined the evidence and found that this type of information was most likely factual in nature, and therefore not even subject to copyright. Moreover, if you step back for a moment and think about what the Google Library Project represents, it is no more than the 21st century equivalent of browsing in a bookstore. So Google is supporting book markets, and thereby authors.

Myth busted.

It was claimed that fair use destroys publishing industries and has done so in Canada, particularly in their educational resource sector. That claim did not stand up to even modest scrutiny: the experience in Canada has been grossly misrepresented, and the claim ignores specific market factors there. To begin with, Canada doesn’t even have a system of fair use — they have fair dealing. And our Canadian cousins also jettisoned their educational licensing regime — we have not.

But that didn’t get in the way of some trying to shoe-horn unrelated factors in Canada into a story of potential Armageddon in Australia. And to sell a story of Armageddon you need a big number. The number oft-cited by some local luminaries is that fair use would cost the Australian economy $1.3 billion. The number is based on work by PwC, commissioned by rights holders, and curiously contains the following disclaimer:

This Report was prepared for APRA AMCOS, PPCA, Copyright Agency│Viscopy, Foxtel, News Corp Australia and Screenrights. In preparing this Report we have only considered the requirements of these organisations. Our Report is not appropriate for use by persons other than these organisations, and we do not accept or assume responsibility to anyone other than these organisations in respect of our Report.

Having been the CEO of an economics consulting firm in a previous life, I found this a revealing disclaimer. So we read on. We read the entire PwC report cover to cover. And we found it to be an accurate disclaimer.

But there was a modicum of economics in the report. In particular, the following in relation to the effect of Canada’s introduction of a broader fair dealing provision for educational material:

These impacts, while significant for the industry, represent transfers (i.e. from creators to users) rather than economic costs. (That is, if secondary derivative works are not truly transformative, then fair use would merely represent a transfer of supply and demand between various groups within society and would not represent ‘net new’ economic growth.)

So even if we are to accept at face value our local luminaries’ oft-cited cost of $1.3 billion if Australia were to adopt fair use, this would represent a transfer to Australian readers and consumers of the copyright material: the libraries, the new business entrants, the students, the MOOC makers and the local MOOC recipients. So their big number actually represents a big benefit to many Australians.

So whilst we spent some time carefully unpacking the assertions and claims in the PwC report (the box on page 197 of our report provides the highlights, and a sobering read), late in the day the inquiry also had access to another resource: a cost-benefit analysis undertaken by Ernst & Young for the Department of Communications. This report specifically analysed the winners and losers from moving from fair dealing to other arrangements, including fair use. It was a refreshing read — a considered, albeit conservative, analysis of what might happen today if fair use came to Australia. It was a here-and-now analysis, not a forward-looking one. It revealed that there would be no immediate Armageddon from fair use; rather, there would be immediate net economic benefits. And that’s before taking into account how the shortcomings of the status quo affect matters into the future.

So allow me to share some forward-looking thoughts, because they reveal that moving from fair dealing to fair use is not the zero-sum game many portray.

Think, no access to data for data mining means no incentive to the workforce to develop those skills — skills which other jurisdictions are developing in spades.

Think, hampering access to cloud computing means that Australian firms and families are left to use inefficient, antiquated systems in comparison to other markets and countries that can make use of the latest technology.

Think, schools and universities not paying $9 million each year for material that is freely available.

And as flagged earlier, think, providing universities and educators fair and certain access to material for MOOCs will enable a new way to skill and reskill our workforce. And this is perhaps one of the most compelling equity issues hidden away in the fair use free-for-all. For it’s not just about the millions of export dollars lost because our universities are constrained and unable to develop and export MOOCs.

It’s about what’s needed to re-equip our workforce to remain relevant. Research reveals that the nature of work is changing such that education needs to be continuous, and that adult learning needs to be routinely available for all. A university student today will have 17 different jobs, and what they learn at school or university in no way represents the conclusion of their formal learning if they are to remain productive and, more importantly, employed. And if you think of the structural changes in today’s labour market, with mature-age workers facing or avoiding redundancy, today’s workers will need to readily tap into new ways of learning. MOOCs will play a vital role in doing so; and fair use in Australia will play a vital role in making sure they can.

Think Israel, which in introducing fair use did so with a mind to what would be of future benefit to creators, innovators and educators. So the world’s cultural and innovation pin-up country “gets it”. Indeed, it cast its policy narrative in this very way. And if we are to be a truly agile economy, this is a policy change that lends itself to an incredibly positive policy narrative. The narrative’s flip side is that fair use is a policy lever to avoid a looming education divide of haves and have-nots. Nor do we need to reflect for too long to see what political and policy outcomes await if we allow that to happen. So you can see why there is no single big number of benefit: at the end of the day, such an analysis is complex and simply doesn’t lend itself to a single number. But what we do know is that there is no Armageddon. And there are benefits to be had, and they reside where the interests of innovation and equity co-exist.

And we know that these benefits can only grow in the future as technology evolves. But perhaps more importantly, so will the costs of policy failure if we do not jettison fair dealing to the mothball-smelling attic alongside the VCRs.

So I hope today’s tales of myth busting reveal less a case of the Commission’s ideology (as some have suggested) and more the open mind that we try to bring to bear when considering public policy change. We asked, we listened, we evaluated — using work in the public domain, international experience, work commissioned by the Australian Government, work commissioned by others and our own analysis — in order to determine which arguments are wolves in sheep’s clothing and which we can rely on to frame policy that makes a positive difference for all Australians. And our resulting recommendations to remove parallel import restrictions and introduce an exceptions regime of fair use are based on evidence, and made with only the interests of all Australians, not just a few, in mind.

So this inquiry’s story ends with a suite of policy change across all forms of intellectual property (some 25 recommendations) that we have made to the Government. And taken in their entirety, they represent an opportunity to deliver tangible benefits to most Australians and not just a few.

Consumers and content-using businesses would benefit greatly — from fair, certain and (for books) cheaper access to content and creative endeavour. Government, and ultimately taxpayers, would benefit from a substantial reduction in health costs (at least $250 million each year) by constraining the costly and strategic gaming of the PBS through pharmaceutical patents.

Rather than hindering innovation and creativity, as claimed by some participants, IP reform would invigorate innovation and competitive forces. Australian firms will be able to take full advantage of opportunities in cloud computing solutions. Medical and scientific researchers will be able to make better use of text and data mining. Universities and TAFEs will have the flexibility to offer MOOCs. The education sector will avoid paying millions of dollars each year to use materials that are freely available online. University students will pay less for textbooks and have more access to MOOCs. Workers needing to remain skill-relevant, whether due to age or structural change, will also have access to skill-adaptive MOOCs. Innovative SMEs will be able to operate without fear of infringing frivolous or strategic patents, and will be better able to enforce legitimate rights through low-cost dispute resolution mechanisms.

All in all — there is a compelling policy narrative to be had here.

So on a concluding note, let me share a snippet I chanced upon recently — but not from Google. In the very first essay of the copyright treatise What If We Could Reimagine Copyright?, authored by the book’s editors, Professors Giblin and Weatherall, there is a heading. It is: ‘The ‘public interest’ (please don’t stop reading)’. It made me pause and reflect on our Inquiry report. And on the words of Theodore Roosevelt’s man in the arena, who spends himself in a worthy cause, and who, if he fails, at least fails while daring greatly. For in the arena of copyright policy, perhaps our final chapter should have been entitled ‘the public interest — please don’t stop believing’.

The challenge for policymakers is to focus on the near-silent majority of users, of adapters, of educators and creators that will need fair use to bring about the next wave of innovation, jobs and equitable prosperity. For its absence will simply foster a society of fewer haves and more have-nots.

So for the Commission, fair use has become not a nice to have, or even a good to have, but a policy must have. At the end of the day we asked and answered a simple question – what is fair?

Thank you.





Bots without borders: how anonymous accounts hijack political debate


A bot (short for robot) performs highly repetitive tasks by automatically gathering or posting information based on a set of algorithms. They can create new content and interact with other users like any human would. But the power is always with the individuals or organisations unleashing the bot.
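The mechanics are simple enough to sketch. Below is a toy illustration of the pattern in Python, using plain lists as hypothetical stand-ins for a real social-media API — no actual service, account names or endpoints are involved:

```python
import random

# Canned messages the bot cycles through (illustrative placeholders).
CANNED_REPLIES = [
    "Don't believe the polls!",
    "The media won't tell you this...",
    "Read this before you vote:",
]

def run_bot(feed, hashtag, rng=None):
    """Reply to every post containing `hashtag` with a canned message.

    `feed` is a list of {"id": ..., "text": ...} dicts standing in for
    a timeline fetched from a real platform.
    """
    rng = rng or random.Random(0)
    replies = []
    for post in feed:
        if hashtag in post["text"]:
            replies.append({
                "in_reply_to": post["id"],
                "text": rng.choice(CANNED_REPLIES),
            })
    return replies

feed = [
    {"id": 1, "text": "Excited to vote! #ImWithHer"},
    {"id": 2, "text": "Nice weather today"},
    {"id": 3, "text": "#ImWithHer all the way"},
]

replies = run_bot(feed, "#ImWithHer")
print(len(replies))  # → 2: one canned reply per matching post
```

Run in a loop against a live feed, and multiplied across thousands of accounts, this trivial pattern is all it takes to flood a hashtag.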

Politicalbots.org reported that approximately 19 million bot accounts were tweeting in support of either Donald Trump or Hillary Clinton in the week before the US presidential election. Pro-Trump bots worked to sway public opinion by secretly taking over pro-Clinton hashtags like #ImWithHer and spreading fake news stories.

This article is by Katina Michael, Professor, School of Computing and Information Technology, University of Wollongong and was originally published on The Conversation. See the original article.

Bots have not just been used in the US; they have also been used in Australia, the UK, Germany, Syria and China.

Whether it is personal attacks meant to cause a chilling effect, spamming attacks on hashtags meant to redirect trending, overinflated follower numbers meant to show political strength, or deliberate social media messaging to perform sweeping surveillance, bots are polluting political discourse on a grand scale.

Fake followers in Australia

In 2013, the Liberal Party internally investigated an unexpected surge in Twitter followers for the then-opposition leader, Tony Abbott. On August 10, 2013, Abbott’s Twitter following soared from 157,000 to 198,000, having grown until then by around 3,000 per day.

A Liberal Party spokesperson revealed that a spambot had most likely caused the sudden increase in followers.

An April 2013 study found 41% of Abbott’s then-most-recent 50,000 Twitter followers were fake. Most of the Coalition’s supporters do not use social media.  

Fake trends and robo-journalists in the UK

As the UK’s June 2016 referendum on European Union membership drew near, researchers discovered automated social media accounts were swaying votes for and against Britain’s exit from the EU.

A recent study found 54% of accounts were pro-Leave, while 20% were pro-Remain. And of the 1.5 million tweets with hashtags related to the referendum between June 5 and June 12, about half a million were generated by 1% of the accounts sampled.

Following the vote, many Remain supporters claimed social media had an undue influence by discouraging “Remain” voters from actually voting.

Fake news and echo chambers in Germany

German Chancellor Angela Merkel has expressed concern over the potential for social bots to influence this year’s German national election. The right-wing Alternative for Germany (AfD) already has more Facebook likes than Merkel’s Christian Democrats (CDU) and the centre-left Social Democrats (SPD) combined. Merkel is worried the AfD might use Trump-like strategies on social media channels to sway the vote. And it is not just that bots are generating the fake news: the algorithms Facebook deploys as content is shared between user accounts create “echo chambers” in which that content reverberates.
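That reverberation can be illustrated with a toy ranking function — a sketch only, since Facebook’s actual feed algorithm is proprietary and far more complex. The feed surfaces posts that overlap with topics the user has already engaged with, so agreeable content rises to the top:

```python
def rank_feed(posts, liked_topics):
    """Order posts so those matching the user's past likes come first."""
    def score(post):
        return len(set(post["topics"]) & set(liked_topics))
    # Python's sort is stable, so tied posts keep their original order.
    return sorted(posts, key=score, reverse=True)

# Invented posts, each tagged with topics.
posts = [
    {"id": 1, "topics": ["immigration", "afd"]},
    {"id": 2, "topics": ["weather"]},
    {"id": 3, "topics": ["afd", "merkel"]},
]

# A user who engaged with AfD content sees more of it first.
feed = rank_feed(posts, liked_topics=["afd"])
print([p["id"] for p in feed])  # → [1, 3, 2]
```

Each interaction with the top-ranked content feeds back into `liked_topics`, narrowing what the user sees next — the echo chamber in miniature.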

Spambots and hijacking hashtags in Syria

During the Arab Spring, online activists were able to provide eyewitness accounts of uprisings in real time. In Syria, protesters used the hashtags #Syria, #Daraa and #Mar15 to appeal for support from a global theatre. It did not take long for government intelligence officers to threaten online protesters with verbal assaults and one-to-one intimidation techniques. Syrian blogger Anas Qtiesh writes:

These accounts were believed to be manned by Syrian mokhabarat (intelligence) agents with poor command of both written Arabic and English, and an endless arsenal of bite and insults.

But when protesters continued despite the harassment, spambots created by Bahraini company EGHNA were co-opted to create pro-regime accounts. These flooded the pro-revolution hashtags, drowning out the protesters’ voices with irrelevant information, such as photography of Syria. @LovelySyria, @SyriaBeauty and @DNNUpdates dominated #Syria with a flood of predetermined tweets every few minutes from EGHNA’s media server.

Since 2014, the Islamic State terror group has “ghost-tweeted” its messages to make it look as if it has a large, sympathetic following. This is to attract resources, both human and financial.

Tweets have consisted of alleged mass killings of Iraqi soldiers and more. This clearly shows how extremists are employing the same social media strategies as governments.

Sweeping surveillance in China

In May 2016, China was exposed for purportedly fabricating 488 million social media comments annually in an effort to distract users’ attention from bad news and politically sensitive issues.

A recent three-month study found 13% of messages had been deleted on Sina Weibo (Twitter’s equivalent in China) in a bid to crack down on what government officials identified as politically charged messages.

It is likely that bots were used to censor messages containing key terms that matched a list of banned words. Typically, this might include words in Mandarin such as “Tibet”, “Falun Gong” and “democracy”.
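Mechanically, such keyword-based censorship is a very simple filter. Here is a minimal sketch — the banned terms and messages are illustrative only, not an actual government list:

```python
# Terms whose presence causes a message to be suppressed (illustrative).
BANNED_TERMS = {"tibet", "falun gong", "democracy"}

def censor(messages, banned=BANNED_TERMS):
    """Return only the messages that contain no banned term."""
    kept = []
    for msg in messages:
        text = msg.lower()  # case-insensitive substring match
        if not any(term in text for term in banned):
            kept.append(msg)
    return kept

posts = [
    "Great food in Chengdu today",
    "Thoughts on democracy and reform",
    "Visiting Tibet next month",
]
print(censor(posts))  # → ['Great food in Chengdu today']
```

Real systems are presumably more sophisticated, but the basic shape — match against a banned list, delete on hit — is the same.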

What effect is this having?

The deliberate act of spreading falsehoods by using the internet, and more specifically social media, to make people believe something that is not true is certainly a form of propaganda. While it might create short-term gains in the eyes of political leaders, it inevitably causes significant public distrust in the long term.

In many ways, it is a denial of citizen service that attacks fundamental human rights. It preys on the premise that most citizens in society are like sheep; a game of “follow the leader” ensues, making a mockery of the “right to know”.

We are using faulty data to come to phoney conclusions, to cast our votes and decide our futures. Disinformation on the internet is now rife – and if that has become our primary source of truth, then we might well believe anything.



Mobiles, metadata and the meaning of ‘personal information’

The Federal Court last week determined not to resolve the great privacy question left over, like a bad hangover, from 2013: When is information ‘about’ Ben, and when is it ‘about’ a device or a network?

While at first glance you might think that the Privacy Commissioner losing an appeal would be bad news for privacy, the decision in Privacy Commissioner v Telstra Corporation Limited [2017] FCAFC 4 is not quite the train wreck that some have suggested.  It has not gutted the definition of ‘personal information’, nor has it said that metadata from telecoms is not protected by the Privacy Act.  It simply clarified that the word ‘about’ is an important element in the definition of ‘personal information’.

This article is by Anna Johnston, Director of Salinger Privacy and is republished here with permission. Anna is a former Deputy Privacy Commissioner for NSW, and a former Chair of the Australian Privacy Foundation. Salinger Privacy consult, present, publish, train and blog on privacy law and practice. Follow Anna on Twitter @SalingerPrivacy or subscribe to her excellent newsletter. See the original article

That might not sound like something worth arguing about, but understanding this little word ‘about’ has been critical in the on-going case involving Telstra and the definition of ‘personal information’ – upon which all our legislated privacy principles rely.

First, the background. When the Australian Government was preparing in 2013 to introduce its mandatory data retention laws, to require telcos to keep ‘metadata’ on their customers for two years in case law enforcement types needed it later, tech journo Ben Grubb was curious as to what metadata, such as the geolocation data collected from mobile phones, would actually show. He wanted to replicate the efforts of a German politician, to illustrate the power of geolocation data to reveal insights into not only our movements, but our behaviour, intimate relationships, health concerns or political interests.

While much fun was had replaying the video of the Attorney General’s laughable attempt to explain what metadata actually is, Ben also worked on a seemingly simple premise: “the government can access my Telstra metadata, so why can’t I?”

Exercising his rights under what was then National Privacy Principle (NPP) 6.1, Ben sought access from his mobile phone service provider, Telstra, to his personal information – namely, “all the metadata information Telstra has stored about my mobile phone service (04…)”.

At the time of his access request, the definition of ‘personal information’ was “information or an opinion (including information or an opinion forming part of a database), whether true or not, and whether recorded in a material form or not, about an individual whose identity is apparent, or can reasonably be ascertained, from the information or opinion”.

(Since then, the definition of ‘personal information’ has changed slightly, NPP 6.1 has been replaced by APP 12, and the telecom data retention laws were passed with a provision making it very clear that data that is required to be kept under the new data retention provisions is to be considered ‘personal information’ under the Privacy Act. Nonetheless, Ben Grubb’s case has ramifications even under the updated laws, because the breadth of the definition of ‘personal information’ was at issue.)

Telstra refused access to various sets of information, including location data on the basis that it was not ‘personal information’ subject to NPP 6.1. Ben lodged a complaint with the Australian Privacy Commissioner.  While the complaint was ongoing, Telstra handed over a folder of billing information, outgoing call records, and the cell tower location information for Ben’s mobile phone at the time when Ben had originated a call, which is data kept in its billing systems.

What was not provided, and what Telstra continued to argue was not ‘personal information’ and thus need not be provided, included ‘network data’. Telstra argued that the geolocation data – the longitude and latitude of mobile phone towers connected to the customer’s phone at any given time, whether the customer is making a call or not – was not ‘personal information’ about a customer, because on its face the data was anonymous.

The Privacy Commissioner ruled against Telstra on that point in May 2015, finding that a customer’s identity could be linked back to the geolocation data by a process of cross-matching different datasets. Privacy Commissioner Timothy Pilgrim made a determination which found that data which “may” link data to an individual, even if it requires some “cross matching … with other data” in order to do so, is “information … about an individual”, whose identity is ascertainable, meaning “able to be found out by trial, examination or experiment”. The Privacy Commissioner ordered that Telstra hand over the remaining cell tower location information.
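The cross-matching described in the determination can be illustrated with a toy example. The identifiers, names and records below are invented, and the layout is not Telstra’s actual schema:

```python
# 'Network data': anonymous on its face — a device identifier, the
# location of the connected tower, and a timestamp.
network_data = [
    {"imsi": "50501-XYZ", "tower": (-33.87, 151.21), "time": "09:00"},
    {"imsi": "50501-ABC", "tower": (-37.81, 144.96), "time": "09:05"},
]

# A separate system maps device identifiers to account holders.
billing_data = {
    "50501-XYZ": "Ben Grubb",
    "50501-ABC": "Jane Citizen",
}

def cross_match(records, accounts):
    """Join the two datasets: each 'anonymous' record gains a name."""
    return [
        {**rec, "customer": accounts[rec["imsi"]]}
        for rec in records
        if rec["imsi"] in accounts
    ]

for rec in cross_match(network_data, billing_data):
    print(rec["customer"], "was near", rec["tower"], "at", rec["time"])
```

Once joined, the location history is plainly information about a person — which is why the Commissioner treated identity as “ascertainable” even though no single dataset contained it.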

Telstra appealed the Privacy Commissioner’s determination, and in December 2015 the Administrative Appeals Tribunal (AAT) found in Telstra’s favour – but not for the reason you might have expected.  The case clearly turns on how the definition of ‘personal information’ should be interpreted, with both parties arguing about whether or not Ben was ‘identifiable’ from the network data, including how much cross-matching with other systems or data could be expected to be encompassed within the term ‘can reasonably be ascertained’.

Indeed the AAT judgment went into great detail about precisely what data fields are in each of Telstra’s different systems, and what effort is required to link or match them up, and how many people within Telstra have the technical expertise to even do that, and how difficult it might be. But then – nothing. Despite both parties making their arguments on the topic of identifiability, the AAT drew no solid conclusion about whether or not Ben was actually identifiable from the network data in question.

Instead, the AAT veered off-course, into questioning whether the information was even ‘about’ Ben at all. Using the analogy of her own history of car repairs, Deputy President Stephanie Forgie stated:

“A link could be made between the service records and the record kept at reception or other records showing my name and the time at which I had taken the care (sic) in for service. The fact that the information can be traced back to me from the service records or the order form does not, however, change the nature of the information. It is information about the car … or the repairs but not about me”.

The AAT therefore concluded that mobile network data was about connections between mobile devices, rather than “about an individual”, notwithstanding that a known individual triggered the call or data session which caused the connection. Ms Forgie stated:

“Once his call or message was transmitted from the first cell that received it from his mobile device, the data that was generated was directed to delivering the call or message to its intended recipient. That data is no longer about Mr Grubb or the fact that he made a call or sent a message or about the number or address to which he sent it. It is not about the content of the call or the message. The data is all about the way in which Telstra delivers the call or the message. That is not about Mr Grubb. It could be said that the mobile network data relates to the way in which Telstra delivers the service or product for which Mr Grubb pays. That does not make the data information about Mr Grubb. It is information about the service it provides to Mr Grubb but not about him”.

Well. That was a curve ball no-one saw coming.

(Even Telstra had proceeded through to the AAT on the assumption that the data it held was at least about Ben, with legal counsel for Telstra saying “I’m dealing here with the question of mobile network data in relation to Mr Grubb’s mobile telephone service. It’s difficult for me to see how that could not be information about him. It’s information about his service”.  Telstra had instead been arguing about whether Ben was identifiable from that data in a pragmatic sense, given the way the data was held in separate systems, and not necessarily indexed with reference to the customer.)

The AAT’s interpretation seemed to conflate object with subject, by suggesting that the primary purpose for which a record was generated is the sole point of reference when determining what that record is ‘about’. In other words, the AAT judgment appears to say that what the information is for also dictates what the information is about.

In my view, the AAT’s interpretation of ‘about’ was ridiculous. Why can’t information be generated for one reason, but include information ‘about’ something or someone else as well? Why can’t information be ‘about’ both a person and a thing? Or even more than one person and more than one thing?

But more importantly, the AAT’s interpretation was damaging.  It completely undermined our privacy laws.

Even car repair records, which certainly have been created for the primary purpose of dealing with a car rather than a human being, will have information about the car owner. At the very least, the following information might be gleaned from a car repair record: “Jane Citizen, of 10 Smith St Smithfield, tel 0412 123 456, owns a green Holden Commodore rego number ABC 123”.

If the AAT’s position – that a car repair record has no information ‘about’ Jane Citizen – had been left unchallenged by the Privacy Commissioner, then Jane would have no privacy rights in relation to that information, and the car repairer would have no privacy responsibilities either.

If Jane’s home address was disclosed by the car repairer to Jane’s violent ex-husband, she would have no redress. If the car repairer failed to secure their records against loss, and Jane’s rare and valuable car was stolen from her garage as a result, Jane would have no cause for complaint.  Jane wouldn’t even have the right to access the information held by the car repairer, to check that it was correct.

Imagine how far you could take this argument.  Banks could avoid their privacy responsibilities by arguing that their records are only ‘about’ transactions, not the people sending or receiving money as part of those transactions.  Hospitals could claim that medical records are ‘about’ clinical procedures, not their patients.  Retailers could claim their loyalty program records are ‘about’ products purchased, not the people making those purchases.

Fortunately, the Privacy Commissioner quickly moved to appeal the AAT’s ruling to the Federal Court.  But unfortunately, the grounds on which the Privacy Commissioner appealed were too narrow.

Instead of arguing that information could be ‘about’ more than one thing – i.e. that metadata could be ‘about’ both the delivery of a network service and the customer receiving that service – the Privacy Commissioner’s legal team argued that the phrase ‘about an individual’ was redundant, and should simply be ignored.

The Court summarised the submission made on behalf of the Privacy Commissioner as “that if there is information from which an individual’s identity could reasonably be ascertained, and that information is held by the organisation, then it will always be the case that the information is about the individual … In other words, the words ‘about an individual’ would ‘do no work’ and have no substantive operation”.

The Federal Court, in a unanimous decision by Justices Dowsett, Kenny and Edelman, flatly rejected that line of argument: “We do not accept this submission”.

So the Privacy Commissioner played a high stakes game, and lost.  The result is a decision that ultimately takes us nowhere.

The Federal Court made it clear that it was not deciding whether or not the metadata to which Ben Grubb was seeking access actually met the definition of ‘personal information’ – because it was not asked to.  (And indeed, appeals to the Federal Court from the AAT can only be brought on questions of law.)

The Court noted: “There was no ground of appeal which alleged that the AAT erred in its conclusion that none of the information was about Mr Grubb. In other words, the Privacy Commissioner did not seek to establish that any of the information was about Mr Grubb”.  And just to hammer home the point, the Court said: “this appeal concerned only a narrow question of statutory interpretation which was whether the words ‘about an individual’ had any substantive operation. It was not concerned with when metadata would be about an individual”.

If the Federal Court had actually been asked or allowed to apply the definition to the facts of this case, we might have had a proper answer.  We might even have had a broader answer than that proffered by the AAT, because the Federal Court diverged from the AAT’s view in one critical respect: unlike the ludicrously narrow, binary ‘information can only be about one thing’ view taken earlier by the AAT, the Federal Court judges said that information and opinions “can have multiple subject matters”.

That’s right: if only they had been asked (or allowed) to do so, the Federal Court might have overturned the AAT’s decision, on the basis that the information in question could be about both “the way in which Telstra delivers the service or product for which Mr Grubb pays” and “about Mr Grubb”.

So, to re-cap …

The court made no decision about whether or not the metadata was ‘about’ Ben Grubb, because it wasn’t asked to.

The court made no decision about whether or not Ben Grubb’s identity could be ascertained from the metadata (alone or in conjunction with other data), because it wasn’t asked to.

The court made no decision about whether or not Ben Grubb’s metadata was ‘personal information’, because it wasn’t asked to.

This case was about a question of law, not the application of that law to a particular set of facts.

The only thing decided today was that the phrase “about an individual” is an important element in the definition of personal information, as the definition existed in 2013.

The Court reiterated that there are two elements: an ‘identifiability’ element, and an ‘about’ element.  The Federal Court said this:  “The words ‘about an individual’ direct attention to the need for the individual to be a subject matter of the information or opinion. This requirement might not be difficult to satisfy. Information and opinions can have multiple subject matters”.

So where does this leave us?

First, I doubt that any organisations – not even Telstra – will start popping champagne corks in the belief that they are somehow off the hook in terms of their privacy obligations.  I saw no evidence of reckless abandonment of privacy obligations in the wake of the AAT judgment, and rightly so.  Whether government or business, organisations are pragmatic.  They know that maintaining customer trust is essential, and so arguing the toss with a customer or citizen about whether or not a record is ‘about’ that individual is not going to engender that trust.

Second, remember that the definition of ‘personal information’ changed in 2014.  It now says “information or an opinion about an identified individual, or an individual who is reasonably identifiable…”.  So that element of ‘about’ is still there, but it is now a little more intertwined with the element of ‘identifiability’.  It’s not clear whether that subtle change in language makes any practical difference, but you cannot just assume that today’s Federal Court judgment directly applies to the law as it now stands.

So if Ben Grubb were to tomorrow ask Telstra anew for access to his metadata, things could end up very differently.  Since he first asked in 2013, the definition of ‘personal information’ has changed; a law has been passed to state explicitly that metadata kept by telcos under the new data retention rules is personal information subject to the Privacy Act (where it relates to an individual or a communication to which the individual is a party); and the Federal Court has said that information can be ‘about’ more than one thing or person at a time, so the AAT’s more binary characterisation can probably be ignored.

But finally, that element of ‘about’ is still problematic.  By saying that the individual needs to “be a subject matter” of the information, this judgment may have had the effect of slightly narrowing the definition of ‘personal information’, more so than if the language of “relating to” had been used instead.  (By contrast, the latest European privacy law, the GDPR, defines ‘personal data’ more simply as “any information relating to an identified or identifiable natural person”.  Neat, huh?)

Importantly, however, the judges also said this: “even if a single piece of information is not ‘about an individual’ it might be about the individual when combined with other information”.  In my view, this leaves open the possibility that a piece of data might still be captured by the definition of ‘personal information’, even though at first glance it appears to have as its subject matter(s) not an individual, but a network, a communication or a device.  The judges stressed the need to consider “the totality of the information”.  In other words, linkability to an identifiable individual might still make something ‘personal information’, and thus within the scope of our privacy laws.

So what next … will the Privacy Commissioner appeal to the High Court?  Or will he ask the Government to introduce an amendment to the legislation, to make our definition more like the GDPR’s?

Perhaps instead of muddying the waters further with yet more legislative or judicial activity, what we need first is some updated guidance from the Privacy Commissioner.


Help oppose the expansion of data retention to civil cases

The Attorney-General's Department is considering expanding access to the mandatory data retention scheme to include civil cases.

This could mean that retained data is made available for a range of civil matters including:

  • copyright infringement
  • family law cases
  • employment-related cases

We'll be vigorously resisting any such expansion of this scheme and instead will be pushing for a comprehensive review of this deeply flawed scheme in 2017.

We encourage you to add your voice to this opposition by making a submission to the inquiry. Submissions are due on 27 January.

See our guidance for help in writing your own submission.


And sign our petition!


With your support, we can fight this unwarranted mission creep that has the potential to greatly increase the privacy threats inherent in this scheme.

Please support our work with a one-off donation or a recurring donation.


“Everyone Made Themselves the Hero.” Remembering Aaron Swartz

Ragesoss / CC BY-SA 3.0

On January 18, 2012, the Internet went dark. Hundreds of websites went black in protest of the Stop Online Piracy Act (SOPA) and the PROTECT IP Act (PIPA). The bills would have created a “blacklist” of censored websites based on accusations of copyright infringement. SOPA was en route to quietly passing. But when millions of Americans complained to their members of Congress, support for the bill quickly vanished. We called it the Internet at its best.

As we approach the fifth anniversary of the blackout, we also note a much sadder anniversary. A year after we beat SOPA, we lost one of the most active organizers behind the opposition. While being unfairly prosecuted under the Computer Fraud and Abuse Act, Aaron Swartz took his own life on January 11, 2013.

This post is by Elliot Harman and was originally published on EFF's Deeplinks blog. It has been edited slightly for local context. See the original article.

When you look around the digital rights community, it’s easy to find Aaron’s fingerprints all over it. He and his organization Demand Progress worked closely with EFF to stop SOPA. Long before that, he played key roles in the development of RSS, RDF, and Creative Commons. He railed hard against the idea of government-funded scientific research being unavailable to the public, and his passion continues to motivate the open access community. Aaron inspired Lawrence Lessig to fight corruption in politics, eventually fueling Lessig’s White House run.

There’s no better way to remember Aaron’s life and work than by hearing his words. And there’s no more inspiring account of the fight over SOPA than Aaron’s famous talk, “How We Stopped SOPA” (transcript).

Aaron warned that SOPA wouldn’t be the last time Hollywood attempted to use copyright law as an excuse to censor the Internet:

Sure, it will have yet another name, and maybe a different excuse, and probably do its damage in a different way. But make no mistake: The enemies of the freedom to connect have not disappeared. The fire in those politicians’ eyes hasn’t been put out. There are a lot of people, a lot of powerful people, who want to clamp down on the Internet. And to be honest, there aren’t a whole lot who have a vested interest in protecting it from all of that. Even some of the biggest companies, some of the biggest Internet companies, to put it frankly, would benefit from a world in which their little competitors could get censored.

Five years later, it’s clear that Aaron was right. In the courts, record labels are pushing for an interpretation of copyright law that would enable them to block entire websites because of their users’ activities, or force ISPs to cut off users’ Internet connections based on mere accusations of copyright infringement. Big content companies even wrote a memo to President-elect Trump calling for a new law that would require website owners to use copyright bots to censor their users’ activity. Threats to free speech online are on the horizon—and they’re going to come hitched to copyright law.

It’s tempting to become pessimistic in the face of countless threats to free speech and privacy. But the story of the SOPA protests demonstrates that we can win in the face of seemingly insurmountable odds. In his talk, Aaron showed how all of us can become heroes in the fight for civil liberties:

I’ve told this as a personal story, partly because I think big stories like this one are just more interesting at human scale. The director J.D. Walsh says good stories should be like the poster for Transformers. There’s a huge evil robot on the left side of the poster and a huge, big army on the right side of the poster. And in the middle, at the bottom, there’s just a small family trapped in the middle. Big stories need human stakes. But mostly, it’s a personal story, because I didn’t have time to research any of the other part of it. But that’s kind of the point. We won this fight because everyone made themselves the hero of their own story. Everyone took it as their job to save this crucial freedom. They threw themselves into it. They did whatever they could think of to do.

As a president comes to power who’s promised to ratchet up surveillance and censorship, we need heroes more than ever. Whether it’s by calling your local MP or Senators to speak up for a free and open Internet, urging your company to protect its users’ data from government surveillance, or by joining Electronic Frontiers Australia to defend digital freedom in Australia, you can be the hero in the story of how we stopped the next big threat to your digital rights.

Killswitch: the battle to control the Internet

EFA is organising a series of exclusive screenings around the country of the award-winning documentary, Killswitch: the battle to control the Internet, which tells the story of Aaron Swartz and Edward Snowden.

See the Killswitch trailer




Big data: more isn't always better

Big data is big news these days. But most organisations just end up hoarding vast reams of data, leaving them with a massive repository of unstructured – or “dark” – data that is of little use to anyone.

Given the potential benefits of big data, it’s crucial that we find better ways to gather, store and analyse data in order to make the most of it.

Stories of big data successes have triggered significant investments in big data initiatives. This has prompted many organisations to gather significant volumes of external and internal data into so-called “data lakes”. These are repositories that contain data in any format, whether structured, like databases, or unstructured, like emails or audio and video.

As a result, the growth in the amount of data being generated, collected and stored continues at an exponential rate.

This article is by Shazia Sadiq, from The University of Queensland and was originally published on The Conversation. See the original article.

But according to a recent IBM study, more than 80% of all data is inactive, unmanaged, often unstructured, lacking meaningful metadata, and even unknown to the organisation. The proportion of this dark data is expected to reach 93% by 2020.

For example, data generated from vehicle on-board devices can be expected to reach 350MB every second. Where does all this data go, and who is using it?

Organisations can also generate significant internal data. For example, a recent study found that a company with 1,500 employees had around 2.5 million spreadsheets, each of which was used by only 12 people on average.

What’s more, there is evidence of a variety of unstructured data such as document versions, project notes and emails that is left behind from organisational processes and subsequently sits dormant in data servers.

Use it or lose it

Lessons learnt from years of research in information system use have shown that the assumption that “more is better” when it comes to data is unfounded.

Even in traditional IT projects that follow carefully crafted analysis and design life cycles, the misalignment between perceived and actual value has been a notoriously difficult problem, often leading to poor returns on investment.

In big data projects, the data can often be externally sourced with little or no knowledge of its schemata, quality or expected utility. Thus the risk of making investments that will not deliver is greatly heightened.

The old adage of “use it or lose it” is by no means obsolete, and brings attention back to the purpose of how we use big data. Organisations may retain data for a variety of reasons, including data retention regulations, but perceived future value is typically the main reason.

Although storage is relatively cheap, given the volume of data being assimilated, the maintenance and energy consumption of data centres is not trivial. Furthermore, there are costs and risks related to the security of such unmanaged data.

Thus defining the purpose is pivotal to ensuring that big data investments are targeted towards meaningful problems, and that data collection and storage are well justified.

Approaches such as design thinking, which encourages people to use creative solution-focused thinking, are proving to be highly successful in genuine problem formulation for big data.



What is Design Thinking?

When appropriately applied, design thinking can equip data scientists to bring together desirability (customer need) and viability (business value) with technological feasibility, and thereby guide them towards developing meaningful solutions.

Garbage in, garbage out

As the gap between data creation and use grows, data quality is more likely to degrade. This means an organisation will have to invest significant effort in cleaning old data if it wants to use it today.

According to the US Chief Data Scientist DJ Patil:

Data is super messy, and data cleanup will always be literally 80% of the work. In other words, data is the problem.

Earlier this year, a group of global thought leaders from the database research community outlined the grand challenges in getting value from big data. The key message was the need to develop the capacity to “understand how the quality of that data affects the quality of the insight we derive from it”.

The golden principle of “garbage in, garbage out” is still true in the context of big data. Without scientifically credible knowledge that provides the ability to efficiently evaluate the underlying quality characteristics of the data, there is a significant risk of organisations and governments accumulating large volumes of low value density data, or investing in low return-on-investment data products.

Moreover, the lack of knowledge on the underlying data (distributions, semantics and other nuances) could result in analytical traps, where the data analysis can lead to erroneous, and possibly dangerous, conclusions.

Data exploration is emerging as a promising approach, empowering users to investigate the quality of data and become aware of its shortcomings for their intended use before investing in expensive data cleaning and curation tasks.
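As a rough illustration of what such an exploratory pass might look like (this example is mine, not from the article, and the field names are hypothetical), a few lines of Python can surface basic quality signals, such as missing-value rates and the number of distinct values per field, before any expensive cleaning begins:

```python
def profile(records):
    """Report per-field missing-value rate and distinct-value count
    for a list of dict records (a stand-in for rows in a data lake)."""
    fields = set().union(*(r.keys() for r in records))
    n = len(records)
    report = {}
    for field in sorted(fields):
        values = [r.get(field) for r in records]
        # Treat None and empty strings as missing.
        missing = sum(1 for v in values if v in (None, ""))
        distinct = len({v for v in values if v not in (None, "")})
        report[field] = {"missing_rate": missing / n, "distinct": distinct}
    return report

# Hypothetical sample rows with some gaps.
rows = [
    {"user": "a", "plan": "basic", "usage_mb": 120},
    {"user": "b", "plan": "", "usage_mb": None},
    {"user": "c", "plan": "pro", "usage_mb": 300},
]
print(profile(rows))
```

A report like this does not clean anything; it simply tells you which fields are worth cleaning at all, which is the point of exploring before curating.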

The search for enlightenment from the data deluge will consume the energy and investments of the data-driven society for the foreseeable future. While there is immense power in the scale of data, data left unattended will propel organisations into the abyss of dark data.

All this underscores the growing need for well-trained data scientists who have the ability to articulate a well-justified business, scientific or social purpose and align it with the technological efforts for data collection, storage, curation and analysis.
