Artists call for consent and compensation in AI training
NAVA survey captures artists’ response to Productivity Commission’s AI proposal.
The federal government’s recent three-day economic reform roundtable concluded with unions and tech companies reportedly exploring a model for compensating creators for AI training data. NAVA asserts that any agreement must be firmly grounded in existing copyright law, not in workarounds.
Australia’s Copyright Act provides essential protections for artists. It ensures that the use of artworks, whether for publication, reproduction, or AI training, requires permission and payment. Yet in early August, the Productivity Commission released an interim report proposing a new exception to copyright for text and data mining. Similar to frameworks adopted overseas, this exception would give AI developers legal cover to use copyrighted material without permission or compensation.
NAVA strongly opposes this recommendation and joins arts peak bodies and thousands of artists and writers in rejecting the proposal, which would entrench a system where creators are not asked, not paid, and not protected.
Enforcement is already a major challenge for individual copyright holders. The argument that there are potential avenues for compensation under a “fair dealing” exception for text and data mining misses the point: artists would still bear the burden of identifying and pursuing unauthorised use of their work. Power imbalances, platform opacity, and legal complexity make this all but impossible for most. These are not flaws of copyright law itself, but of the systems that fail to uphold it.
A recent NAVA survey of more than 890 visual artists captured these concerns in detail.
Artists repeatedly emphasised that AI scraping without permission is not innovation; it is exploitation. Many called it theft, saying their creative labour is being used in AI training without credit, consent, or compensation, deepening the economic precarity already faced by visual artists. They also raised growing concerns about their identities and visual styles being mimicked or monetised without consent, and some reported that AI tools had scraped personal images to create fake accounts or impersonations. Many now feel unsafe sharing work online.
Beyond copyright concerns, artists voiced strong fears about the ethical, cultural, and environmental impacts of AI. They noted that generative AI is producing faster, cheaper content that pushes aside slower, process-based, or experimental practices. Respondents warned that the pressure to adopt AI is not about enhancing creativity, but about meeting unrealistic expectations around productivity.
Environmental impacts were another recurring theme. Artists called attention to the enormous water and energy demands of AI infrastructure, particularly data centres, and urged greater public transparency on the environmental and community costs of these systems.
Artists want to be included in shaping the rules that will govern AI and creative practice. They are calling for clear legal protections and greater transparency in AI development, stronger education around artists’ rights, public investment in local creative industries, and policy frameworks that value artistic labour, creative process, and Indigenous Cultural and Intellectual Property (ICIP).
Survey respondents also called for enforceable tools to find out whether their work is in training datasets, along with opt-out rights and penalties for infringement. Most said they feel powerless under current laws and platform policies, unable to track usage or afford enforcement. The barriers, they said, are legal, financial, and emotional, and all are too high.
Almost half of respondents (46%) said they currently use generative AI in their creative practice, while 41% do not. Another 13% said they may use it in the future. Among those who use it, AI is most commonly used for writing-related tasks, such as editing or grammar correction (49%), drafting written content (49%), and grant writing or admin (36%). Around 40% use it for research and development, and 34% for brainstorming ideas. Fewer artists use it for visual outputs: only 22% use AI to generate sketches or reference images, and just 6% use it to produce final artworks. This shows that artists are not rejecting AI. While many are experimenting with the technology, particularly in administrative and research contexts, they continue to call for its use to be lawful, transparent, and fair.
Confusion around copyright ownership of AI-generated works adds another layer of vulnerability. Many artists expressed frustration over unclear rules around authorship and attribution, especially when AI has been trained on human-made content without permission. Clarifying these issues is essential for ensuring accountability, attribution, and remuneration.
NAVA supports tools that help artists thrive, but not at the expense of their rights. What artists reject is a system that extracts their work without permission, under the guise of innovation.
NAVA urges the Productivity Commission and the Australian Government to reject any new copyright exceptions that allow AI companies to scrape artists’ work without authorisation. Instead, the focus should be on strengthening the enforcement of existing laws and developing stand-alone AI legislation that upholds artists’ rights. Protections must include transparent AI training datasets, meaningful consent processes, and clear avenues for attribution and compensation.
Written submissions to the Productivity Commission’s interim report are due by 5pm AEST, Monday 15 September 2025.
Caroline Zilinsky, Erasure 2025. Oil on linen, 137 x 122cm.
ID: A painting of a person carrying shopping bags walking on a road, with a group of large yellow robotic quadrupeds approaching them from the front.