On 8 December 2023, the European Union (EU) reached a landmark provisional agreement on regulating Artificial Intelligence (AI). The EU’s new Artificial Intelligence Act (EU AI Act) aims to strike a balance between fostering innovation and safeguarding safety, fundamental rights, democracy, and the environment. The rules set obligations for AI systems based on their potential risks and level of impact. This article provides a breakdown of the new rules and recommendations for Australian businesses.

Banned AI systems
The EU AI Act recognises AI’s potential risks to citizens’ rights and democracy. The new law prohibits certain AI systems, including:
- biometric categorisation systems that use sensitive characteristics, such as sexual orientation, race, or political, religious, or philosophical beliefs;
- scraping facial images from the internet for facial recognition databases;
- emotion recognition in workplaces and educational institutions;
- social scoring based on personal characteristics;
- AI systems manipulating human behaviour to bypass free will; and
- AI used to exploit vulnerabilities based on factors like age, disability, or social and economic situation.
Law Enforcement Exemptions
While stringent rules are in place, there are exceptions (with safeguards) for law enforcement purposes. In particular, prior judicial authorisation is required before biometric identification systems can be used in publicly accessible spaces for law enforcement, and such use is limited to a strictly defined list of serious crimes.
Obligations for High-Risk Systems
For AI systems classified as high-risk, there are clear obligations to ensure fundamental rights, safety, and democratic values. This includes mandatory fundamental rights impact assessments relevant to sectors like insurance and banking.
Guardrails for General AI Systems
The new laws also set transparency requirements for general-purpose AI (GPAI) systems to address the wide range of tasks these systems can perform. Examples include OpenAI’s GPT-4, which powers ChatGPT, and Google’s Bard. Transparency requirements include:
- technical documentation;
- compliance with EU copyright law; and
- detailed summaries about training content.
Support for Innovation and SMEs
Notably, the EU AI Act aims to support businesses, especially small and medium-sized enterprises (SMEs), by promoting regulatory sandboxes and real-world testing. These initiatives, overseen by national authorities, allow the development and training of innovative AI solutions before they enter the market.
Sanctions and Entry into Force
The prohibitions on banned AI systems will take effect six months after the law enters into force, and the transparency requirements will take effect after 12 months. The full set of rules will apply in around two years.
Non-compliance attracts significant fines, with the amount depending on the severity of the infringement and the size of the company responsible for the breach.
Recommendations for Australian Businesses
The EU AI Act applies to all businesses that deploy AI systems in the EU market or make them available within the EU, regardless of location. Accordingly, Australian businesses conducting any of the following will have to comply with the legislation:
- developing and marketing AI systems;
- deploying AI systems; and
- using AI systems in their products or services.
Given the evolving landscape of AI regulations, we recommend that even Australian businesses not operating in the EU take the following steps:
| Recommendation | Explanation |
| --- | --- |
| Audit AI Development | Review your AI development and usage within the organisation and across the supply chain. |
| Define AI Principles | Establish your AI principles and redlines, considering ethical considerations beyond legal requirements, including parameters set by the EU AI Act. |
| Assess and Augment Risks and Controls | Evaluate and enhance existing risks and controls for AI at the enterprise and product lifecycle levels to meet applicable EU AI Act requirements. |
| Identify AI Risk Owners | Identify relevant AI risk owners and internal governance teams to ensure effective oversight. |
| Enhance Vendor Due Diligence | Revisit vendor due diligence processes related to AI procurement and third-party services, products, and deliverables created using AI, especially generative AI systems. |
| Review Contracts | Assess existing contract templates and make necessary updates to mitigate AI-related risks. |
| Monitor Global Developments | Stay informed about AI-related laws, guidance, and standards worldwide so you can update the company’s AI governance framework in response to global developments. |
Key Takeaways
The EU’s new Artificial Intelligence Act (EU AI Act) sets obligations for AI systems based on their potential risks and level of impact. As an Australian business owner, the new laws will apply to you if you deploy AI systems in the EU market or make them available within the EU. By proactively reviewing your AI systems and governance, you can align with best practice and navigate the evolving AI regulatory landscape.
LegalVision is actively assisting organisations in understanding their legal and ethical responsibilities concerning AI product development and usage. For guidance on AI regulation in Australia, our experienced artificial intelligence lawyers can assist as part of our LegalVision membership. For a low monthly fee, you will have unlimited access to lawyers to answer your questions and draft and review your documents. Call us today on 1300 544 755 or visit our membership page.