TechTalk Audio: Responsible AI governance

To ensure the responsible use of AI, the EU will soon introduce regulations. What these will look like in concrete terms and how they will affect companies is what Rudraksh Bhawalkar, global delivery lead of Accenture's internal Responsible AI program, explains in the second episode of TechTalk Audio on Responsible AI.
Tobias Imbach spoke with Rudraksh Bhawalkar
To the point:
- The EU AI Act is planned to come into effect by the end of 2024, and it will affect every individual and every corporation in the world that does business within the European Union.
- The prime focus of the EU AI Act is so-called "high-risk systems" – for example, screening resumes when hiring or conducting educational assessments at a university. For these systems, the goal is to make sure they do not make biased decisions.
- If an AI system does not adhere to the regulations, penalties can go as high as 30 million euros or 6% of a company's revenue.
- There are established ways to analyze an AI application and, based on the output of testing and scoring, check whether it is responsible or not (see the sketch below).
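For illustration only – the episode does not name any concrete tooling – here is a minimal sketch of what such "testing and scoring" could look like for a bias check. All names, data, and the 0.1 threshold are hypothetical assumptions, not part of the EU AI Act or any specific toolkit.

```python
# Minimal sketch of a bias "scoring" check for a binary classifier.
# Data, group labels, and the threshold below are purely illustrative.

def demographic_parity_gap(predictions, groups):
    """Difference in positive-prediction rates between applicant groups."""
    rates = {}
    for group in set(groups):
        group_preds = [p for p, g in zip(predictions, groups) if g == group]
        rates[group] = sum(group_preds) / len(group_preds)
    return max(rates.values()) - min(rates.values())

# Example: resume-screening decisions (1 = invite to interview) per applicant group.
predictions = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
groups      = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

gap = demographic_parity_gap(predictions, groups)
print(f"Demographic parity gap: {gap:.2f}")

# Hypothetical acceptance criterion: flag the system for review if the gap
# between groups exceeds an agreed threshold.
THRESHOLD = 0.1
print("Needs review" if gap > THRESHOLD else "Within threshold")
```

In practice, a responsible-AI assessment typically combines several such metrics, covering fairness as well as robustness and transparency, into an overall score for the application.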
RESPONSIBLE AI – HEARD IT MANY TIMES, BUT NEVER KNEW EXACTLY WHAT IT IS?
In Episode 1 of our TechTalk Audio series on Responsible AI, Rudraksh Bhawalkar, Global Delivery Lead of Accenture's internal Responsible AI program, gives an introduction to this important concept.
Listen to the 8-minute episode here.
