The introduction of the EU AI Act marks a significant milestone in the regulation of artificial intelligence (AI) in Europe. As one of the first comprehensive sets of rules in the world to regulate the use and development of AI, this law aims to promote innovation while minimizing risks for citizens and businesses. It is crucial for companies to understand the implications and requirements of the EU AI Act.
According to Art. 2 of the EU AI Act, virtualQ falls into the category of
“product manufacturers who place on the market or put into service an AI system together with their product and under their own name or trademark”.
As the manufacturer, we are responsible for the safety and conformity of our software and ensure that all relevant regulations are met.
What is the EU AI Act?
The EU AI Act categorizes AI applications and systems into four risk levels to ensure their effective and safe use:
- Unacceptable risk: Applications that pose an unacceptable risk, such as state-operated social scoring systems, are strictly prohibited in order to protect fundamental rights.
- High risk: Applications such as CV-screening tools used in recruitment are subject to strict legal requirements. Companies must implement comprehensive security and monitoring measures.
- Limited risk: Applications that pose a limited risk are subject to transparency obligations. Users must be informed when they are interacting with an AI system, which promotes trust and accountability.
- Minimal risk: Applications that are neither explicitly prohibited nor classified as high-risk or limited-risk are largely unregulated. They can be used flexibly while still adhering to the ethical principles of the EU AI Act.
Unrestricted use of our software
According to the EU AI Act Compliance Checker, our software falls outside the scope of the EU AI Act and is therefore not subject to any restrictions under the Act. This means that our customers can use it without additional regulatory hurdles.
Perform the check yourself: https://artificialintelligenceact.eu/assessment/eu-ai-act-compliance-checker/
Trust our software to support your processes efficiently while ensuring compliance with all applicable regulations.
Risk of bias: virtualQ is not vulnerable to bias
Bias in artificial intelligence is a significant issue that the EU AI Act addresses in detail in order to promote fairness and transparency. We would like to emphasize at this point that neither our software nor the underlying data are affected by bias.
Our commitment to transparency and ethics, even outside the scope of the EU AI Act
Although our software does not fall under the scope of the EU AI Act, we are nevertheless committed to the highest standards of ethics and transparency. We follow the “human-in-the-loop” principle to ensure that human judgment remains integrated into critical decision-making processes. In addition, we clearly inform our users about the data we collect to promote trust and accountability.
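To illustrate what a “human-in-the-loop” safeguard can look like in practice, the following minimal Python sketch routes an AI suggestion to a human agent whenever the model’s confidence falls below a threshold. The class, threshold and routing function are illustrative assumptions for this sketch and do not describe our production implementation.

```python
from dataclasses import dataclass

# Illustrative human-in-the-loop pattern (hypothetical, not virtualQ's
# actual implementation): an AI suggestion is only applied automatically
# when its confidence is high enough; otherwise a human makes the call.

CONFIDENCE_THRESHOLD = 0.90  # assumed cut-off for automatic handling

@dataclass
class AiSuggestion:
    intent: str        # e.g. "schedule_callback"
    confidence: float  # model confidence between 0 and 1

def route(suggestion: AiSuggestion) -> str:
    """Return who acts on the suggestion: the system or a human agent."""
    if suggestion.confidence >= CONFIDENCE_THRESHOLD:
        return f"auto: {suggestion.intent}"
    # Low confidence: a human reviews and makes the final decision.
    return f"human review: {suggestion.intent}"

if __name__ == "__main__":
    print(route(AiSuggestion("schedule_callback", 0.97)))  # handled automatically
    print(route(AiSuggestion("cancel_contract", 0.62)))    # escalated to a human
```

In a contact-center context, the same pattern means that an uncertain suggestion, for example an ambiguous caller intent, is handed to a service agent instead of being executed automatically.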
Our Head of Engineering, Ralph Winzinger, is a certified “Machine Learning Specialist”. He is joined by other certified specialists in our company who work as a team to ensure that our AI development follows best practices and is continuously improved. In this way, we create a responsible basis for the use of our technology.
Adoption of the EU Code of Conduct: our commitment to responsible AI
In line with the EU recommendations, we have decided to adopt the Code of Conduct for trustworthy AI as soon as it is available (https://artificialintelligenceact.eu/de/article/95/).
Our adoption of the code is guided by three key points highlighted by the EU to ensure that our AI technologies are used responsibly and in line with ethical standards:
- Ethical guidelines for trustworthy AI
- Minimizing the environmental impact of AI: lightweight processing during inference and model training only when needed
- Promoting AI literacy among the employees involved in developing, operating and using AI, and addressing the topic in our all-hands meetings
In this way, we not only want to strengthen the trust of our users, but also actively contribute to the creation of a fair and transparent AI landscape.
EU AI Act in the contact center: Responsibility and compliance despite “out-of-scope”
The EU AI Act regulates the use of AI and divides applications into four risk levels in order to promote innovation while minimizing risks. Although our software, according to the Compliance Checker, falls outside the scope of the EU AI Act, we are committed to voluntarily upholding the highest ethical standards.
With certified machine learning specialists on the team, transparent data practices and the “human-in-the-loop” principle, we ensure that our software is free of bias. We have also decided to adopt the Code of Conduct for Trustworthy AI as soon as it is available. This will enable us to further promote the responsible and sustainable use of AI.