AI has clearly become the hot topic of the year. With new technologies and use cases spreading quickly across the market, both regulatory bodies and companies are trying their best to keep up. Are the risks overhyped? How will AI impact the workplace? How can we ensure compliant use by our employees? What measures do we have to take into account when developing AI tools?
During DataGard's “AI and the Future of Privacy” discussion panel at EPIC, our experts shed light on this complex issue and provided valuable insights into how AI will impact the workplace. Below, we summarise the current landscape and share 3 easy steps to build education into your workplace and foster a compliant AI culture, which will prove especially valuable to business owners and IT managers.
Key Takeaways
- The EU AI Act will be the world's first set of rules around Artificial Intelligence.
- The use of AI in the workplace can increase automation and improve employees' overall happiness.
- Overreliance on such technologies can lead to a loss of skills in the workplace.
- Frameworks and training must be put in place to ensure compliant use and critical thinking when using AI.
- It is top management's responsibility to create proper frameworks regarding AI and ensure compliance with regulation.
- Adopting privacy-by-design is the first step to ensure your product or service complies with regulations.
Understanding the current regulatory landscape
Technology tends to be two steps ahead of regulation, and this time is no exception. While AI and GenAI tools are already in widespread use, there are currently no regulations in place to mitigate the risks or to ensure the correct use of this technology. However, in April 2021, the European Commission set out to create a regulatory framework for AI, which is still in the works. Once approved, the EU AI Act will be the world's first set of rules on the topic.
As Laura Sanjath (Austrian Federal Economic Chamber) stated during the panel, it is still very unclear what the EU AI Act will cover, and even to what extent GenAI will fall within its scope. What we do know is that the EU has drawn clear lines around certain systems, such as social scoring, biometric identification, and cognitive behavioural manipulation.
How other nations will follow is still unclear, but this presents the EU with a great opportunity that, if handled well, might be key to the future of European industry.
Leaving regulation aside, what does the emergence of AI mean in practical terms, and most importantly, how can companies leverage this technology correctly?
Related: Discover what the legislation could mean for privacy and compliance in your business by downloading your complete guide to the EU AI Act.
Impact on the workforce: Overreliance on AI
As businesses rush to tap generative AI's vast potential to transform the way we work, analysts watching this modern-day gold rush have responded with both optimism and caution. With every new business opportunity unveiled by AI come very real concerns about a host of issues: data privacy and security, transparency about decision-making, and of course, job security.
During the panel, Markus Stulle (Director, Deloitte Germany) explained how his development team is now fully hybrid, with part of the work done by developers and the rest by machines. Leveraging this technology allowed the developers to drop repetitive tasks and focus on decision-making and strategy. This not only helped the team use their time more efficiently, it also increased their happiness, with 70% of the team reporting “I love my work”, a significant increase compared to before.
Examples like the one Markus presented are no longer a rare sight: we increasingly see organisations leveraging AI as an extension of their team. And while this is a positive use case, it is important to reflect on the possible risks that overreliance on AI might bring.
For example, Dr. Olaf Uhlenwinkel (VP Sales DACH, Aminos) shed light on the complications that AI might cause in case of failure. He used the example of commercial planes, which can land without human support. If the autopilot were disengaged at the last minute, right before landing, the pilots would find it much harder to get back into “flight mode” and take control of the plane, increasing the chances of an accident.
“If we get used to using this new technology, we are not going to challenge or test these results.” – Lukas Staffler (Senior Researcher, Zurich University)
Here we identify a potential issue caused by the complacency that AI induces in humans, and it presents a key challenge not only for airlines, but for all businesses that interact with, use, and develop this technology. We must now set up rules and safeguards to prevent overreliance on AI from affecting the quality and safety of our work.
This way, we will ensure that:
- We as humans don't lose valuable skills (nobody wants to fly with a pilot who hasn’t flown a plane in months) and
- We develop measures to remain critical about what the AI tool is telling us.
Using the piloting example, this could look like implementing routine flight training for pilots or disabling the autopilot 30 minutes before landing.
What does this mean for you?
As a business owner or team manager, think about how you want to integrate AI into your workplace. In which areas could AI help your team become more efficient? What measures will you have to put in place to ensure your employees remain critical and alert? How can you educate your team about the challenges and risks?
Education as the building block of compliance
Here are 3 easy steps you can follow to make sure your organisation is fully educated about AI.
Step 1: Create a culture of compliance:
Before implementing such tools, it is key to set the tone across the organisation regarding the compliant use of AI. To do so, top leadership should align on the extent to which they want to use this technology and how they want to leverage it, weighing up the possible risks and opportunities.
“You need education, but that’s the same with privacy. There is no difference (with) GDPR, you have to educate your people (…), and it's the top level of responsibility in both cases: AI and GDPR.” – Georg Huber (Partner, GPK Pegger Kofler)
Once there is agreement on what this will look like, the next step is educating employees, fostering a culture of compliance, and integrating it into the organisation’s DNA. Offering courses and training to your employees, conducting regular audits, and providing clear guidelines and policies are just some of the first steps you must take as an organisation.
Step 2: Establish a framework for GenAI tool usage in the workplace:
As part of your policy creation process, there must be a clear understanding and a written record of what information your employees are allowed to feed into GenAI tools (such as ChatGPT) and for what purpose.
You can even leverage software to safeguard your efforts and prevent sensitive information from being processed. For example, some platforms filter all the content being uploaded into GenAI tools and can detect sensitive data before it is submitted. This can prevent information security breaches and ensure that neither client nor company data gets leaked.
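To make this concrete, here is a minimal sketch of what such a pre-submission check might look like, assuming a small set of regular-expression patterns for common categories of sensitive data. The patterns and the `check_before_upload` helper are purely illustrative and not the API of any specific vendor:

```python
import re

# Illustrative patterns only; real filtering tools use far more sophisticated
# detection (named-entity recognition, custom dictionaries, document classifiers).
SENSITIVE_PATTERNS = {
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "card-like number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "IBAN": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
}


def check_before_upload(prompt: str) -> list[str]:
    """Return the types of sensitive data detected in a prompt before it is sent."""
    return [label for label, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(prompt)]


prompt = "Please summarise the complaint sent by jane.doe@example.com yesterday."
findings = check_before_upload(prompt)
if findings:
    # Block the upload (or ask the user to redact) instead of sending the prompt on.
    print("Upload blocked, sensitive data detected:", ", ".join(findings))
else:
    print("Prompt looks safe to submit.")
```

In practice, a dedicated data loss prevention tool will cover far more data categories and contexts, but even a simple gate like this turns the rules in your written policy into something enforceable rather than purely aspirational.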
Step 3: Foster privacy-by-design:
Lastly, when developing any product, especially one involving AI, make sure to stick to the basic principles of privacy-by-design. These include ensuring that all personal data is pseudonymised and encrypted.
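As a small, hedged illustration of what pseudonymisation can look like in code, the sketch below replaces a direct identifier with a keyed hash (HMAC-SHA256) before a record is stored or handed to an AI component. The field names and key handling are assumptions for the example, and encryption of data at rest and in transit would be handled separately by your storage and transport layers:

```python
import hashlib
import hmac

# Assumption for this example: in production the key would come from a secrets
# manager or key management service, never be hard-coded in source.
PSEUDONYMISATION_KEY = b"replace-with-a-securely-stored-secret"


def pseudonymise(value: str) -> str:
    """Replace a direct identifier with a stable pseudonym.

    The same input always maps to the same pseudonym, so records stay linkable,
    but re-identification requires access to the separately stored key.
    """
    return hmac.new(PSEUDONYMISATION_KEY, value.encode("utf-8"),
                    hashlib.sha256).hexdigest()


record = {"customer_email": "jane.doe@example.com", "purchase_total": 42.50}

# Only the identifying field is replaced; the rest of the record remains usable
# for analytics or model training.
safe_record = {**record, "customer_email": pseudonymise(record["customer_email"])}
print(safe_record)
```

The design choice here is a keyed hash rather than a plain hash: without the key, the pseudonyms cannot easily be brute-forced back to the original identifiers, which sits closer to the GDPR's notion of pseudonymisation, where re-identification depends on additional information kept separately.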
While there was a strong consensus across the panel on the need for strict guidelines, Georg Huber (Partner, GPK Pegger Kofler) took the opportunity to remind the audience of the importance of transparency and documentation, urging everyone to conduct an impact assessment and run regular tests to ensure that the product isn’t processing data in a non-compliant way.
Meet the Speakers
- Georg Huber (Partner, GPK Pegger Kofler)
- Lukas Staffler (Senior Researcher, Zurich University)
- Laura Sanjath (Austrian Federal Economic Chamber)
- Dr. Olaf Uhlenwinkel (VP Sales DACH, Aminos)
- Markus Stulle (Director, Deloitte Germany)
- Thomas Regier
Do you have unanswered questions about the topic? Don't hesitate to reach out to us for a free consultation.