AI in HR: Powerful and Responsible

HR manager and employee in conversation

AI is changing how organisations develop talent. Faster feedback. Better insights. More control over growth. But as AI in HR becomes more powerful, one essential question grows with it:

“Who stays in control?”

HR managers and leaders work daily with sensitive data: performance, ambitions, behaviour, development. Data that affects people. Data that must be protected.

That is why the choices Nela makes around AI in HR are not technical details — they are fundamental design decisions that determine whether employees can trust AI.

What is the difference between AI that decides and AI that supports?

There is a significant difference between AI that evaluates people and AI that helps people grow.

Nela does not make autonomous decisions about employees. No scores that automatically have consequences. No recommendations about promotion or contracts. No assessments without human involvement.

What Nela does do: recognise patterns, structure insights, and offer concrete suggestions, so that as a manager you can have better conversations. The choice always remains with you.

This is not a limitation. It is a principled choice, and exactly what the EU AI Act requires of AI systems deployed in relation to people and their development. Human oversight is not optional; it is an obligation. Nela is built from the ground up on this principle: transparent, controllable, and always with the human at the wheel.

The question is not whether employees use AI, but how

Many organisations are still hesitant about deploying AI in HR. Understandable. But that hesitation does not solve the underlying problem.

Because in the meantime, employees are already using AI. Every day. Someone wanting to formulate a development goal asks ChatGPT. Someone preparing for a review conversation asks ChatGPT. Someone unsure how to handle a difficult feedback situation asks ChatGPT.

Often via a free account. Without security safeguards. Without the organisation's context. And without any control over where that data ends up.

The choice is therefore not: AI in HR or not. The choice is: uncontrolled AI outside the organisation’s view, or AI that is consciously deployed, safely, in context, with the right safeguards.

That is exactly what Nela is built for.

Why choose European LLM technology in HR?

Most AI applications run on large American models. That works, but it raises legitimate questions about data security, legal protection, and compliance with European regulations.

Nela works with European LLM technology, including Mistral AI. Not because it is easier, but because it better fits what HR data demands:

  • European governance: transparent about how models are built and managed
  • Alignment with the EU AI Act and GDPR: legally clear, no grey areas
  • Controlled implementation: data storage within Europe, GDPR-compliant

For HR professionals, this is not a detail. When you work with performance and development data of employees, the question of where that data ends up is just as important as what you do with it.

How does Nela protect employee privacy?

One of the most powerful parts of Nela is the AI chat: a space where professionals can reflect freely, explore difficult situations, and prepare conversations.

That space only works if employees feel safe there.

That is why the chat is fully anonymous. Conversations are not visible to managers. Data is not shared, not stored as an HR file, and not resold. Full stop.

People only grow when they feel safe enough to be honest. Nela is built so that safety is always present, not dependent on settings or permissions, but baked into the design.

Responsible AI in HR: what does it deliver for your organisation?

As an HR manager or leader, you want AI that works and that you can justify to employees, to management, to regulators.

With Nela you choose:

  • AI where the human always remains in control, as required by the EU AI Act
  • European technology that meets the strictest privacy requirements
  • A platform that increases psychological safety rather than undermining it
  • Insights that help you have better conversations

That is what we mean by performance made simple. Not only simple to use, but also simple to trust.

Want to see what safe AI in HR looks like in practice? Schedule a meeting and discover how Nela makes development concrete and responsible.

Frequently asked questions about AI in HR

Q: Does HR software with AI fall under the EU AI Act?
A: Yes. AI systems deployed in relation to personnel management, performance, or development fall under the EU AI Act. The law sets strict requirements around transparency, human oversight, and data security. Nela is designed to meet those requirements: no autonomous decisions about people, data storage within Europe, and full transparency about how the system works.

Q: Can AI make independent decisions about employees?
A: No. According to the EU AI Act, human assessment is always required for decisions that affect people, such as evaluations, promotions, or dismissal. AI may support and advise, but the final decision always rests with a human. Nela is explicitly built on this principle.

Q: How do I know whether my HR data is safe with an AI tool?
A: Ask your provider where data is stored, whether the platform is GDPR-compliant, and whether European or American AI models are used. European models come with stricter privacy safeguards and a clearer legal position. Nela stores data within Europe and works with European LLM technology.

Q: What are the obligations for organisations that use AI in HR?
A: Since February 2025, organisations have been required to ensure AI literacy among employees who work with AI. They must understand how the system works, what its limitations are, and that AI may never independently make a personnel decision. Nela supports organisations in this with transparent operation and clear user documentation.

About the author:

Niels Datema is CEO of Nela, the AI platform for performance management. He writes about talent development, feedback culture, and the role of AI in modern organisations.