How is AI used in our software?
This article answers frequently asked questions about the use of AI in our software, including where AI is applied, which models are used, data protection, and works council involvement.
1. Where exactly is AI applied in the Honestly software?
- In the Topics section
The AI assistant groups similar open-text responses into topic areas.
Language support tasks like translation are carried out by AI.
- In the Initiatives section
AI provides recommendations for initiatives based on dashboard data.
- In the Bubble Chart on the dashboard
With the help of AI, all open-text responses are summarised and their sentiment analysed. The results are clearly displayed in the chart.
- When translating in the Topics and Chat sections
2. Which AI is specifically used?
We use Microsoft’s Azure AI Foundry, specifically generative models, which are operated in the Azure Cloud in Europe. The models are fully isolated from the public OpenAI service. Processing is carried out exclusively by Microsoft in their EU data centres.
3. Is the AI trained with our data?
No. Microsoft does not use the data to improve OpenAI models or its own products. Honestly also does not use your data for model training. The models used are stateless and non-learning, unless fine-tuning is explicitly initiated – which we do not do.
4. How transparent is the way the AI works?
The Topics AI uses language models to identify frequent terms, concepts, and patterns in open-text responses and automatically assign them to topics. The logic is based on probabilities (semantic similarity).
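The grouping principle described above can be illustrated with a minimal sketch. This is not Honestly's actual implementation; it assumes each response has already been turned into an embedding vector by a language model and uses toy two-dimensional vectors in place of real embeddings. Responses whose vectors point in a similar direction (high cosine similarity) are assigned to the same topic.

```python
from math import sqrt

def cosine_similarity(a, b):
    # Cosine similarity: 1.0 means identical direction, 0.0 means unrelated.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sqrt(sum(x * x for x in a))
    norm_b = sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def group_by_topic(embeddings, threshold=0.8):
    # Greedy grouping: each response joins the first existing topic whose
    # representative vector is similar enough, otherwise it starts a new topic.
    topics = []  # list of (representative_vector, member_indices)
    for i, vec in enumerate(embeddings):
        for rep, members in topics:
            if cosine_similarity(vec, rep) >= threshold:
                members.append(i)
                break
        else:
            topics.append((vec, [i]))
    return [members for _, members in topics]

# Toy vectors standing in for language-model embeddings of three responses.
toy_embeddings = [
    [1.0, 0.1],   # e.g. "more home office"
    [0.9, 0.2],   # e.g. "flexible remote work"
    [0.1, 1.0],   # e.g. "better coffee in the kitchen"
]
print(group_by_topic(toy_embeddings))  # the first two responses share a topic
```

This captures the probabilistic nature mentioned above: assignments depend on a similarity threshold, not on exact keyword matches.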
5. Is the use of AI mandatory?
No. The functions are optional. Companies can disable them – using Honestly is fully possible without AI.
6. What data is processed and where?
Only open-text responses from surveys are processed; these may contain personal information. The data is encrypted and processed on Microsoft servers in the EU.
7. How does Honestly address the risk of personal data being included in free-text responses?
We recommend using free-text fields intentionally and with specific questions to minimise the likelihood of personal data being entered. If desired, the Topics AI can be applied only to certain questions or disabled entirely.
8. Is a Data Protection Impact Assessment (DPIA) necessary?
Depending on your company’s assessment, a DPIA may be required – for example, when conducting extensive analysis of employee feedback. Upon request, we provide information to support this process, such as function descriptions, data flows, and sub-processor information.
9. How can Honestly support with legal reviews (e.g. with the works council or data protection team)?
We can provide:
- Function description (German/English)
- List of all subprocessors (incl. Microsoft)
- Technical description of the Topics AI
- Privacy policy
- Support with the DPIA
- Data processing agreement & TOMs
- FAQs for employees
- Deactivation option
10. What co-determination rights does the works council have?
According to §87 BetrVG, co-determination is required if technical systems are used, or could be used, to monitor behaviour or performance. Even if no active monitoring takes place, the mere possibility may be sufficient. Therefore, we recommend involving the works council as early as possible and keeping them continuously informed and engaged throughout the entire project. This creates transparency, fosters a trusting collaboration, and ensures the success of the project.
11. What risks exist when using LLMs (Large Language Models) in an HR context?
- Interpretation errors in semantic analyses
- Lack of explainability in decisions
- Entry of personal data by employees
- Perception as a monitoring tool
- Challenges in GDPR-compliant implementation
We address these risks through:
- Transparency
- Clearly defined data flows
- Data processing agreements with subprocessors
- Freedom of choice (enable/disable function)
- Training and help articles