Human-robot coexistence
South Korea projects human-robot coexistence by 2030; Japan's Moonshot program targets 2050. The privacy frameworks for these systems need to be designed now.
Your hospital deploys a care robot that reads patients' facial expressions, vocal tone, and body language to adapt its bedside manner. It looks like a friendly pet. Patients talk to it. They confide in it. They forget — or never understood — that every emotional signal they emit is being processed, stored, and potentially transmitted to third parties.
That is the privacy problem no cookie banner will solve. And it is arriving faster than most legal teams are prepared for.
The Gap Between Industrial Robots and Emotional Ones
Industrial robots operated behind cages. They welded, painted, assembled. They never encountered a human face, never heard a whispered confession, never triggered an emotional bond.
The next generation is different. These robots operate in hospitals, homes, shopping malls, and eldercare facilities. They are designed with human-like faces, expressive eyes, and voices calibrated to sound warm. They use affective computing — the ability to detect and interpret emotional states from biometric data including facial expressions, voice patterns, gait, posture, and physiological signals.
Japan's Moonshot R&D Program targets autonomous AI robots that "learn, adapt, evolve in intelligence and act alongside human beings" by 2050. South Korea projects human-robot coexistence by 2030. In healthcare specifically, robots with emotion-sensing capabilities are being developed to enhance patient self-efficacy during rehabilitation and chronic care.
This is not a theoretical concern. The research is unambiguous: people who lack social connections anthropomorphize non-human agents at higher rates, form attachment relationships with them, and disclose personal information more freely. Design choices that make robots warmer and more engaging — the very features that improve service quality — simultaneously amplify privacy risk.
What the GDPR and EU AI Act Actually Require
In the EU regulatory context (which shapes Swiss practice through adequacy decisions and market access), emotional robots face a layered set of obligations.
High-risk classification. Under Annex III of the EU AI Act, emotion recognition systems are classified as high-risk AI. This triggers the full set of Chapter III requirements: risk management systems, data governance, technical documentation, transparency obligations, human oversight, and accuracy/robustness standards.
Transparency at first interaction. Article 50 of the AI Act requires that users be informed they are interacting with an AI system "in a clear and distinguishable manner at the latest at the time of the first interaction or exposure." Deployers of emotion recognition systems must additionally inform exposed individuals about the system's operation and process personal data in accordance with the GDPR.
Biometric data as special category. Facial recognition, voice recognition, gait analysis, eye tracking — all standard inputs for emotional robots — constitute biometric data under GDPR Article 4(14). When that data is processed to uniquely identify a person, it falls under the special-category regime of Article 9, which prohibits processing by default unless explicit consent or another Article 9(2) exception applies.
Informed, specific, freely given consent. GDPR Article 4(11) defines consent as "freely given, specific, informed and unambiguous." Silence, pre-ticked boxes, or inactivity do not count. For special-category biometric data, the standard rises to explicit consent. The controller must be able to demonstrate that consent was given (Article 7(1)).
Plain language obligation. Article 12 GDPR requires that all information be provided "in a concise, transparent, intelligible and easily accessible form, using clear and plain language." Article 12(7) even permits standardized, machine-readable icons to summarize processing.
The Consent Paradox
Here is the structural problem: consent for robot data processing is requested once, before the relationship begins. But the privacy risk escalates over time as the user develops an emotional bond with the robot.
A patient who gave informed consent on day one may, three months later, be confiding health fears to the robot in front of family members — generating sensitive data that was never contemplated in the original consent scope. The legal framework assumes a rational, informed data subject making a one-time decision. The psychological reality is a gradually deepening relationship where legal considerations fade into the background.
Bill Gates predicted "a robot in every home." If that happens, it will not be practical for corporate lawyers to visit each household to obtain consent. The robots themselves must be designed to conduct consent-related privacy communication — an advanced, embodied version of the cookie consent dialogue.
How to do that without making it meaningless is the open question.
Proactive Privacy Communication: Five Tools That Work
The proactive law approach, endorsed by the European Economic and Social Committee as a route to better EU regulation, offers a framework. Instead of reactive compliance (minimum legal text, maximum obscurity), it focuses on prevention and promotion: designing communication so users actually understand their rights and can exercise them.
Applied to emotional robots, five concrete tools emerge from information design research.
1. Multichannel Communication
A robot is not a web page. It has a voice, a screen, and physical gestures. Privacy communication should use all three channels simultaneously.
Imagine a care robot that needs to explain its data processing: it displays a clear animation on its screen showing what data flows where, narrates the animation in plain spoken language, and uses physical gestures to emphasize key points. This is not science fiction — it is an extension of "talking comic contracts," legally binding agreements designed for people with limited literacy that combine visual, textual, and audio elements.
The key insight: when information is available through alternative channels, even users who would never read a privacy policy can make informed choices.
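To make the architecture concrete, here is a minimal sketch in Python. Every class and method name is hypothetical (there is no standard care-robot SDK); the point is the design principle: a single disclosure object, authored once, rendered simultaneously on every output channel.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class PrivacyDisclosure:
    """One privacy message, authored once for every output channel."""
    topic: str              # e.g. "facial-expression data"
    screen_animation: str   # ID of the on-screen data-flow animation
    spoken_script: str      # plain-language narration matching the animation
    gesture_cue: str        # physical emphasis, e.g. pointing at the screen

class ConsoleRobot:
    """Stand-in for a real robot platform; each channel just prints here."""
    def show(self, animation: str) -> None:
        print(f"[screen]  playing {animation}")
    def say(self, script: str) -> None:
        print(f"[voice]   {script}")
    def perform(self, cue: str) -> None:
        print(f"[gesture] {cue}")

def deliver(robot: ConsoleRobot, d: PrivacyDisclosure) -> None:
    """Present the same disclosure on every channel at once, so a user who
    ignores one channel can still receive it through another."""
    robot.show(d.screen_animation)
    robot.say(d.spoken_script)
    robot.perform(d.gesture_cue)

deliver(ConsoleRobot(), PrivacyDisclosure(
    topic="facial-expression data",
    screen_animation="data-flow-faces-v1",
    spoken_script="I read your facial expressions to adjust how I speak to you. "
                  "Here is where that data goes and how long it is kept.",
    gesture_cue="point-at-screen",
))
```

Authoring the channels together has a second benefit: the narration can never drift from what the animation shows, because both come from the same record.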
2. Plain Language
Legal privacy communication defaults to legalese layered with technical jargon. For emotional robots, this combination is fatal to comprehension.
Plain language is not imprecise language. As legal writing scholar Joseph Kimble argues, plain language is actually more accurate than traditional legal writing because it reveals ambiguities and errors that complex prose tries to hide. The GDPR itself mandates "clear and plain language" in Article 12 — yet most privacy policies read as if that article does not exist.
For robot communication, every privacy-related statement — spoken or displayed — should be written at a reading level appropriate for the most vulnerable anticipated user group.
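That requirement can be made testable. The sketch below uses the Flesch-Kincaid grade formula as a rough proxy for reading level; a production pipeline would use a validated readability tool plus human testing with the target population, but the principle is the same: no privacy text ships until it clears the target level.

```python
import re

def count_syllables(word: str) -> int:
    # Rough heuristic: count groups of consecutive vowels (over-counts silent e).
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def fk_grade(text: str) -> float:
    """Flesch-Kincaid grade = 0.39*(words/sentences) + 11.8*(syllables/words) - 15.59."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 0.39 * len(words) / sentences + 11.8 * syllables / len(words) - 15.59

def check_disclosure(text: str, max_grade: float = 6.0) -> None:
    """Gate: refuse to ship privacy text above the target reading level."""
    grade = fk_grade(text)
    if grade > max_grade:
        raise ValueError(f"reading level {grade:.1f} exceeds target {max_grade}")

check_disclosure("I will remember your face so I can greet you. "
                 "You can ask me to forget it at any time.")
```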
3. Tone of Voice
Legal documents almost universally adopt a tone that is formal, impersonal, technical, and legalistic. That tone actively discourages engagement. Users tune out.
Robots can choose a different tone. A healthcare robot communicating privacy choices to an elderly patient should use a warm, respectful, and reassuring tone — not because warmth is manipulative here, but because it is the only way the information will actually be received. Tone applies to speech, on-screen text, and even gestures: a friendly expression reinforces openness, while overly formal body language signals that the information is unimportant.
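In engineering terms, tone becomes a deliberate, reviewable configuration rather than a side effect. A hypothetical sketch (every field name is illustrative):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ToneProfile:
    """Tone as an explicit design artifact, reviewable by legal and UX alike."""
    speech_rate_wpm: int   # slower speech for older or stressed users
    warmth: str            # phrasing register, vs. clinical neutrality
    formality: str         # "plain" keeps legal content approachable
    gesture_style: str     # open postures reinforce that the topic matters

ELDERCARE_PRIVACY = ToneProfile(speech_rate_wpm=110, warmth="high",
                                formality="plain", gesture_style="open")
RETAIL_PRIVACY    = ToneProfile(speech_rate_wpm=140, warmth="medium",
                                formality="plain", gesture_style="neutral")
```

A profile like this can be attached to each deployment context and audited alongside the privacy text itself.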
4. Visualization
Privacy policies are abstract. Risks are hypothetical. Visualization makes them concrete.
A robot's screen can show an animated scenario: "Here is what happens when I collect your facial expression data. It goes here. It is stored for this long. These people can access it." Design patterns — repeatable solutions to common comprehension problems — can structure the visual presentation so it is consistent, predictable, and genuinely informative.
Article 12(7) of the GDPR already envisions standardized icons for processing summaries. Robots can go further: interactive visual explanations that adapt in real time.
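Here is a sketch of the record that could sit behind such an explanation (all field names are assumptions): each processing activity carries exactly the facts the screen must show, and the spoken caption is generated from the same record so words and visuals cannot diverge.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class DataFlow:
    """One processing activity, holding exactly the facts the screen must show."""
    data_type: str               # e.g. "facial expression data"
    destination: str             # e.g. "the hospital's own server"
    retention: str               # e.g. "30 days, then deleted"
    recipients: tuple[str, ...]  # e.g. ("your care team",)

def narration(flow: DataFlow) -> str:
    """Generate the spoken caption from the same record the animation uses."""
    return (f"When I collect your {flow.data_type}, it goes to {flow.destination}. "
            f"It is kept for {flow.retention}. "
            f"It can be seen by {' and '.join(flow.recipients)}.")

print(narration(DataFlow("facial expression data", "the hospital's own server",
                         "30 days, then deleted", ("your care team",))))
```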
5. Tailoring
One-size-fits-all privacy communication fails because users have different literacy levels, cognitive abilities, language preferences, and emotional states. The same emotional capabilities that create privacy risks also enable the solution: a robot that can detect a user's confusion, frustration, or disengagement can adapt its privacy communication accordingly.
This might mean switching from text to voice, simplifying language, offering a visual explanation instead of a verbal one, or pausing to ask whether the user has questions. The robot learns from interactions over time and refines its approach.
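As a sketch, the control loop might look like the following. The detector and presentation methods are hypothetical stand-ins for real affect-recognition and rendering components; the design principle is to escalate through progressively more accessible presentations, and never to treat unconfirmed silence as understanding.

```python
import random

class Robot:
    """Stub platform: presentation methods print; the affect detector is
    simulated with randomness where a real system would read sensor signals."""
    def show_text(self, msg): print(f"[screen] {msg}")
    def speak_slowly(self, msg): print(f"[voice, slow] {msg}")
    def play_visual_story(self, msg): print(f"[animation] {msg}")
    def detects_confusion(self) -> bool:
        return random.random() < 0.3   # placeholder for a frown/hesitation signal
    def ask(self, q): print(f"[voice] {q}")
    def escalate_to_human(self): print("[system] flagging for a human explainer")

def explain_privacy(robot: Robot, disclosure: str) -> bool:
    """Try progressively more accessible channels; stop when the user shows
    understanding, otherwise hand over to a human rather than assume consent."""
    for present in (robot.show_text, robot.speak_slowly, robot.play_visual_story):
        present(disclosure)
        if robot.detects_confusion():
            continue                    # adapt: switch channel and simplify
        robot.ask("Do you have any questions about this?")
        return True
    robot.escalate_to_human()
    return False

explain_privacy(Robot(), "I notice your mood to adjust how I help you. "
                         "You can turn this off at any time.")
```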
Swiss Implications
For Swiss practitioners, the relevance is immediate. The revised Federal Act on Data Protection (nFADP), effective since September 2023, aligns closely with GDPR principles on consent, transparency, and data minimization. Swiss healthcare providers deploying care robots face obligations under both the nFADP and sector-specific patient data protection requirements.
The EU AI Act's extraterritorial reach means that any Swiss entity whose AI-enabled robots affect individuals in the EU must comply. And given Switzerland's interest in maintaining its EU data protection adequacy status, the proactive approach described here is not gold-plating — it is baseline readiness.
More practically: if your client is deploying robots in eldercare, hospital, or retail settings, the privacy communication design is not an afterthought to be handled by the IT department. It is a legal design challenge that requires collaboration between legal, UX, information design, and engineering teams from day one.
What You Should Be Doing Now
Emotional robots at scale are not here yet. But the regulatory framework is already in place, and the design decisions being made today will determine whether future deployments are compliant or catastrophically exposed.
Three actions for legal teams advising technology companies or deployers:
Audit your client's robot communication design. If the privacy communication is a wall of text on a screen, it fails the GDPR plain language requirement and will certainly fail vulnerable user populations. Push for multichannel, plain-language, visualized communication as a compliance requirement, not a nice-to-have.
Map the consent lifecycle. One-time consent is legally fragile when the user-robot relationship evolves over months. Design consent mechanisms that can be revisited, refreshed, and re-confirmed as the relationship deepens and the data processing scope changes; a minimal sketch of such a record follows below.
Address the anthropomorphism risk explicitly. If your client is deliberately designing robots to be warm, empathetic, and engaging — and they should be, for usability — then the privacy communication must be proportionally stronger. Warmth without transparency is manipulation, and regulators will treat it as such.
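To illustrate the consent-lifecycle point, here is a minimal consent-record sketch (field names are assumptions, not any particular product's schema): consent carries an explicit purpose scope and a shelf life, and either scope drift or elapsed time triggers re-confirmation instead of silent continuation.

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class ConsentRecord:
    """Consent with an explicit scope and a shelf life, built to be revisited."""
    subject_id: str
    purposes: set[str]                            # what the user actually agreed to
    given_on: date
    refresh_after: timedelta = timedelta(days=90)

    def needs_refresh(self, today: date, requested_purpose: str) -> bool:
        # Re-confirm on scope drift OR simple age; never continue silently.
        out_of_scope = requested_purpose not in self.purposes
        stale = today - self.given_on > self.refresh_after
        return out_of_scope or stale

record = ConsentRecord("patient-042", {"adapt bedside manner"}, date(2025, 1, 10))
# Three months in, the robot is asked to share mood trends with family members:
assert record.needs_refresh(date(2025, 4, 15), "share mood trends with family")
```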
Privacy-by-Design Checklist for Emotional AI
Drawn from the tools and actions above:
- Privacy communication uses all available channels: screen, voice, and gesture.
- Every privacy statement, spoken or displayed, passes a plain-language test for the most vulnerable anticipated user group.
- Tone of voice is specified per deployment context, not left to chance.
- Data flows are visualized: what is collected, where it goes, how long it is kept, who can access it.
- Communication tailors itself to detected confusion, frustration, or disengagement.
- The consent lifecycle is mapped, with triggers for refresh and re-confirmation as scope changes.
- Anthropomorphic warmth is offset by proportionally stronger transparency.

The gap between what the law requires and what current practice delivers is wide. Proactive privacy communication design is how you close it — before the regulator or the plaintiff does it for you.