Meta Ray-Ban Display: five obvious security risks to businesses

And five measures companies should take

Meta Ray-Ban Display is a range of smart glasses with a mini HUD (head-up display) in the lens, a camera, microphones and AI assistance.

This makes them attractive (navigation, translation, augmented reality), but also tricky in a business context: discreet personal recordings, cloud processing, possible third-party review of content, and a massive data leakage lever - from whiteboards to train journeys to customer appointments - even in critical infrastructure, defence, finance or healthcare environments.

What is Meta Ray-Ban Display?

Meta Ray-Ban Display is the latest stage in the Ray-Ban smart glasses collaboration between Meta and EssilorLuxottica – with a small display in the right lens that shows “glanceable” content such as messages, maps or call information while the wearer continues to see their surroundings.

It’s similar to Meta's futuristic AR prototype "Orion", but is a more productised offshoot of the consumer smart glasses line - just with a HUD. Meta continues to add new AI features to its latest Display hardware, which premiered in 2025 - including controversial plans for facial recognition.

What can smart glasses do?

‘Smartphone moments’ without a smartphone

In hands-on reports, the HUD is described primarily as being used for text messages, pedestrian navigation, video calls and "quick glances" - in other words, the interactions that would otherwise prompt a reflexive reach for a smartphone.

Translation & context AI

The display version is positioned as a device that provides real-time translations and visual AI responses in everyday situations. For the Ray-Ban Meta product line (without display), live translation, object/scene recognition and "Hey Meta" as an AI entry point have also been part of public product communication and reporting since the 2023/24 product generation.

Camera, audio, livestream – the ‘creator DNA’

The 2023 Ray-Ban Meta generation was presented by Meta/EssilorLuxottica as glasses with live streaming, Meta AI, a 12 MP ultra-wide camera, 1080p video and improved microphones (five in total). These creator features remain the baseline - the display variant simply adds the "head-up level" on top.

How does it work?

Sensors & interaction

Depending on the version, the glasses combine a camera, microphones, open-ear speakers and controls on the arm. The display version also features a monocular HUD in the right lens.

Control: voice, touch – and EMG gesture band

Reports on the display glasses also mention a wristband (Neural Band) that detects gestures via surface EMG on the wrist (pinch/swipe equivalents). This lowers the inhibition threshold for use: less "Hey Meta" in meetings, more discreet interaction.

App ecosystem and cloud processing

In the 2025 debate surrounding Ray-Ban Meta glasses, when the latest hardware first emerged, it was widely reported that Meta AI functions could be active by default and that voice interactions could be stored - reportedly for up to a year - with users only able to take corrective action by manually deleting the data.

This has obvious implications for privacy and security: as soon as video, images or audio are processed for AI features, the "secure meeting room" is potentially no longer secure. Wearables capture a great deal of data, often transmitting it to cloud-based third-party infrastructure, and thus create new attack surfaces and compliance risks.

Security risks for businesses

Risk 1: Covert capture – discreet recording of individuals

Smart glasses are deliberately inconspicuous; it is often difficult to tell whether someone is filming or recording. Because the recording LED may be small or easily concealed, this presents a fundamental problem for the workplace. Data protection authorities, including those in Ireland and Italy, have already expressed concerns.

Whiteboards, architectural diagrams, roadmaps, prototypes, customer data on screens - everything can be documented as a POV (point of view) video in passing, without anyone holding up a mobile phone.

Risk 2: Circumvention of privacy signals (LED stickers, mods)

In 2025, reports circulated about special stickers designed to cover the recording LED on Meta glasses becoming available. At the same time, it was reported that Meta was attempting to use software to detect when the LED was blocked.

Other reports mention hardware modifications that could permanently disable the LED; this is particularly relevant for business security because it would render even "LED control" as a mitigation measure ineffective.

If indicators can be manipulated, smart glasses policy becomes an insider threat issue.

Risk 3: Who sees the recordings?

At the beginning of March 2026, it was reported that recordings from Ray-Ban Meta contexts had been viewed by employees of a Meta subcontractor as part of data annotation (including highly sensitive content). Meta is now being sued.

According to reports, Meta confirmed that content shared by users with Meta AI may be reviewed in part by contractors.

This should be a wake-up call for companies. As soon as employees use AI features "just briefly" (e.g. translation, "What do I see here?", visual assistance), an internal situation could turn into a third-party data flow – with all the consequences that entails for trade secrets and regulatory obligations.

Risk 4: Live streaming & real-time exfiltration

Livestreaming was explicitly mentioned as a feature in the Ray-Ban Meta Generation 2023 – including switching between glasses camera and phone camera. This is particularly critical from a business perspective because exfiltration is no longer "copying files" but "broadcasting meetings live".

A single POV live from a laboratory, production facility or boardroom can reveal more than a classic data leak because context (environment, conversations, whiteboard) is also transmitted.

Risk 5: Data protection, co-determination, compliance overhead

Smart glasses are a minefield in terms of labour law and data protection law.

The legal analysis platform Lexology points to concerns on the part of supervisory authorities as to whether data subjects would receive sufficient notice, and whether smart glasses in the workplace could quickly lead to the "routine collection" of personnel or customer data.

In addition, Meta discussed policy changes in 2025 that would allow voice data to be stored by default, with users only able to control this retrospectively via deletion in the app - from a GDPR perspective (data minimisation, transparency), this raises serious questions.

Specific implications for IP and trade secrets

The ‘whiteboard moment’: a POV clip is sufficient to capture architecture, roadmap, customer names and credentials on one screen.

The ‘factory walkthrough’: Production processes, supplier labels, machine parameters - everything is recorded with context. Legal assessments explicitly warn of confidentiality risks when wearables are used in the workplace.

The customer appointment: Personal data and contract details are recorded; at the same time, it is unclear which third parties (cloud/annotators) could gain access once AI functions are used.

The inconspicuous livestream: From an IP perspective, the worst combination: an inconspicuous device plus real-time broadcast. The livestream feature is explicitly highlighted for the product line.

What companies should do now

1) Define ‘no-camera zones’ – and make them visible

Set clear zones: boardrooms, R&D, security ops, customer data areas. The point is not to dismiss smart glasses out of hand, but to address their particular inconspicuousness (the notice/consent problem).

2) Policy update: Wearables ≠ smartphones

Explicitly add: smart glasses, camera wearables, AI wearables (including recording/streaming ban in defined areas). Many policies talk about mobile phones/cameras – but smart glasses are "always ready" and difficult to detect.

3) Visitor/supplier rules: ‘Glasses check’ like ‘badge check’

For supplier tours, audits, customer days: wearable notice in the invitation, security notices at reception, issuance of neutral storage sleeves if necessary. The legal core is "notice" – and this is particularly difficult to achieve with glasses.

4) Treat AI features as a third-party data flow

If voice/image interactions can be stored and manual reviews occur in some cases, it must be clear what data is allowed to enter such systems in the first place. Reports of annotation viewing are a strong argument for treating the issue as a third-party risk.

5) Train managers and teams

Recording LEDs can be overlooked, and there are reports of stickers and modifications. This means that visible filming is no longer the norm - and "you would have noticed" is not a security concept.

A checklist for CISOs

This article first appeared in German on Computing’s sister site Computing Deutschland