
Hop-by-hop vs client-side encryption in messaging: what a proxy can see (Part 1)

If you work in enterprise or government you’ve heard this sentence a thousand times:

“It’s HTTPS. Everything is encrypted in transit.”

True. But it hides the part that matters in SaaS messaging:

TLS protects the network hop — not the lifetime of your message inside the provider.

So the real question isn’t “Is it encrypted on the wire?” (it is).

The real question is:

How many places does your message exist as plaintext while it is being processed, routed, indexed, and stored?

That’s what people mean (often without realizing) when they compare:

  • Hop-by-hop encryption (TLS between components)
  • End-to-end content encryption (message body stays encrypted beyond TLS)

This post is a learning exercise using two examples — Microsoft Teams messaging and Webex Messaging — not to “rank” them, but to understand the architectural consequences.

Lab warning

Only do TLS inspection in a lab or with explicit authorization. Decrypting HTTPS traffic can expose sensitive data and tokens. Always redact identifiers before sharing screenshots.
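Redaction is easy to script before anything leaves the lab. A minimal sketch (the patterns are illustrative, not exhaustive) that blanks out GUIDs and bearer tokens in captured text:

```python
import re

# Patterns for common identifiers in captured traffic (illustrative, not exhaustive):
PATTERNS = [
    # Tenant/user/conversation GUIDs
    (re.compile(r"[0-9a-fA-F]{8}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-"
                r"[0-9a-fA-F]{4}-[0-9a-fA-F]{12}"), "<GUID>"),
    # Authorization bearer tokens
    (re.compile(r"(?i)bearer\s+[A-Za-z0-9._\-]+"), "Bearer <REDACTED>"),
]

def redact(text: str) -> str:
    """Blank out GUIDs and bearer tokens before a capture leaves the lab."""
    for pattern, replacement in PATTERNS:
        text = pattern.sub(replacement, text)
    return text

print(redact("tenantId=3fa85f64-5717-4562-b3fc-2c963f66afa6 Authorization: Bearer eyJhbGciOi.abc"))
# -> tenantId=<GUID> Authorization: Bearer <REDACTED>
```

Run captured request/response bodies through something like this before pasting them into a post or ticket.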

Two encryption concepts that get mixed up

1) Hop-by-hop encryption (TLS as the main protection)

In hop-by-hop, the message is protected by TLS while it travels between endpoints. But each endpoint that needs to do work (validate tokens, apply policy, store, index, etc.) has to decrypt the traffic to see what it contains.

Microsoft describes this model clearly: Teams data is encrypted in transit between clients and services and between services, using industry-standard technologies like TLS (and SRTP for media). 

That’s strong transport security — and it also implies a key architectural reality:

If a service needs to process content, it must see it decrypted in memory at that hop.

2) End-to-end content encryption (the message body is encrypted before TLS)

In content E2E, the message body is encrypted by the client application before it is sent. TLS still encrypts transport, but the payload inside TLS is already ciphertext.

Cisco describes this for Webex: Webex apps encrypt user-generated content (including messages and files) before transmitting over TLS, and keys are generated/distributed by KMS. 

So the SaaS can still route and store messages — but it’s handling ciphertext + metadata by design.
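The difference can be sketched in a few lines of Python. This is a toy illustration only (a one-time pad via `secrets` stands in for the real AES/JWE machinery, and the `kms://` URL is made up); the point is that in the E2E model the payload handed to the TLS layer is already ciphertext:

```python
import json
import secrets

def client_encrypt(plaintext: bytes) -> tuple[bytes, bytes]:
    """Toy one-time pad standing in for real content encryption (AES-GCM / JWE)."""
    key = secrets.token_bytes(len(plaintext))
    ciphertext = bytes(p ^ k for p, k in zip(plaintext, key))
    return key, ciphertext

message = b"hallo"

# What each model hands to the TLS layer:
hop_by_hop_payload = json.dumps({"content": message.decode()})   # plaintext inside TLS

key, ct = client_encrypt(message)
e2e_payload = json.dumps({"content": ct.hex(),                   # ciphertext inside TLS
                          "encryptionKeyUrl": "kms://example/keys/demo"})

# A proxy (or SaaS edge) that terminates TLS sees the payloads exactly as-is:
print(hop_by_hop_payload)   # message text readable
print(e2e_payload)          # only ciphertext + a key reference
```

In both cases TLS wraps the payload on the wire; only in the second case does terminating TLS leave you with ciphertext.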

Why should you care? The “plaintext lifetime” problem

 Here’s the part that tends to resonate:

TLS protects data in transit. Your concern is data in use.

A message in SaaS isn’t just “sent” and “received”. It is:

  • accepted by an edge gateway (TLS terminates)
  • authenticated/authorized (token validation)
  • routed to the correct message service / partition / region
  • processed for delivery and sync
  • often indexed for search
  • governed for retention / eDiscovery / auditing
  • stored and replicated

Microsoft’s model explicitly states encryption is used between services as well. 

That’s good, but it still means: services exist that can read plaintext, otherwise features like indexing/search/compliance can’t be implemented.

So the “why-care” isn’t “TLS is weak.” It’s this:

In hop-by-hop designs, plaintext exists at multiple internal processing points.

 

In content-E2E designs, plaintext is minimized to endpoints and explicit, controlled exception paths.

That difference changes:

  • what a corporate TLS inspection proxy could see,
  • what provider-side services could access by default,
  • and why organizations start asking about customer-controlled keys or even own KMS.

Hands-on: reproduce this yourself with a local HTTPS proxy (Fiddler)

Hop-by-hop (simplified)

Client → TLS → API gateway decrypts → internal service → TLS → internal service → store/index/compliance

  • Every network hop can be TLS.
  • But every service hop that needs to operate on content sees plaintext in memory.

Content E2E (simplified)

Client encrypts message body → TLS → SaaS routes/stores ciphertext → recipient decrypts locally

  • TLS still protects transport.
  • But decrypting TLS alone does not reveal the message body.

Cisco explicitly positions this as protection even if TLS is intercepted (because content is encrypted above TLS).

Which raises a follow-up question: at what point should the global infrastructure operated by providers like Cisco and Microsoft itself be treated as a "semi-public" internetwork?

Goal: Treat Fiddler as “a trusted TLS termination point” (like a corporate proxy or a SaaS edge gateway) and answer one question:

If TLS terminates, is the message body immediately readable — or is it still ciphertext?

Step 1 — Setup (high level)

  1. Install Fiddler.
  2. Enable HTTPS decryption (test environment; be careful with tokens).
  3. Manually trust the Fiddler certificate.
  4. Start capture.
  5. Use both the native Microsoft Teams and Webex App clients in parallel (send 1–2 test messages).
  6. Filter on the relevant endpoints and inspect the response bodies.
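Fiddler can export a capture in HTTPArchive (HAR) format, which makes step 6 scriptable. A minimal stdlib-only filter (the filename and host list are placeholders for your own trace):

```python
import json

HOSTS = ("teams.microsoft.com", "wbx2.com")  # endpoints of interest

def interesting_entries(har_path: str):
    """Yield (url, response body) for captured requests to the chosen hosts."""
    with open(har_path, encoding="utf-8") as f:
        har = json.load(f)
    for entry in har["log"]["entries"]:
        url = entry["request"]["url"]
        if any(host in url for host in HOSTS):
            body = entry["response"].get("content", {}).get("text", "")
            yield url, body

# for url, body in interesting_entries("capture.har"):
#     print(url, body[:120])  # remember to redact tokens before sharing
```

This only walks the standard `log.entries` structure of a HAR file; any HTTPS proxy that exports HAR works the same way.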
 

 

Evidence Callout A — Teams message fetch shows readable message bodies after TLS termination

In your trace, Teams pulled messages via a URL shaped like:

GET https://teams.microsoft.com/api/chatsvc/emea/v1/users/…/conversations/…/messages?…

And in the JSON response, the message body appeared as readable HTML:

    {
      "messagetype": "RichText/Html",
      "content": "<p>hallo</p>",
      "composetime": "2025-..Z",
      "tenantId": "<TENANT_GUID>",
      ...
    }
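Because the body is plain JSON after TLS termination, extracting the message text needs nothing beyond a JSON parser and an HTML text extractor. A minimal sketch against a response shaped like the one above:

```python
import json
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Strip the RichText/Html markup down to the visible text."""
    def __init__(self):
        super().__init__()
        self.parts = []

    def handle_data(self, data):
        self.parts.append(data)

response = json.loads('{"messagetype": "RichText/Html", "content": "<p>hallo</p>"}')

extractor = TextExtractor()
extractor.feed(response["content"])
print("".join(extractor.parts))  # -> hallo
```

No keys, no reverse engineering: once the trusted hop has decrypted TLS, the content field is directly consumable.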

What this proves (and what it doesn’t):

✅ It proves that if a trusted component terminates TLS, Teams message content can be visible at the application layer.

❌ It does not prove “Teams is unencrypted.” Microsoft explicitly states Teams data is encrypted in transit using TLS. 

Why this matters architecturally:

If message content is readable at the SaaS edge (after TLS termination), it can be processed by internal systems for things like compliance, indexing, and policy enforcement. That’s one reason DLP is often simpler to implement centrally (we’ll cover that in Part 2).

Evidence Callout B — Webex message fetch shows ciphertext + KMS key reference after TLS termination

In your Webex trace, message content looked fundamentally different. You fetched activities via:

POST https://conv-a.wbx2.com/conversation/api/v1/bulk_activities_fetch?includeChildren=true

The response contained a message object where the "content" / "displayName" fields looked like encrypted blobs (JWE-like structure), and a key reference such as:

    {
      "objectType": "comment",
      "displayName": "eyJhbGciOiAiZGlyIiwgImtpZCI6ICJrbXM6Ly8...<ciphertext>..."
    },
    "encryptionKeyUrl": "kms://.../keys/<key-guid>"

What this proves:

✅ Even after TLS termination (Fiddler decrypting HTTPS), the message body is still ciphertext.

✅ This matches Cisco’s published architecture: Webex encrypts user-generated content before transmitting it over TLS, using keys managed by KMS. 
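You can see exactly what is (and isn't) exposed by decoding just the protected header of that JWE-like blob: the first dot-separated segment is base64url-encoded JSON, which reveals the algorithm and the kms:// key id, while the remaining segments stay opaque. A sketch using a hypothetical blob of the same shape (the key URL below is invented):

```python
import base64
import json

def jwe_header(blob: str) -> dict:
    """Decode the protected header (first segment) of a JWE compact serialization."""
    header_b64 = blob.split(".")[0]
    header_b64 += "=" * (-len(header_b64) % 4)   # restore stripped base64url padding
    return json.loads(base64.urlsafe_b64decode(header_b64))

# Hypothetical blob shaped like the captured displayName field:
header = {"alg": "dir", "kid": "kms://example.com/keys/demo-guid"}
blob = (base64.urlsafe_b64encode(json.dumps(header).encode()).decode().rstrip("=")
        + ".." + "<ciphertext>" + "." + "<iv-tag>")

print(jwe_header(blob))  # shows alg and key id -- the message text itself stays ciphertext
```

The header tells you which key would decrypt the content; without access to that key via KMS, the proxy holds ciphertext and metadata only.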

“But I don’t see KMS traffic… not even in Wireshark”

That’s a common moment of confusion, and it’s totally normal.

A kms://… identifier in the payload is not an HTTPS URL you can simply “follow”. It’s a key identifier scheme used by the application and the platform. Actual key retrieval/exchange can be:

  • performed earlier in the session (before your capture window),
  • batched/cached,
  • performed over separate channels,
  • and in any case encrypted on the wire.

Optional Wireshark exercise (lightweight, not a rabbit hole)

If you inspect TLS handshakes, you’ll often see a lot of SNI values (server names) for different services (conv, wdm, identity, etc.).

Your SNI list includes entries like conv-a.wbx2.com, wdm-a.wbx2.com, identity.webex.com, etc. That's expected: the app talks to multiple services. But you typically won't see "kms://" because that string lives at the application layer (inside encrypted traffic), not as a hostname.

The “aha” conclusion (why this isn’t just a proxy trick)

Both platforms can honestly say:

    • “We encrypt data in transit with TLS.” (true for both) 

But your experiment highlights the architectural difference:

    • Hop-by-hop: When TLS terminates at a trusted hop, message bodies can become readable for processing.
    • Content E2E: Even when TLS terminates, message bodies remain ciphertext unless you have the content keys.

This is why people start asking about key control.

Why “bring your own keys / own KMS” even enters the conversation

If you accept that plaintext exists in hop-by-hop processing, your risk management is mostly about:

  • limiting who can access plaintext inside the provider boundary,
  • auditing those access paths,
  • and ensuring plaintext doesn’t leak via logs/debugging/tooling.

If you want to reduce that exposure, you move in the direction of:

  • encrypting content above TLS (client-side),
  • and controlling who can decrypt via key management.

Microsoft’s Customer Key is an example of customer-controlled root keys for encryption at rest, and Microsoft explicitly notes you authorize Microsoft 365 to use those keys for value-added services like eDiscovery and search indexing. 

Webex’s Hybrid Data Security is an example where key creation/storage (and associated compliance/search services) can be hosted in the customer’s environment, keeping keys under customer control. 

Different approaches, same underlying driver:

You’re deciding how much plaintext you want inside the provider boundary — and who controls the keys that can produce plaintext.

 

Takeaway: If you can decrypt the transport tunnel, hop-by-hop often reveals message text. Client-side content encryption does not: message bodies remain ciphertext unless you also hold the content keys.

Next: if content stays encrypted, how does DLP still work?

Now we’re back at the original question that triggered this investigation: If content is encrypted before it enters the SaaS workflow, how do DLP, eDiscovery, retention, and auditing still work in enterprise/government environments? ➡️ Read Part 2: Why DLP feels easier in Teams than in Webex — what changes when message content is client-encrypted.